Showing posts with label API Testing. Show all posts

Monday, June 16, 2025

Solution Approach - How to Automate Login Using the Magic Link?

 

This question was asked in The Test Chat community on 14th June 2025 at 22:42:

Hi All,
Has anyone implemented test automation for an application that uses a magic link for login?
If yes, could you please share the approach you followed?
Thanks in advance🙏🙏


My Analysis and Understanding

  1. The question asked is about automation.
  2. That is, how to automate the use of the link (hyperlink) generated for login.
  3. There are several ways to approach this, via
    1. GUI
    2. API
    3. Combination of GUI and API
  4. I prefer to use a combination of GUI and API first, without using any email client services or apps.
    • Why?
      • I want to keep it simple and quick, and not complicate the testing problem much more.


Solution and Approach

As I see it, this is not a complicated Test Engineering problem, unless the project and the floor do not support it.


I understand it is about using a link (hyperlink) to log into an application via automation.

How do you get this hyperlink?
  • It should be generated somewhere within the system's boundary.
  • Can you get that generated hyperlink without using any mail client and its services? How?
    • Do you have an endpoint that generates this hyperlink?
    • If so, this endpoint should also have some means to share the generated hyperlink for the login request.
    • Pick this generated hyperlink via an API request in your automation.
  • This should work!  Unless the test goal in the automation is to open some configured mail client and read the received mail.
    • But how many times will you do this in automation, if the goal is to open the configured mail client?
    • Is this a good use of automation?
If there is an endpoint to GET the hyperlink generated by the POST, it is a fairly simple task.

Say there is no endpoint with a GET request to give the generated hyperlink.  Then look at the response payload of the POST request that generates the login hyperlink.  Consume this hyperlink and use it in the GUI automation to log in.  Say there is no response payload in the POST request.  Then ask your team to build a GET endpoint to consume internally, and maintain it securely so that no external access is allowed.
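To make the idea concrete, here is a minimal sketch in Python. The payload shape and the `loginLink` key are assumptions for illustration; the real contract will differ per system. The extraction itself is plain logic: pick the hyperlink out of the POST response and hand it to the GUI automation.

```python
import re

def extract_login_link(payload: dict, key: str = "loginLink") -> str:
    """Pick the magic link out of the POST response payload.

    `key` is a hypothetical field name; replace it with whatever
    your endpoint's contract actually uses.
    """
    link = payload.get(key, "")
    if not re.match(r"^https?://", link):
        raise ValueError(f"no usable login link in payload: {payload!r}")
    return link

# A sample payload, shaped as a magic-link POST endpoint might return it:
sample = {"status": "sent", "loginLink": "https://app.example.test/login?token=abc123"}
link = extract_login_link(sample)
# In the GUI automation, the link is then simply opened, e.g. with
# Selenium: driver.get(link)
```

The same helper works whether the link comes from a dedicated GET endpoint or from the POST response payload; only the request that fetches the payload changes.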

Let us keep the testing problems simple!

 

Saturday, February 3, 2024

Performance Testing - The Unusual Ignorance in Practice & Culture

 

I'm continuing to share my experiences and learning for the 100 Days of Skilled Testing series.  I want to keep these short, as mini blog posts.  If you feel detailed insights and conversations are needed, let us get in touch.


The ninth question from season two of 100 Days of Skilled Testing is:

What are some common mistakes you see people making while doing performance testing?  How do they avoid it?


Mistakes or Ignorance?

It is a mistake when I do an action even though I'm aware that it is not right in the context.

I do not want to label what I share in this blog post as mistakes.  Rather, I call it ignorance, whether or not one has the awareness and the experience.

The ignorance described here is not tied just to the SDLC.  It is also tied to the organization's practice and culture, which can create problems.

For this blog post's context, I categorize the ignorance into two categories -- Practitioner and Organization.

  1. Practitioner's ignorance
    • Not understanding performance, performance engineering, and performance testing
      • When performance testing is mentioned, taking it as "it is load testing"
      • No awareness of what performance and performance engineering are
        • Going to the tools immediately to solve the problem while not knowing what the performance problem statement is
      • Be it web, API, mobile, or anything else,
        • Going to one tool or a set of tools and running tests
      • Not much thinking on how to design the tests in the performance testing being done
      • Ignoring Math and Statistics, and their importance in performance analysis
      • No idea of the system's architecture and how it works
        • Why is it the way it is?
      • The idea of end-to-end is extended and used in testing for performance, and then having a hard time understanding and interpreting the performance data
        • How many end-to-end flows have your tests identified?
        • Can we test for performance across all these identified and unidentified end-to-end flows?
      • Relying on resources/content on the internet and applying them in one's context without understanding them
      • No idea of the tech stack and how to utilize the testability offered by it in evaluating performance
      • Not using or asking for testability
      • Getting hung up on the two or three most spoken-about tools on the internet
      • Applying tools and calling it performance testing
      • Not attempting to understand the infrastructure and resources
        • How they impact and influence the performance evaluation and its data
      • Ideas about saturation of resources
        • Thinking of it as a problem
        • Thinking of it as not a problem
      • Not working to identify where the next bottleneck will be when solving the current bottleneck
      • What to measure?
      • How to measure?
      • When to measure?
      • What to look for when measuring?
      • Not understanding the OS, hardware resources, tech stacks, libraries, frameworks, programming languages, CPU & cores, network, orchestration, and more
      • Not knowing the tool and what it offers
        • I learn the tool every day; today, it is not the same to me compared to yesterday
          • I discover something new that I was not aware the tool offered
          • I learn new ways of using the tool with different approaches
      • No story in the report, with information/images that are self-describing to most who read it
      • And more; but the above resonates with most of us
  2. Organization's ignorance
    • At the org level, to start with, it is ignorance of Performance Engineering
      • Ignoring the practice of performance engineering in what is built and deployed
      • Thinking and advocating that increasing the hardware resources will increase and better the performance
        • In fact, it will deteriorate over a period of time no matter how much the resources are scaled up and added
      • Ignoring performance evaluation and its presence in the CI-CD pipeline
      • Insisting the performance tests in the CI-CD pipeline should not take beyond a few minutes
        • What is that "few minutes"?
      • Not prioritizing the importance of having requirements for Performance Engineering
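The Math and Statistics point above is easy to illustrate: the mean hides what the higher percentiles reveal. A small sketch with Python's standard library, over made-up response times:

```python
import statistics

# Hypothetical response times in milliseconds from one test run
response_times = [120, 110, 130, 115, 125, 118, 122, 119, 900, 1200]

mean = statistics.fmean(response_times)
# quantiles(n=100) gives the 1st..99th percentiles; index 94 is the 95th
p95 = statistics.quantiles(response_times, n=100)[94]

print(f"mean = {mean:.0f} ms, p95 = {p95:.0f} ms")
# The mean (~306 ms) looks tolerable; the p95 exposes the slow tail.
```

Two requests out of ten are an order of magnitude slower, yet the average barely moves; this is why reports built on averages alone mislead.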

Recently, I was asked a question - How to evaluate the login performance of a mobile app using a tool "x"?

In another case, I saw a controller having all the HTTP requests made when using a web browser, then running these requests and trying to learn the numbers using a tool.


I do not say this is the wrong way of doing it.  It is a start.

But we should NOT stay here, thinking this is performance engineering and that this is how to run tests for learning a performance aspect.


To end, performance is not just -- how [why, when, what, where] fast or slow?  If that is your definition, you are not wrong!  That is a start, and good for a start; but do not stick to it alone and call it performance.   It is capability.  It is about getting what I want in the way I have been promised and expect; this is contextual, subjective, and relative.  The capability leads to an experience.  What is the experience experienced?

Sometimes, serving the requests in what you call slow is performance.  What is slow, here?

The words fast and slow are subjective, contextual, and relative.  They are one small part of performance engineering.

That said, let me know: what have you been ignoring and unaware of in the practice of Performance Engineering & Testing?


Saturday, November 4, 2023

Exploring to know the Web API - 101

 

Here is what I look at when I'm exploring to learn an endpoint of a web API for the first time.  I'm talking of a Web API that uses the HTTP protocol.

  1. What is the purpose of this endpoint?
    • How does it help, and whom?
  2. Who uses this endpoint?
  3. Identify the host of the endpoint
  4. Identify the path of the endpoint
  5. Identify the API's endpoint
  6. Identify the version of the endpoint
    • Know the different versions that are available for the endpoint
      • Which are active?
      • Which are inactive?
      • Who are the consumers using these different versions of the endpoint?
      • What's the difference between these versions?
  7. Identify the HTTP method of the endpoint
    • Know the different HTTP methods this endpoint supports
  8. Identify the resources the endpoint interacts with for CRUD activity
  9. Know the HTTP request headers it needs, uses, and which are good to have
  10. Know the HTTP request payload or data and its format
  11. Know the HTTP response headers, and which are good to have
  12. Know the HTTP response status codes and what it should actually be returning
  13. Know the HTTP response content and the format returned
  14. Does this endpoint enforce any contract with the consumer?
  15. Know the intent of your test on this endpoint
    • One test in one request is useful unless the context demands otherwise
  16. Is there anything cacheable in the endpoint's response?
  17. How is the request payload structured?
  18. How is the response payload structured?
  19. What is the language in which the endpoint is written?
  20. What is the language in which the resources that the endpoint updates are written?
  21. Any framework used to build this endpoint?
On learning this, I continue to the next phase in exploring and knowing the web API.



Wednesday, January 26, 2022

Automation Strategy - How to Automate a Web UI Table for the Data Displayed in the Table

 

Use Case to Automate and a Problem Statement


I read the below question in The Test Tribe's Discord community server.  As I read it, I realized we all go through this when we think of automating a use case.  The credit for this question goes to Avishek Behera.


Picture: Description of a use case to automate and a problem statement

Here is the copy-paste of the use case and problem statement posted by Avishek Behera:

Hello everyone, here is a use case I came across while having a discussion on automating it.

A webpage has a table containing different columns,let's say employees table with I'd, name, salary , date, etc

It also has pagination in UI, we can have 20 rows in one screen or navigate to next 20 records based on how many total records present,it could go about 10 + pages and so on....


Problem statement:

How to validate the data displayed in table are correctly displayed as per column header , also correct values like names, amount etc. Use case is to validate data.

The data comes from an underlying service, different endpoints basically.

Now it's not about automation but about right and faster approach to test this data.

What are different ways can we think of?

I know this is a basic scenario but since I was thinking of different possible solutions.

One way my friend suggested to use selenium, loop through tables ,get values ,assert with expected. Then it is time consuming, is it right approach just to validate data using selenium?


These are the useful attributes of this question:

  1. It presents the context and the context information to the reader
  2. The availability of context information gave an idea of
    • What the API would look like
    • The request and its type
    • The response and its details
    • The consumer of this API
  3. It helped to imagine and visualize how the data would be interpreted by consumers to render
  4. I get to see what Avishek is looking for in the said context


Interpreting the Use Case and Problem Statement


What is it?
  • Looks like the consumer is a web UI interface
    • The mention of the Selenium library supports this interpretation
  • The response has data which is displayed in a table kind of web UI
  • There can be anywhere from no data to multiple rows of data displayed in the table
  • Pagination is available for this result in the UI
    • Whether pagination is available in the API request and response is not clear from the problem description
    • 20 rows are shown on one page of a table
    • The number of pages in the table can be more than one
    • The response will have a filter flag
      • I assume that the data displayed in the table can be validated accordingly
    • The response will have the data on the number of result pages
      • This makes the result on a page be of fixed length
      • That is, 20 results on each page, and I cannot choose the number on the UI
      • The response will have the offset or a value that tells the number of records displayed or/and returned in the response
  • Is it a GET or POST request?
    • This is not said in the problem description
    • But from the way the problem is described, it looks like a GET request
    • But should I assume that it is an HTTP request?
      • I assume it for now!
  • I assume the data is received in JSON format by the consumer
  • I assume the data returned by the endpoint or the service is sorted
    • The consumer need not process the response, sort, filter, and display
    • If the consumer has to process the response, then filter, sort, and display,
      • it would be a heavy operation on the client and the client-side automation for this use case

If it is something other than an HTTP request, it should not matter much.  The underlying working and representation may remain similar to HTTP requests and responses, unless the data is transferred in binary format.



Automation Strategy for the Use Case


The key question I ask myself here is:
What is the expectation from automating this use case?

How I automate and where I automate are important, but they come later, on answering the key question. The key is in knowing:
  • What is the expectation by automating this use case? 
  • What am I going to do from the outcome of this automation? 
  • What if the outcome of the automation gives me False Positive information and feedback?

These questions help me to see:
  • How should I weigh and prioritize the automation of this use case?
  • How should I approach automating this use case to be close to precise and accurate with deterministic attributes?
  • What and whose problem am I solving from automating this use case?

It gives me a lead to the idea of different approaches for automating the same; it helps in picking the best for the context.

That said, the use case shared by Avishek Behera is not a problem or a challenge with the Selenium library or any other similar libraries.  Also, it is not a problem or a challenge with libraries used in the automation of web requests and responses.



Challenges in the Problem Statement


I do not see any problem in automating the use case.  But there are challenges in approaching the automation of this use case.

On the web UI, if I automate on data returned, filtered, sorted, and displayed, it is a heavy task for automation.  Eventually, this is a very good candidate to soon become a fragile test.

Are the below the expectations from automating this use case?
  • To have a fragile test
  • To have high code maintenance for this use case
  • To do high rework in the automation when the web UI changes
  • To complicate the deterministic attribute of this use case's automation
If these are not the expectations, then picking an approach that has lower cost and maintenance is a need.

The challenges here are:
  • It is an automation strategy and approach challenge
  • It is a sampling challenge
    • Yes, automation at its best is also sampling, not just testing
  • It is about having better data, state, and response, which helps to have accuracy in the deterministic attributes of automation
    • To know if it is a:
      • true positive
      • false positive
      • true negative
      • false negative
      • an error
      • not processable
  • The layer where we want to automate
    • The layers which we want to use together in automation, and how much
  • To what extent to automate, to have the information and the confidence that if this sampling works, then most data should work in this context of the system
  • The availability of test data that helps me to evaluate faster and confidently
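The sampling point above can be sketched simply: instead of asserting every row on every page, pick a handful of representative rows (the boundaries plus a middle value) and assert only those. The row shape here is invented for illustration.

```python
def sample_rows(rows: list) -> list:
    """Pick boundary and middle rows for assertion instead of the full set."""
    if len(rows) <= 3:
        return list(rows)
    return [rows[0], rows[len(rows) // 2], rows[-1]]

# Hypothetical table rows, as a consumer might receive them
rows = [{"id": i, "salary": 1000 + i} for i in range(1, 101)]
sampled = sample_rows(rows)
# Three rows stand in for a hundred: first, middle, last
```

The design choice is deliberate: if the first, middle, and last rows are ordered and well-formed, most data in between is very likely fine too, at a fraction of the assertion cost.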

Whatever the system has under the hood -- GraphQL, gRPC, REST API, or any other technology stack service -- one has to work on how to make a request, go through the response, and analyze it in context.  Like testing, automation also depends on context.  In fact, context drives testing and automation better when it is included.



My Approach to Automate this Use Case


I will not automate the functional flow of this use case entirely on the web UI.  My thought is to have those tests which are more reliable, and whose results influence and drive the decision.

This thought has nothing to do with the Test Automation Pyramid and its advocacy that is to have the minimal number of UI tests at the UI layer and much more at the integration (or service) layer.  I'm looking for what works best in the context and where to have the tests that give me information and feedback so I have the confidence to decide and act.

To start, I identify the below functional tests for the said use case:
  1. Does the endpoint exist and serve?
  2. Assuming it is HTTP, I see what HTTP methods this endpoint serves
  3. What does the endpoint serve when it has no data to return?
    • The different HTTP status codes this endpoint is programmed to return, and those it is not programmed for but still returns
  4. What inputs (data, state, and event) does this endpoint need to return the data?
  5. In what format and how is the input sent in the request?
  6. In what format will the response be returned from the endpoint?
  7. Is the response sorted and filtered by the endpoint?
  8. How does the response look when there is no data available for any key?
  9. What if certain keys and their values are not available in the response?  How does it impact the client when displaying the data in a table?
    • For example,
      • No filter data is returned, or it is invalid for a consumer to process
      • No sorted data is returned, or it is invalid for a consumer to process
      • No pagination data is returned, or it is invalid for a consumer to process
      • A contract mismatch between provider and consumer for the data returned
        • What the web UI shows in the table data
      • Any locale- or environment-specific data format and its conversion when the client consumes the data that is returned by the endpoint
      • The data differs when sorted by the consumer and by the provider
      • The data is sorted on a state by the endpoint, and that might change at any time while being consumed by the consumer
      • Is it a one-time response or lazy loading?
        • If it is a lazy response, does the response have the key which tells the number of pages?
      • and more cases as we explore ...
  10. and more tests as we explore ...

Should we automate all of these tests?  Maybe not, per the business needs.  Imagine the complexity it carries when automating all these tests at the UI level.  But there are a few cases that need to be automated at the UI level.

Then, should we look at all the table rows on different pages to test in this automation?  No!  But we can sample, and thereby try to evaluate with minimal data.  This highlights the importance and usefulness of test data preparation and availability.  While preparing the test data is a skill, using minimal test data to sample is also a skill.


API Layer Test


I have a straightforward case here first.  That is, to evaluate:
  1. The key (table header) and its value are returned as expected
    • Is it filtered?
    • If yes, is it filtered on the key I want?
    • Is it sorted upon filtering?
    • There is no null or missing value for a key that must have a value in any case
    • The data count (usually the JSON array object), that is, the number of rows
    • The page index and current offset value
    • The number of result pages returned by the endpoint
  2. Can I accomplish this with an API test?
    • Yes, I can, and it will be efficient for the given context
  3. I will have five to ten test data records which will help me know if the data is sorted and filtered
  4. Another test will be to receive more than 10 rows and see how these data look when filtered and sorted
    • Especially in the case of lazy loading
    • I will try to evaluate the filtering and sorting with minimal data
    • I will have my test data available for the same in the system
UPDATE: I missed this point, so adding it as an update.  I'm exploring the Selenium 4 feature where I can use the dev toolbar and monitor the network.  If I can accomplish what I need and it is simple in the context, this will help.
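A minimal sketch of the first evaluation above, as plain Python over the decoded JSON. The row fields (`name`, `salary`) and the payload shape are assumptions for illustration; the checks themselves are what the API-layer test asserts.

```python
def is_sorted_by(rows: list, key: str, descending: bool = False) -> bool:
    """True if the JSON array is sorted by the given key."""
    values = [row[key] for row in rows]
    return values == sorted(values, reverse=descending)

def no_missing_values(rows: list, required: tuple) -> bool:
    """True if every required key has a non-null value in every row."""
    return all(row.get(k) is not None for row in rows for k in required)

# A few test data records, as suggested above -- enough to see order and gaps
payload = {
    "total_pages": 1,
    "rows": [
        {"id": 3, "name": "Asha",  "salary": 52000},
        {"id": 1, "name": "Kiran", "salary": 61000},
        {"id": 7, "name": "Mani",  "salary": 74000},
    ],
}
assert is_sorted_by(payload["rows"], "salary")
assert no_missing_values(payload["rows"], ("id", "name", "salary"))
```

With the ordering and completeness verified at the API layer, the UI test no longer has to re-prove them row by row.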


UI Layer Test


I have a straightforward case here as well to evaluate in the given context:
  1. I assume the provider and consumer abide by the contract
    • If not, then this is not an automation problem
    • It is a culture and practice problem to address and fix
  2. I assume the data returned is sorted on the filter; the web UI just consumes it to display
    • If not, I will understand why the client is doing the heavy work to filter and sort
      • What makes it be this way?
    • You see, this is not an automation problem; it is a design challenge that can become a problem for the product, not just for automation
  3. Asserting the data in the web UI table:
    • I will keep minimal data on the UI to assert, that is, not more than 4 or 5 rows
    • These rows should have data that tells me the displayed order is sorted and filtered
      • Let's call the above 1 and 2 one test
    • To evaluate pagination, that is, the number of result pages, I will use the response of the API and use the same on the web UI to assert
      • Let's call the above another test, that is, the second test
      • Again, the test data will be the key here
    • To see if the pagination is interactive and navigable on the UI, I make an action to navigate to page number 'n'
      • If it is lazy loading, I will have to think about how to test the table refresh
        • Mostly I will not assert for data
          • In testing the endpoint,
            • I would have validated the results returned and their length
        • I will assert for the number of rows in the table now
      • Let's call it a third test
  4. I will not do the data validations and heavy assertions on the web UI unless I have no other way
    • This is not a good approach to pick either
    • One test will try to evaluate just one aspect, and I do not club tests into one test
Note: The purpose of the test is not to check if the web UI is loading the same rows on all pages.  If this is the purpose, then it will be another test, and I will try to keep minimal assertions on the web UI.
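The second test above (asserting pagination on the UI from the API response) reduces to one piece of arithmetic: the expected page count from the total record count and the fixed page size. A sketch, with the 20-rows-per-page figure from the problem statement; the choice of showing one (empty) page for zero records is an assumption to adjust per product.

```python
import math

def expected_pages(total_records: int, page_size: int = 20) -> int:
    """Number of result pages the UI should show for a given record count."""
    return math.ceil(total_records / page_size) if total_records else 1

# From a hypothetical API response: 205 records, 20 per page -> 11 pages
api_total = 205
pages_from_api = expected_pages(api_total)
# The UI test then asserts the paginator shows this many pages, e.g.:
# assert int(driver.find_element(By.CSS_SELECTOR, ".last-page").text) == pages_from_api
```

The selector in the commented line is hypothetical; the point is that the expected value comes from the API response, not from a hard-coded number in the UI test.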


The Parallel Learnings


If observed, the outcome of automation and its effectiveness are not just dependent on and directly proportional to how we write automation.  It is also dependent on:
  • The design of the system (& product)
  • The environment and maintenance
  • The test data and maintenance
  • The way we sequence the tests to execute in automation
  • Where and how we automate
  • The person and team doing the automation
    • The organization's thought process and vision for testing and automation
    • The organization's expectation from testing and automation
    • How, why, and what the people, organization, and customers understand for testing and automation
  • Time and resources for testing and automation
  • The automation strategy and approach
  • More importantly, the system having and providing
    • Testability
    • Automatability
    • Observability


Note: This is not the only way to approach the automation of this use case.  I shared the one which looks better for the context.




Monday, January 3, 2022

The Automation Strategy Problem; Not an Appium Challenge

 

In The Test Tribe's forum, I read a post which described the problem as in the below paragraphs and picture.  On looking into it, I realized this could be made into a blog post that tells a strategy for automation.

Maybe 10 years back I would have asked the same question.  That's a learning curve.  Today as well, I end up thinking for a while, asking myself -- how to test it and how to automate it.

I want to share how this problem can be looked at from the perspective of testing and automation, and then how to approach automating it.

 

Folks. I've two issues on Appium automation which needs your help. 

1. I'm working on a ecommerce website where a payment method is integrated (lets take the example as PhonePe). When i try to place the order in the mobile website with payment method as PhonePe, the payment method app will be opened and I've to complete the payment using it and I'm navigated back to the browser. Issue is - How can i switch context between the mobile browser and the app? I tried using driver.startActivity() but on performing any other actions, it errors out. 

2. Since i need to use the browser to place order and the payment using the payment app, I tried to set up the driver instance with browserName and app as the desired capabilities together. But on running the test, it errors out - browserName and app can't be used together. How can i approach this problem? Anyone who has automated such flows?

Apologies, i'm pretty new to Appium and so, please excuse my ignorance.


Picture: Problem Statement - Description of Scenario & Challenge


Understanding the Scenario and Functional Flow

I observe the below in the said scenario:

  1. It is a website; it also has a mobile website
  2. It has got a payment option integrated
  3. The Appium Desired Capabilities defined have browserName and app
    • browserName -- the name of the mobile web browser used in automation; it is an empty string when automating an app
    • app -- the path of the app to be automated
  4. When using a mobile website on a mobile device -- assuming it is a mobile web app
    • On selecting a payment app -- assuming it is a native app
      • The context changes to the payment app UI
      • On completing the payment, the context changes back to the mobile website
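The browserName/app conflict from point 3 can be seen in two capability sets. These are illustrative Appium desired capabilities, not copied from the question; one session drives the mobile browser, the other a native app, and Appium expects one or the other, not both.

```python
# Session A: automate the mobile website in the device's Chrome browser
web_caps = {
    "platformName": "Android",
    "automationName": "UiAutomator2",
    "browserName": "Chrome",    # mobile web session
}

# Session B: automate a native app (the apk path is a placeholder)
app_caps = {
    "platformName": "Android",
    "automationName": "UiAutomator2",
    "app": "/path/to/app.apk",  # native app session
}

# Putting browserName and app in one capability set is what errored out
# in the question; each session declares exactly one of them.
assert "app" not in web_caps and "browserName" not in app_caps
```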



Challenges Described in the Functional Flow


I see these as challenges:
  • How to handle this said scenario in automation using Appium?
  • How to switch context between the mobile browser and the mobile app?
  • Using driver.startActivity() yields an error on performing any other actions
    • On making any action on the UI after using the above said method, the error is observed
      • Reading the description, it is said that the error is thrown when running the automation
        • And when changing the context back to the mobile website from the payment app
The driver.startActivity() takes two arguments -- the app's package name and the activity to be started.  What's passed for the package name and activity name is not clear from the problem description.

If the mobile browser is used to launch the mobile website and mimic the action, what is passed as the app's package name and activity in driver.startActivity()?  This is not mentioned and unclear to me.

Also, what is mentioned for browserName and app in the desired capabilities is not clear.



A Common Use Case


In recent years, this has become a common use case: a mobile native app having a web view, and websites that have payment transactions.  For example, in the native app, when making a payment, the web view of the payment gateway shows the list of payment choices.  On successful payment, the view switches from the web view back to the native view.



Questions on Reading the Problem Statement:


I have the below questions on reading the problem description:
  1. Why did it throw the errors on any actions after calling driver.startActivity()?
    • driver.startActivity() will start an Android activity using the package name and the activity name
  2. Was the context, picked on switching from web view to native and then back, not picked well?
    • But it is a mobile website, which means it is opened in a mobile browser, right?
      • Nowhere is it mentioned as a Hybrid app, i.e., the mobile website installed as an app
    • Does this mobile website maintain its context when switching to a native app (payment app), and then changing the context back to the (web view) mobile browser?
This takes me to seek clarity on:
  1. Is the mobile website an installed Hybrid app?  Or is it a regular website which also has a mobile version accessed in a mobile browser?
  2. Is it possible to switch the context of a web page from the mobile web browser to a native app, and vice versa?
    • I need to explore it; I'm unsure of it
    • On reading the desired capabilities, it looks like this can be done
      • That is, context switching from the mobile web browser to the native app, and back to the mobile browser from the native app, is possible
    • I need to explore the same to be very sure of it

Code Snippets for Context Switching


Refer to this page for details on using the Web view with Appium.  The below code snippets tell how to find the context of web and native views, and how to switch to them.
Snippet illustrating the change of context to Web view

Snippet illustrating the change of context to Native view
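In case the picture snippets do not render, here is the equivalent sketch with the Appium Python client. The context names are what a typical session reports; only the webview name varies per app. The selection logic is plain, so it is written as a helper:

```python
def pick_webview(contexts: list) -> str:
    """Return the first WEBVIEW_* context name from driver.contexts."""
    for name in contexts:
        if name.startswith("WEBVIEW"):
            return name
    raise LookupError(f"no webview context in {contexts}")

# With a live Appium session (sketch; driver setup omitted):
# driver.switch_to.context(pick_webview(driver.contexts))  # into the web view
# driver.switch_to.context("NATIVE_APP")                   # back to native

# What driver.contexts typically looks like (the package name is illustrative):
sample_contexts = ["NATIVE_APP", "WEBVIEW_com.example.payments"]
```

`driver.contexts`, `driver.switch_to.context(...)`, and the reserved `NATIVE_APP` name come from the Appium client; the webview context only appears while a webview is actually alive, which is why the helper raises rather than returning a default.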



But, What's Actually the Problem?


If automated as described in the problem statement, do we end up with a problem?  I see that yes, we will:
  1. We need to maintain our automation to make sure it executes the payment app UI anytime
    • If the UI of the payment app changes, we need to maintain the code
  2. Do we have a stage-environment payment app in this case?
    • If we test the mobile website in stage and make the transaction in the production payment app,
      • Can we continue like this in each test iteration?
      • If yes, how long can we continue to use the production payment app and pay?
      • Will there be any transaction fee charged each time by the payment app?
        • Can this become a financial cost to the business and client or to stakeholders?
        • What other cost should I bear for using this approach?

I need to know:

  1. What is that I want to learn from the use case or scenario on automating it?
  2. What would be the impact if the test did not help me to learn what I want to learn from automating this use case and scenario?
  3. Should I be testing the payment app along with my app? 
    • As I write UI automation to handle the web view of payment gateway and then the native payment app, it becomes part of this test.  Should I do that?
  4. What information, risk, and problem discovery do I miss if I do not automate the payment app flows?
    • Is it okay for the business and product if I miss any information here, or if I do not test the flow in the payment app?
    • How to arrive at this decision?
The decision here needs to be rational.  But being rational alone may not always help.  Can I be reasonable here when I'm deciding, or when influencing stakeholders as they decide?



This is an Automation Strategy Problem!


If seen, first of all this is not an Appium problem.  It is a problem of -- what to automate, how to automate, when to automate, how much to automate, and why automate.  That is, it is a problem with the automation strategy, on how to approach and execute it.

To me, it is a problem to be solved by approaching and executing the automation of the payment transaction, and not an automation library usage and implementation problem.



How can I Approach the Automation Here?


I will first learn: should this payment scenario be automated on the UI layer at all?  If yes, why?  And then I will have the below questions:
  1. Can I use the developer APIs of payment service to test and complete the transaction?
    • If yes, then
      • Can I use the stage APIs of the payment service to simulate the transaction flow and its completion?
        • If I just use APIs, I will not know the functional experience of transactions in the native payment app.  Is this okay?
  2. I and the product I test do not have control over the payment system and its apps
    • When I have no control over it at any point in time, should I test it as part of my system?  If I did so, should my product also include the probabilities and complexities of the payment system?
      • Having this information is good!
      • But what can I do with that information?
        • Do I have the authority to change or fix the payment system with that information?
        • If yes, good; if no, then is the time and resource spent on this a value return to my stakeholders and their business?
    • It is wise to mention that I'm not including and testing the payment system and its transactions as a part of my system
      • Because my system does not have control over the payment system by any means
  3. If the API that is used for initiating the transaction is functional and usable, then I do not have to worry technically about the functional perspective of transactions
    • We will have to work on -- whether the payment-initiating web view is functional in my native app and in my website or mobile website
      • From here, the control of payment and any transaction problem that arises is in the realm of the payment system
  4. In the test report
    • I will include the stage payment API request and its response with data
      • By talking to the payment app organization, we may get developer API access to test our system against their stage
      • Talk to the payment app organization!
      • Also, we can mock the payment API to an extent and state in the test report that this is a mocked result
        • If we rely on mocks, we can miss changes in the payment system
        • I will keep mocking as the last approach, just to complete a business flow; it will not be my pick unless someone wants to see a business flow completed in a test
    • Have a test that tells about the functional and usable aspects of the payment page in a mobile website and of the payment web view in a native/hybrid app


Benefits of this Approach

  1. I and my tests will have clarity on what is in my control and what is not
  2. When I have control, the tests and automation can be well maintained
  3. The flaky areas can be identified; I can come to a decision on whether to eliminate them from the automation or not
  4. It helps to identify which problems are mine and which I do not own in terms of authority
  5. With this approach, the tests and automation provide clarity when we uncover a risk or problem
While I know the benefits, I must also know the cost of having this approach.


Saturday, May 15, 2021

HTTP2 Migration Tests to consider on AWS's ALB

 

Is your team planning to implement HTTP/2 for services hosted on AWS behind an ALB?  Then you might have to consider the below tests on priority, for a start.

By default, ALB sends requests to targets using HTTP/1.1.  Refer to the Request Version and Protocol Version at https://amzn.to/3uPaxl1 for sending requests to targets using HTTP/2 or gRPC.

Here are a few tests to consider on priority:

  1. Have all my client interfaces migrated to HTTP/2?
    • If not, having the ALB on HTTP/2 while a few clients are on HTTP/1.1 will make the ALB use HTTP/1.1 for those clients
  2. How long does it take to respond when I'm on HTTP/2?
  3. What is the traffic pattern, i.e., how are the frames (frequency & size) delivered to the endpoint on HTTP/2?
  4. What happens to the client if there is a delay in response?
    • If serving users who have low bandwidth or intermittent networks, what should be the product experience to them?
    • What if you are queuing the requests and it is by the design in the system?
  5. What's happening at the endpoint?
    • What happens when there is a POST or PUT from the client as multipart and as non-multipart?
    • When a bunch of requests is fired at once, what is the size of each batch and of the frames in it?
    • Does the client still adhere to the HTTP/1.1 style of dealing with requests and responses, or has it upgraded to deal with them the HTTP/2 way?
    • How is a timeout handled on the server, and how is it communicated to the clients it serves?
  6. Do the libraries I use at the client and server ends support HTTP/2?
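A quick way to cover the first test is to check which protocol a client and the load balancer actually agree on. This is a minimal sketch using Python's stdlib ALPN support; the host you point it at is up to you, and it needs a live network connection:

```python
import socket
import ssl

def negotiated_protocol(host, port=443):
    """Report which application protocol ALPN negotiates with a host.
    Returns 'h2' when HTTP/2 is agreed; an ALB listener (or a client)
    still on HTTP/1.1 shows up here as 'http/1.1'."""
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(["h2", "http/1.1"])  # what this client offers
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.selected_alpn_protocol()

# e.g. negotiated_protocol("my-alb.example.com") on a live network
```

Running this with a client that offers only `http/1.1` in `set_alpn_protocols` also demonstrates the fallback behaviour called out in test 1.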


Monday, September 28, 2020

Not a Sensible Question: Is API Testing More Important than UI Testing?


It is usual for a tester/SDET to come across questions and discussions around API Testing versus UI Testing, asking which is better or more important. More questions one might come across on the same topic are listed below:


  • Which testing helps the tester to find more bugs and faster -- API Testing or UI Testing? 
  • Which testing is quicker and effective -- API Testing or UI Testing?
  • Which testing is less flaky -- API Testing or UI Testing?
  • Which testing helps to find risks and problems early in the development cycle -- API Testing or UI Testing?
  • Which testing has less dependency and maintenance comparatively -- API Testing or UI Testing?
  • Which testing provides quicker feedback on the health of product -- API Testing or UI Testing?
  • Which testing needs less effort and investment to build, execute and maintain -- API Testing or UI Testing?
  • A tester can start testing with API before the UI is available, so which testing is effective -- API Testing or UI Testing?
  • If API Testing is covered, is there no need for UI Testing?
  • Which testing can begin while programming is in progress or yet to start -- API Testing or UI Testing?
  • The APIs can be used to test web application service logic, so which testing is effective -- API Testing or UI Testing?
  • UI affects the UX, so UI Testing is required; doesn't the API affect the UX? If both affect the UX, which testing is of priority and more influential?
  • The testing of a system for data using API is possible before the UI is available, so which testing is effective -- API Testing or UI Testing?
  • One can say it is functional just by using API with no UI, so which testing is effective -- API Testing or UI Testing?
  • API Testing can reveal more details of services and web application server, which UI testing cannot with or without load balancers, so which testing is effective -- API Testing or UI Testing?


I hope that was not too big a list of questions, and this list keeps growing if I share what I have come across as part of the reasoning. You might as well have witnessed such questions when discussing API Testing versus UI Testing.



The word "Testing" is vague!


I learn that the word "Testing" is vague when used with no precise context and expectation. For example, "API Testing" does tell that the subject of discussion and work is around APIs in one's testing. But it does not communicate precisely what in the testing of APIs. The same goes for the testing of UI.


In most of our discussions, we use such words loosely in conversation. As a result, they may lead to ineffective thoughts. For example, "Is API Testing more important than UI Testing?" To me, this question does not tell what to provide as a response so one can think about it and decide. On what basis is the comparison of effectiveness made, and in which context?


We can test API and UI for quality criteria as -- Functional, Performance, UX, Security, Accessibility, Simplicity, Usability, and more. Yes, we can test the APIs for UX.


How can one reduce the vagueness when the words become too casual?  

When the word "testing" is used with the context of expectation, it can reduce the vagueness in communication. For example, "Is testing for the API's functionality more important than testing for the UI's functionality?" Now you see the difference and clarity of this question in comparison to -- "Is API Testing more important than UI Testing?"



API and UI are individual operating entities in a System


To benefit the reader here, let me take the illustration of Mercedes-Benz's car assembly line. What relates to APIs and UI here? Can the exterior of the car be seen as the UI, and the internals as multiple APIs which take input and respond with the movement of the car?


So the question here is:

  • Is testing the functionality of just the internals of a car enough?
  • By doing so, is there no need to test the exterior (UI) of the car?
  • If yes, why would skilled people work on designing and testing the UI, i.e., the exterior of a car, constructing various scenarios?
  • Is testing the security of a car's internals sufficient? Should the UI of a car also be subjected to security tests? Ask the same for performance, functionality, and UX.


I have heard there are OTA updates for the software installed in a car. What I see is that this software can control both the internals and the exterior of a car.


I learn that the API and UI are two individual entities of a system. Testing one entity will not overrule the need for testing the other entities. Instead, it can complement and help the testing be better. If one thinks testing the API is enough and there is no need to test the UI, then it is logical to ask which tests look repetitive.


Why would people invest so much money, time, and resources in building interactive UIs? Why does the world face the problem of fragmentation in each segment? Is having the engineering focus on the API segment alone enough?


Why do we usually have two teams working on APIs and the front-end?  Why is each of these teams specialized and skilled in working in its own area and not in both?  If testing the API is sufficient, why is there a dedicated team for programming the front-end and for its testing?  Answering these questions can bring one clarity.



Should the client be seen as UI?


What is UI in this context? The client end of the system will have code for:

  • UI
  • Rendering of UI based on context
  • Processing the input
  • Processing the output
  • Processing and interacting with peripherals and their events
  • And much more


To imagine the client's complexity, look at the exterior of your mobile phone and how it interacts with the internals for the input you provide. Think about the push notification you receive via an API and how it appears as UI in the notification tray. How will acting upon the push notification in the UI respond? It looks like a simple case, but it is not so when it comes to the handling of UI.


Seeing the client as the UI and as an endpoint to input and display data is reasonable. But confining it only to that will blind one. What was the need for the Developer Tools in a browser if API testing was sufficient? Every piece, i.e., each entity in the software system, is complex. Testing one will not override the need for the other entity.



UI fails as hard as the API


I have been working on both the server end and the client end. I have experienced the API functioning well while the client fails, though it abides by the contract. Should the client be seen as just UI?


The attributes of the API connect and communicate with the attributes of the UI. That the API is functional and does what is expected does not mean the client will be functional. That the API is secure does not mean the client is secure. That the API (service) does better in performance does not mean the client also performs well. The UX of the API can be annoying while the UX of the client is well received. The context in which the API is tested in isolation is different from that of testing a client.


To share one of my experiences: the user entered the insurance details exactly as printed on the insurance card. The API internally handled the details in a different presentation. As a result, the next time the API responded with the insurance detail, the client crashed. It was the same data, but presented in a different way which the client did not accept or understand. When testing the API in isolation, the API accepted the data in both formats. The business then wanted the detail as expected by the client, not as the API presented it. Could testing the API alone have helped me learn of the problem at the client end here? In another case, the API was vulnerable but not the client. I used the API as an entry point for the attack, and eventually the impact was reflected in the UI via the response.
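That class of bug can be caught by checking the API's response against what the client actually parses, not only against what the API accepts. The field name and format below are hypothetical, purely for illustration:

```python
import re

# Hypothetical client expectation: the member id exactly as printed on
# the insurance card, e.g. "ABC-123456". The API may also accept and
# echo a normalized form like "abc123456", which this client rejects.
CLIENT_MEMBER_ID = re.compile(r"^[A-Z]{3}-\d{6}$")

def client_can_render(api_response: dict) -> bool:
    """True only when the response carries the member id in the one
    format the client understands; an API-only test misses this gap."""
    member_id = api_response.get("member_id", "")
    return bool(CLIENT_MEMBER_ID.match(member_id))
```

A contract check like this, run against real API responses, surfaces the mismatch before the client crashes in front of a user.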


I have more such examples from my experience to say that API and UI are two different entities. Testing one against any quality criterion will not override the need for testing the other. But knowing them individually, and how both interact, helps me plan and design my tests well.


How to respond to -- API Testing or UI Testing?


  • Both are distinct, important, and needed
  • Testing each provides unique information and value
  • Ask for details on what to test and what is expected out of the testing
  • Rephrase the question asked into a better one which is explicit and clear
  • Myths:
    • Testing for the API is more important than testing for the client
    • Testing for the API is sufficient and there is no need to test the client
    • Testing for the API will cover data which the client does not cover
  • As the automation pyramid talks of Unit, Integration (API), and UI layers, the client is often confused with the UI layer.  UI is one part of the client, not the whole.
  • Testing for API finds unique information which may or may not be found by testing client
  • Identify what to test in these two distinct layers
  • Both of these layers have their own state and behavioral differences
    • So mocking with stubs on the other layer, be it from the API or from the client, may always leave something critical unlearned
    • When testing in this way, usually the focus is one element of the software, and so it is called Unit Testing
  • Testing for the APIs and testing for the client have distinct weight and importance