
Thursday, November 7, 2024

Functional Testing Is a Must in Performance and Security Testing

 

I'm sharing how I missed testing for functionality while I was immersed in a focused way on testing the performance of a Stored Procedure.  I was unhappy for a couple of days, as I had missed something that I have practiced for years.  

I'm glad to have reinforced this learning, with much more awareness now going into my testing's MVT and MVQT.


Context of Testing


A Stored Procedure was optimized for a better execution time, with no change in functionality.  This part of the system had not been touched for a long time (years?).  The time taken by the SP was the concern, and I was asked to test the optimization.

The complicated area here was the test data to use.  It took me days to identify and build the test data to test this optimization, by mimicking the production incidents, use cases, and data.

By the time I got the test data ready, it was the fourth day of my testing this change.



Where Did I Go Blind By Being Focused?


The test data that I prepared was solely for the evaluation of the execution time.  This test data could have helped to test functionality as well.  But my focus was on evaluating performance, not functionality, from this test data.

The change in the SP did impact the functionality.  I was supposed to use a large data range to test the functionality of this feature, which includes two SPs.  But the task assigned was to test just the one SP which was optimized.  I went blind here!  

Are you asking what the impact of this functional problem is?
  • In one complete business workflow, this functional problem added the same data into different sets in the subsequent iterations.  Redundant Data -- this is not an expected behavior.

I spoke only of performance, traces, data I/O, and execution time, because that was the pressing problem.  Why?  That was the objective given to me.  

My testing mission fell short in redefining this objective.  If I had redefined it, I would have added functionality testing at a better scale.

If I had redefined it, I would have pulled the other SP, which is also part of this feature's workflow, into functional testing.  These two SPs are expected to handle the data by eliminating the redundancy.

It was a simple test, but I did not include it in my testing mission that day.



Why Did I Go Blind?


The performance test blinded me to functionality, as the basic functional flow looked to be functioning.  But the data count was going wrong when a bigger data range was used in the context.  

See here how stupid I was in my testing!  I was testing an SP that had a change as part of its optimization for execution time.  I never brought functional testing in.  Why?  I focused on the testing objective.

I looked only into the one SP that was optimized.  I did not look at the other SP, which has to work along with this SP later to complete the functional flow of the feature.  Why?  How is that even possible?  I was asking myself this.  I see this is okay from the perspective of the testing objective I had.  But it is not okay from the perspective of a test engineer who is supposed to think of the impact and prevent the problems.

My immersed, concentrated focus on performance and its related activities on one SP for four long days did not let me see this.  



What Am I Saying Here?


While I have tested DBs and ETL systems for years, I did not use my learning here.  What is that learning?
When there is a change in any part of the ETL, SP, or DB of a system, testing the functionality of the business workflow is equally important.  Vary the data dimensions and evaluate the counts; a sketch of this idea follows.
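
As a hedged sketch of that learning: run the workflow across growing data ranges and check the counts for redundancy.  The connection details, stored procedure, table, and column below are hypothetical stand-ins; this illustrates the idea, not my project's code.

import java.sql.*;

public class SpCountCheck {
    public static void main(String[] args) throws SQLException {
        // Hypothetical connection and stored procedure names, for illustration only.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/testdb", "user", "password")) {
            int[] dataRanges = {100, 10_000, 1_000_000}; // vary the data dimension
            for (int range : dataRanges) {
                // Run the optimized SP for this data range.
                try (CallableStatement cs = conn.prepareCall("{call process_feature_data(?)}")) {
                    cs.setInt(1, range);
                    cs.execute();
                }
                // Functional check: the workflow must not add the same data into different sets.
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery(
                             "SELECT COUNT(*) - COUNT(DISTINCT business_key) FROM result_sets")) {
                    rs.next();
                    int duplicates = rs.getInt(1);
                    if (duplicates != 0) {
                        throw new AssertionError(
                                "Redundant data for range " + range + ": " + duplicates + " duplicates");
                    }
                }
            }
        }
    }
}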

I was completely hooked on the execution time and the test data while switching between environments for four days.  The chaos in data between environments is something that misleads easily.  I fell for it this time.

I say to myself: whether it is a fix for performance optimization or for security [or any quality criterion], testing for functionality is equally important, and of equal priority, as running the tests for performance or security.

When a DB layer is picked for fixing and optimization, testing for functionality at an equal scale is a must.  There is a change in the code and/or the infrastructure, and it has to be noted with additional attention.

To add to this, this time I did not go through and analyze the SP.  I took this call as the test team.  This call cost me, and it had a major part in keeping me from thinking of functionality.

A fellow colleague ran a test with varying data sizes by completing the business workflow, observed the problem, and informed me.  I give the credit to Sandeep.

If I had brought this performance test under automation, I would not have missed this.  Why?  I would evaluate and assert on the data returned for different sizes.  I did not automate here, and there was no need for it in this context.

Redefine the testing objective that you have been given; it helps when you see the model of the system and test.


Respect every fix and suspect every fix.  This helps in the longer run!  



Saturday, February 3, 2024

Performance Testing - The Unusual Ignorance in Practice & Culture

 

I'm continuing to share my experiences and learning for the 100 Days of Skilled Testing series.  I want to keep these short, as mini blog posts.  If you see that detailed insights and conversations are needed, let us get in touch.


The ninth question from season two of 100 Days of Skilled Testing is:

What are some common mistakes you see people making while doing performance testing?  How do they avoid it?


Mistakes or Ignorance?

It is a mistake when I do an action even though I'm aware that it is not right in the context.

I do not want to label what I share in this blog post as mistakes.  Rather, I call it ignorance, with or without the awareness and the experience.

The ignorance described here is not just tied to the SDLC.  It is also tied to the organization's practice and culture, which can create problems.

For this blog post's context, I group the ignorance into two categories -- Practitioner and Organization.

  1. Practitioner's ignorance
    • Not understanding the performance, performance engineering, and performance testing
      • When said performance testing, taking it as - "It is load testing"
      • No awareness of what performance and performance engineering are
        • Going to the tools immediately to solve the problem while not knowing what the performance problem statement is
      • Be it web, API, mobile or anything,
        • Going to one tool or tools and running tests
      • Not much thinking on how to design the tests in the performance testing being done
      • Ignoring Math and Statistics and their importance in performance analysis (see the percentile sketch after this list)
      • No idea of the system's architecture and how it works
        • Why is it the way it is?
      • The idea of end-to-end is extended into testing for performance, leading to a hard time understanding and interpreting the performance data
        • How many end-to-end your tests have identified?
        • Can we test for performance to all these identified and unidentified end-to-end?
      • Relying on resources/content on the internet and applying or using them in one's context without understanding them
      • No idea of the tech stack and how to utilize the testability it offers in evaluating the performance
      • Not using or asking for testability
      • Getting hung up on the 2 or 3 most spoken-about and discussed tools on the internet
      • Applying tools and calling it performance testing
      • Not attempting to understand the infrastructure and resources
        • How it impacts and influences the performance evaluation and its data
      • A shaky idea of resource saturation
        • Thinking of it as a problem
        • Thinking of it as not a problem
      • Not working to identify where will be the next bottleneck when solving a current bottleneck
      • What to measure?
      • How to measure?
      • When to measure?
      • What to look at when measuring?
      • Not understanding the OS, Hardware resources, Tech Stacks, Libraries, Frameworks, Programming Language, CPU & Cores, Network, Orchestration, and more
      • Not knowing the tool and what it offers
        • I learn the tool everyday; today, it is not the same to me compared to yesterday
          • I discover something new that I was not aware the tool offered
          • I learn new ways of using the tool with different approaches
      • No story in the report, with information/images that are self-describing to most who read it
      • And more; but the above resonates with most of us
  2. Organization's ignorance
    • At the org level, first and to start, it is ignorance of Performance Engineering
      • Ignoring the practice of performance engineering in what is built and deployed
      • Thinking and advocating that increasing the hardware resources will improve the performance
        • In fact, performance will deteriorate over a period of time no matter how much the resources are scaled up and added
      • Ignoring the performance evaluation and its presence in the CI-CD pipeline
      • Insisting the performance tests on the CI-CD pipeline should not take beyond a few minutes
        • What is that "few minutes"?
      • Not prioritizing the importance of having the requirements for Performance Engineering
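
On the Math and Statistics point in the list above: a useful first step is to summarize response times with percentiles rather than the average alone.  A minimal, self-contained sketch in Java follows; the sample numbers are made up for illustration.

import java.util.Arrays;

public class LatencyPercentiles {
    // Nearest-rank percentile: value at ceil(p/100 * n), 1-indexed.
    static double percentile(double[] sorted, double p) {
        int rank = (int) Math.ceil((p / 100.0) * sorted.length);
        return sorted[Math.max(0, rank - 1)];
    }

    public static void main(String[] args) {
        // Hypothetical response times in milliseconds, a stand-in for real measurements.
        double[] responseTimesMs = {120, 95, 310, 101, 98, 2500, 105, 99, 130, 97};
        double[] sorted = responseTimesMs.clone();
        Arrays.sort(sorted);

        double mean = Arrays.stream(sorted).average().orElse(0);
        System.out.printf("mean=%.1f ms, p50=%.1f ms, p95=%.1f ms, p99=%.1f ms%n",
                mean, percentile(sorted, 50), percentile(sorted, 95), percentile(sorted, 99));
        // The single 2500 ms outlier inflates the mean; the median (p50) tells a different story.
    }
}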

Recently, I was asked a question - How to evaluate the login performance of a mobile app using a tool "x"?

In another case, I saw a controller holding all the HTTP requests made when using a web browser, with these requests being run to learn the numbers using a tool.


I do not say this is the wrong way of doing it.  It is a start.

But we should NOT stay here, thinking this is performance engineering and that this is how to run tests for learning a performance aspect[s].


To end, performance is not just -- how [why, when, what, where] fast or slow?  If that is your definition, you are not wrong!  That is a start and good for a start; but do not stick to it alone and call it performance.  It is capability.  It is about getting what I want in the way I have been promised and expect; this is contextual, subjective, and relative.  The capability leads to an experience.  What is the experience experienced?

Sometimes, serving the requests at what you call slow is performance.  What is slow, here?

The words fast and slow are subjective, contextual, and relative.  They are one small part of performance engineering.

That said, let me know: what have you been ignoring or unaware of in the practice of Performance Engineering & Testing?


Friday, February 2, 2024

Deep Link and its Testing via Automation

 

I get these questions consistently from my fellow testers and the community.

  1. How to automate the mobile apps and web applications using Deep Links?
  2. How to automate the business flows using Deep Links?
  3. How to achieve end-to-end business flows testing on using Deep Links?
  4. How to automate scenarios in mobile apps using Deep Links?
  5. What is the best approach to automate the mobile apps using Deep Links?
  6. What is the best practice to automate using the Deep Links?
And more questions in the same pitch.


No Deep Dive into - What is Deep Link?


A hyperlink in HTML is a kind of deep link within a website or to another website.

Deep Links are known by different names on the web, in Android apps, and in iOS apps.  All these names carry the same understanding and intent at some point.

Deep Links are URIs that take me directly to a specific part (an activity or a fragment) of the app that I'm using or testing.  The Deep Link will have an intent which tells where I will be taken on using it.

When we converse on diving deep technically into the testing and automation of Deep Links, I will share more insights into their internals.



Deep Link and Challenges


This question is discussed with me often:
How to do end-to-end testing using a Deep Link?
Automating a mobile app using Deep Links poses a challenge that is not experienced in web applications.  

One such challenge is, say you have not installed the mobile app.  [This is solvable!]
  • On using a Deep Link, I should be taken to Apple Store or Play Store based on the app.
  • I have to install the app.
    • Post this, in the traditional automation, I should start traversing the business work flows via GUI.
    • Is this adding to the flakiness aspect of automation via GUI?

When we talk so much about flakiness and how to avoid it (not prevent it), should we exercise business workflows when automating using Deep Links?  What are you thinking?  Let me know!



Scoping of Automation Using Deep Link


Back to the fundamentals.
  • We have to automate, no escape from it.  Let us automate what must be automated!
  • Let us not fall into trap of "Automate everything!"
    • For today, I'm in this mindset and attitude.
  • What we automate depends on the objective or goal that we want to accomplish.
    • Each test should have a precise and deterministic goal.
      • A test via automation is not exempt from this.
      • A test defined in automation should be precise, deterministic and have a single objective - Single Responsibility Principle.

What is the objective of my testing via automation for the Deep Link?  This defines the scope and extent of my automation.  This will minimize the number of checks that I do using the Deep Link.

The purpose of a Deep Link is to take me to a specific part of the mobile app.
  • Should the end-to-end exercising of the workflow be included in the Deep Link tests?
    • If included, am I not complicating the testing via automation?



Automation using Deep Link

I ask this question to myself and to my team.
What is the goal of testing via automation using Deep Link?

This question helps me to pick the minimal and necessary flow actions.  It has led me, and continues to lead me, to define minimal tests for Deep Links based on what we want to learn from automating them.

To me, the purpose of a Deep Link is not end-to-end testing.  Its purpose is,

Am I taken to the intended state and data when the Deep Link is used?

I have kept the test intent to this.

With this, I have come up with tests that carry the minimal must-have evaluations and assertions to learn if the app is responding to the Deep Link or not.  This is what the business wants when the Deep Links are created.

The app usage and the functioning of the workflow are not the problem statement of a Deep Link in a general context.

A Deep Link is not for end-to-end.  It is to take you from one point to another point, that's it.  A minimal sketch of such a test follows.
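
As a hedged illustration of such a minimal test, here is what it could look like with the Appium Java client.  The app package, link, and expected element are hypothetical, and mobile: deepLink is an Android (UiAutomator2) extension; adjust to your driver and platform.

import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.By;
import java.util.Map;

public class DeepLinkTest {
    // Assumes 'driver' is an already configured AndroidDriver session.
    static void verifyDeepLink(AndroidDriver driver) {
        // Open the deep link directly; no GUI traversal of the workflow.
        driver.executeScript("mobile: deepLink", Map.of(
                "url", "myapp://orders/1234",          // hypothetical link
                "package", "com.example.myapp"));      // hypothetical package

        // Single objective: am I taken to the intended state and data?
        boolean onOrderScreen =
                !driver.findElements(By.id("com.example.myapp:id/order_detail")).isEmpty();
        if (!onOrderScreen) {
            throw new AssertionError("Deep Link did not land on the order detail screen");
        }
    }
}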


Are you automating using Deep Link?



Monday, January 22, 2024

RAAMA: My Test Discovery Model

 

RAAMA -- I Look at You Everyday!


I have tried to put up one of my Test Discovery models here in a conceptual way, with the name RAAMA - Refer to, Arrange, Action, Monitor, and Assert.

Maybe this model helps you and your test engineering team as it is helping me.  Use it in your context, with additions or subtractions, for what you are seeking.

I refer to this RAAMA of mine every day and when I'm testing.  I'm finding new learning and realizations every day that I was unaware of earlier.  My understanding of RAAMA is not the same as what I had on the previous day.

My understanding of this RAAMA is incomplete, and I have made PeACE with it by accepting that.  My understanding is growing and getting better every day.  I will share a better version of it as I experience it.

Each time I look up to RAAMA and refer to it, I see a new dimension to it.  The awareness, exposure, and questions are getting better, giving a better realization of what I was ignorant of and unaware of.  RAAMA is exposing me to be a better test engineer today than what I was earlier.



Picture: RAAMA - One of my evolving models for Test Discovery


Note: I have not explained in detail what I mean by each node and its sub-nodes.  I can talk it over and discuss it with you if you are looking for that; I'm just one email away to get started.



Sunday, November 19, 2023

Waterfall or Agile: Testing for Performance - Where to Start?

 

Do you understand Agile?  I have shared my understanding here; give it a read.

The eighth question from season two of 100 Days of Skilled Testing is:

Can you share some best practices for conducting performance tests within an Agile development environment?


Best Practices and the Agile


The irony is, Agile says there is no best practice.  It asks us to tailor and fit the practice to the context, so that continuous delivery and value are delivered consistently, upholding Agile's principles.  

Yet we talk about best practices in Agile's context, like the eighth question asked here.

What is the effective way to test in the continuous delivery?

As a test engineer, how can I start thinking about and testing for performance from the inception of a feature's thought?  I see it is not hard to do so.  As you read further in this post, you will gain a perspective and awareness to do it.


Performance in Waterfall and Agile

I have learned that performance is an experience.  It does not differ because of Waterfall or Agile.  If the performance is not a pleasing experience, it will impact stakeholders no matter whether it is Waterfall or Agile.

But, the question when evaluating for the performance is -- where to start, when to start, how to start, and with what to start?

As of today, I do not see differences in the mindset and skills one has to have for testing performance in Waterfall and Agile.  The approach could differ in certain phases; otherwise, I see the same in both practices.

I will rephrase the eighth question to this:
What is your practice for evaluating performance right from the start of product development in your project?
I do not want to wait until I hear -- the development is completed and deployed; now we can start running the performance tests.

What can I do as part of performance tests from the first day of development and the first commit?  This is my intent and the area I look at in strategizing the testing and the tests.



The Culture of Engineering

At the start and the end of the day, when we developers start and finish the work,

  • How the work is done, and why, is defined by the engineering culture practiced by that organization.
    • The Performance Engineering of the software products and solutions being built will be driven by the culture practiced.

The Test Engineering and how we test and automate will be driven by the culture of engineering practiced in the organization.

Writing code not just for building the functionality, but also for performance, is a culture-driven factor.  The organization's engineering practice culture drives it!



Testing for Performance - Where to Start?


I'm sharing the research work that I'm doing and experimenting with on performance engineering and performance tests.  I'm seeing the results and value out of it, and so are the stakeholders.

Today, we are getting skilled in exploring and testing without a requirement document and SLAs in hand.  Aren't we?  Haven't you?

I use my MVPT to figure out the minimum performance tests for the feature.  As part of this, I will explore, with the help of available aids, to evaluate the performance.

To start, I will use these questions to figure out the performance tests:
  • What are the minimum viable questioning performance tests that you have got to test this feature?
  • What are the minimum viable questioning performance tests that you have got to test this workflow?


Unit Tests for Time and Space Complexity


I will work closely with programmers to gather information on the below when the code for the feature is committed, as part of the Unit Tests (a sketch of such a test follows this list).
  • The execution time taken by the code of that feature - the Big O Notations for space and time complexity
    • Usually the Unit Tests focus on functional tests and clean code practice
    • But, when we, the test team, ask and push for performance data, this can come as part of the Unit Tests
      • An architect or a principal engineer can set an expectation on
        • What should be the time and space complexity of a code for a feature?
          • Each function and block needs to be evaluated on this
          • As said earlier, this depends on the engineering practice culture of an organization
            • If the culture wants it, it will be there; else, just the functional code will be delivered and not the performant code
      • If the time and space complexity analysis outcome is not as expected, the code written has to be rethought and refactored
        • The review process needs to push it back
        • The comment with data has to be published
          • This will be useful to model the performance tests by test engineers who will be working on performance tests
      • Doesn't it look like an effective, useful practice as part of Performance Engineering right in the early stage?
        • This is very well applicable to projects running on Agile or Waterfall
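
Here is the sketch referred to above: one way to encode such an expectation in a unit test, assuming JUnit 5.  The budget number is hypothetical, and Arrays.sort stands in for the feature's code; a wall-clock budget is a practical proxy when the complexity analysis itself is not automated.

import static org.junit.jupiter.api.Assertions.assertTimeout;

import java.time.Duration;
import java.util.Arrays;
import org.junit.jupiter.api.Test;

class FeaturePerformanceTest {

    @Test
    void featureCodeStaysWithinTimeBudget() {
        // Budget set by the architect/principal engineer (hypothetical number).
        // If the implementation regresses from, say, O(n log n) to O(n^2),
        // this budget is the early signal that pushes the code back in review.
        int[] input = sampleInputOfSize(100_000);
        assertTimeout(Duration.ofMillis(200), () -> Arrays.sort(input)); // stand-in for the feature's code

        // The measured time, published as a review comment with data,
        // helps test engineers model further performance tests.
    }

    private static int[] sampleInputOfSize(int n) {
        int[] data = new int[n];
        for (int i = 0; i < n; i++) data[i] = n - i; // worst-ish case: reverse order
        return data;
    }
}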

Do you have this in your project and in the Unit Tests written?

The time and space complexity questions should not be confined just to SDET [test engineer] interviews.  A test engineer has to ask for this and apply it in her or his day-to-day work.


Profiling Tests by Test Engineers


We testers often do not get into the product's code analysis.  We have to build the skill to run profiling on the product's code and analyze the resource data.
  • Test Engineers can test the feature's code with the help of IDE's profiling (runtime analysis) and collect the performance data by identifying the performance bottlenecks
    • This runtime analysis can profile for
      • Memory snapshots
      • Thread analysis
      • Monitoring resources
      • CPU and allocation profiling
      • And, more
      • The problems and risks can be reported upon analysis
    • Compare the performance data of two different solution approaches
This information tells and indicates where the risk and the problem will be when we deploy the code.  In my opinion, this is useful information for modeling further performance tests.  It is first-hand information, which is very powerful before we start using any other performance testing strategies and tools to aid the tests.
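
Complementing the IDE's profiler, some of this first-hand data can also be collected programmatically inside a test, assuming a JVM-based product and the standard java.lang.management API.  The workload below is a made-up stand-in.

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.ThreadMXBean;

public class ResourceSnapshot {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();

        long cpuBefore = threads.getCurrentThreadCpuTime(); // nanoseconds of CPU, not wall clock
        long heapBefore = memory.getHeapMemoryUsage().getUsed();

        runFeatureWorkload(); // hypothetical: the code path being profiled

        long cpuNanos = threads.getCurrentThreadCpuTime() - cpuBefore;
        long heapDelta = memory.getHeapMemoryUsage().getUsed() - heapBefore;
        System.out.printf("CPU: %.2f ms, heap delta: %d KB, live threads: %d%n",
                cpuNanos / 1_000_000.0, heapDelta / 1024, threads.getThreadCount());
    }

    private static void runFeatureWorkload() {
        // Stand-in workload so the sketch runs on its own.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 100_000; i++) sb.append(i);
    }
}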



Get Started with Performance Engineering and Tests


These capabilities are available in the IDE.  We think of performance testing tools and ask how to test for performance.  To be precise, we test developers (test engineers) should change our minds and make this shift first.  If not, as I say, we will be the first bottleneck to ourselves.  Did you know this way of testing for performance?  Why not introduce it in your project and organization?

If seen, these test practices can be used right from the day we commit the feature's code.  This is a place to start for the performance tests.  This will be a differentiator together with MVPT, and it guides the MVPT in designing effective performance tests in the context.

I do not say these are best practices; there are no best practices.  But this is a useful practice when the organization and stakeholders ask for it.  Let your organization and stakeholders know how well you can test for performance right from the first commit of the product's code.

To stop and end here,
  1. Do not just test for functionality from day one; also test for performance from day one.
  2. Influence your organization's engineering culture and developers to write not just functional code, but also performant code




Tuesday, March 8, 2022

Solving: Question on Automating a Finance Web Application

 

Requirement for Web App Test Automation Framework


I read this question on The Test Tribe's Discord community server.




Here is the copy-paste of the question:

Hi all,

Could I have your expertise on below.

I have a requirement to build a test automation framework for a Web Application. Web application is developed using java and MySQL.

And the domain is related with finance sector. 

Most of the functions in the application is working with scheduling services which the service will triggered automatically. Mainly they wanted to verify the data which are displayed in UI with the database. Which means the data which is created with the scheduling services are added correctly. 

Some of the testing scenarios are as below:

1. For each transaction it will create a ticket and need verify the data which is on the ticket.

2. There are some configurations ( like rules) which will used to create behaviors of the transactions and have to verify the transactions will work according to the rule.

3. Whether the ticket is created for the correct user.

4. There are notification rules as SMS, Email and Letters, and needed to verify whether the correct notification rule is triggered. 

5. Verify if the ticket is automatically closed if outstanding or arrears is fully settled.

For automate this kind of scenarios what is best testing methodology ( tools, BDD vs TDD, framework type) should I use for create the framework and writing the test scripts.

Hope you got the above.

Could you please share your thoughts on this. It will be grateful. 

Thank you.



The Common Large Question


Listen to any Test Engineer who wants to start automation; more or less, this will be the voice and the words.  I was here, and at times I still land here when I have to pick the tooling part.  How I drill through this landscape and accomplish the mission will define who I am.  

This is the strategic part of testing and automation.  If it is firm enough to support the context, it can bring the value.  Else, I keep adding costs to myself, the project, and the business.  The execution of the strategy will happen with what we pick; but did it serve what we expect?

I want to demonstrate and share strategic thinking, and apply it to arrive at an approach and execution plan.  Doing this best for the context will leave me with minimal costs to handle and survive with.  At least, I believe so.  Just to make sure you are with me till here: do you hear a similar question from Test Engineers who want to automate or start an automation practice?  

To me, this is one such common large question that I come across when I listen to testers and to myself.  I call it large for this reason -- the number of times I hear this question is beyond the count I can keep in my practice.



Interpretation of the Need and Requirement


As I read the requirement written, and the follow-up discussion on the same in the TTT's Discord space, I learn:
  • The present execution state of automation
    • Looks like no automation is written for this web application
  • What's the product?
    • It is a finance application
      • The web UI is the user interface for this application
    • Is there any other user interface for this application?
      • Not sure
  • What does the product do?
    • As said in the requirement,
      • The majority of the business use cases and functional flows have to do with the scheduling service
      • This scheduling happens automatically, that is, the system takes care of it using the input from the user -- which can be seen as one transaction
      • There are rules which can be called configurations
        • These rules will define how and what the transaction must do
        • Need to validate if the transaction adheres to the set rule
        • Need to validate if the ticket is created to the right user using this rule
        • Need to validate if the notification rule executed is right
          • The different types of notifications available
            • SMS
            • Email
            • Letters
              • Postal letter?
        • Each transaction will create a new ticket
      • Validate if the ticket is closed by the system on outstanding credit or arrears is completely settled
  • Technology and Tech Stack
    • Java and MySQL is used in this web application
    • Could be, 
      • The backend is written in Java
      • The database uses MySQL
    • But a system like this, which involves scheduling and notification, will have queuing, caching, messaging, and searching operations
      • The tech stack used for this is not mentioned in the requirement
      • Imagine a system that has to initiate the postal letter notification
        • This is something which involves not just software but also, very much, the people interaction
  • Help Expected from Community
    • As written in  the question posted
      • "For automate this kind of scenarios what is best testing methodology ( tools, BDD vs TDD, framework type) should I use for create the framework and writing the test scripts."



Microservices and Where to Automate


I guess this product would have been built using Microservices.  When I say Microservices, I mean small, modular, replaceable, independently developed, and independently deployed software applications, each responsible for performing one function within a large system.

Picture: An illustrative representation of a Microservices system


When I have to pick up automation in a large system built using Microservices, I first need to know what it is that I want to accomplish from the automation.  This is one of the primary questions to be answered, irrespective of what the system is and how it is built.

Further, this leads me to know and understand:

  1. What data do I need to test and validate?
  2. Where to validate data?
  3. How to validate data?
  4. Which is faster and more efficient when validating the data?
  5. What is the expectation when validating the data?
This does not rule out or hold up any layer of the system.  In fact, it helps to know the importance, priority, and criticality of the layer (seam) where I have to start or concentrate my tests.

I have missed showing the "data" within the hosting area in the above picture.  With that, I ask where I should test and automate to look for what I want to learn.  When I have this picture of my system in mind, and how the microservices interact with the tech instances, it directs me to talk with the appropriate team members and get started.


Getting Started to Automate in this System

Reading the problem statement and what is expected, the tester nowhere mentions how to automate on the web UI, the API, or the other tech instances of the product.  But the tester has mentioned the flows and their outcomes, which should be covered in the automation.

If I had to start testing and automation, I would do the below and assess:

  • APIs:
    1. I look at how good and deterministic the APIs are in processing the requests
    2. Can I take the response of the API to infer that the system functions as expected?
    3. If yes, does the client interface just take this and display the result?
    4. Can I test the notifications at this layer with help of automation?
    5. Can I test the rules at this layer with help of automation?
    6. Can I test for the configuration here?
    7. Can I test for the ticket creation and its state updates?
  • Web UI:
    1. Is the contract between the web UI and the API honored, or does it have any problem?
    2. Is there any JS feature on web UI that takes and processes data further?
      • If yes, do I need to validate this to understand the correctness degree of the data to decide?
    3. Or, does the web UI just accept the response and display the data?
  • Databases, Queuing, Messaging, and Caching:
    1. Is there a need that I should be testing and automating at this layer to decide on the data displayed?
    2. Should I test here in association with other tech instances or in isolation to know the data?
    3. Is testing the Microservice and its endpoint sufficient?  Or, should I also include these tech instances in testing together with the Microservice?

This sets me on the path now to decide where to start automation so that it assists and speeds up my testing.  If the Web UI consumes the data from the endpoint and displays it, I will begin automation at the API layer.  This is simpler, faster, and can help me figure out the problems distinctly at the web UI and API layers.  
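
To make this concrete, below is a hedged sketch of an API-layer check for the ticket flow described in the requirement, using REST Assured in Java.  The endpoint, fields, and values are assumptions, since the actual contract is not given in the question.

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.notNullValue;

import org.junit.jupiter.api.Test;

class TicketApiTest {

    @Test
    void scheduledTransactionCreatesTicketForCorrectUser() {
        // Hypothetical host, endpoint, and payload shape, for illustration only.
        given()
            .baseUri("https://stage.example-finance.com")
            .queryParam("transactionId", "TXN-1001")
        .when()
            .get("/api/tickets")
        .then()
            .statusCode(200)
            .body("ticketId", notNullValue())
            .body("userId", equalTo("USR-42"))          // ticket created for the correct user
            .body("status", equalTo("OPEN"))            // closes automatically when settled
            .body("notificationRule", equalTo("SMS"));  // the rule configured for this transaction
    }
}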

If I started automation at the web UI layer to determine the correctness degree of the data, I would probably have to spend more time automating in this context.  If I have the freedom to go ahead, I will update the team and stakeholders on why I'm going ahead with this decision.  Or, if I have to present it to stakeholders and then start, I will present the work with data, i.e., time, what is covered and how much, along with what is not covered and why.

That said, I'm not saying I do not automate at the web UI layer.  I'm saying: where will I start, and why.  Making sure one part of the system is deterministic will help to figure out the problems in the other parts of the system.  The question from the fellow test engineer states a requirement of automation for a web application, not of automating the application only at the web layer.

The tools and methodology come with the culture and practice followed in the project.  At least the test engineer will have choices on the tools and library side, though not on the methodology.



Wednesday, January 26, 2022

Automation Strategy - How to Automate on Web UI Table for the Data Displayed in Table

 

Use Case to Automate and a Problem Statement


I read the below question in The Test Tribe's Discord community server.  As I read it, I realized we all go through this when we think of automating a use case.  The credit for this question goes to Avishek Behera.


Picture: Description of a use case to automate and a problem statement

Here is the copy-paste of the use case and problem statement posted by Avishek Behera:

Hello everyone, here is a use case I came across while having a discussion on automating it.

A webpage has a table containing different columns,let's say employees table with I'd, name, salary , date, etc

It also has pagination in UI, we can have 20 rows in one screen or navigate to next 20 records based on how many total records present,it could go about 10 + pages and so on....


Problem statement:

How to validate the data displayed in table are correctly displayed as per column header , also correct values like names, amount etc. Use case is to validate data.

The data comes from an underlying service, different endpoints basically.

Now it's not about automation but about right and faster approach to test this data.

What are different ways can we think of?

I know this is a basic scenario but since I was thinking of different possible solutions.

One way my friend suggested to use selenium, loop through tables ,get values ,assert with expected. Then it is time consuming, is it right approach just to validate data using selenium?


These are the useful attributes of this question:

  1. It had the preset of context and the context information to the reader
  2. The availability of context information gave an idea of
    • What the API would look like
    • The request and type
    • The response and details
    • The consumer of this API
  3. It helped to imagine and visualize how the data would be interpreted by consumers to render
  4. I get to see what Avishek is looking for in the said context


Interpreting the Use Case and Problem Statement


What it is?
  • Looks like the consumer is a web UI interface
    • Mention of Selenium library supports this interpretation
  • The response has a data which is displayed in the table kind of web UI
  • There can be no data to multiple rows of data displayed in the table
  • Pagination is available for this result in the UI
    • Whether pagination is available in the API request and response is not clear from the problem description
    • 20 rows are shown on one page of a table
    • The number of pages in the table can be more than one
    • The response will have a filter flag
      • I assume that data displayed in the table can be validated accordingly
    • The response will have the data on the number of result pages
      • This makes the result in a page to be of fixed length
      • That is 20 results on each page and I cannot choose the number on the UI
      • The response will have the offset or a value that tells the number of records displayed or/and returned in the response
  • Is it a GET or POST request?
    • This is not said in the problem description
    • But from the way the problem is described, it looks like a GET request
    • But should I assume that it is an HTTP request?
      • I assume it for now!
  • I assume the data is received in JSON format by the consumer
  • I assume the data returned by the endpoint or the service is sorted before it is returned
    • The consumer need not process the response, sort, filter, and display
    • If the consumer has to process the response, then filter, sort, and display,
      • it would be a heavy operation on the client and for the client-side automation of this use case

If it is something other than an HTTP request, it should not matter much.  The underlying working and representation may remain similar to HTTP requests and responses, unless the data is transferred in a binary format.



Automation Strategy for the Use Case


The key question I ask myself here is:
What is the expectation from automating this use case?

How I automate and where I automate is important, but that comes later, on answering the key question.  The key is in knowing:
  • What is the expectation by automating this use case? 
  • What am I going to do from the outcome of this automation? 
  • What if the outcome of the automation gives me False Positive information and feedback?

These questions help me to see:
  • How should I weigh and prioritize the automation of this use case?
  • How should I approach automating this use case to be close to precise and accurate with deterministic attributes?
  • What and whose problem am I solving from automating this use case?

It gives me a lead to the idea of different approaches for automating the same; it helps in picking the best for the context.

That said, the use case shared by Avishek Behera is not a problem or a challenge with the Selenium library or any other similar libraries.  Also, it is not a problem or a challenge with libraries used in the automation of web requests and responses.



Challenges in the Problem Statement


I do not see any problem in automating the use case.  But there are challenges in approaching the automation of this use case.

On the web UI, if I automate on the data returned, filtered, sorted, and displayed, it is a heavy task for automation.  Eventually, this is a very good candidate to soon become a fragile test.  

Are the below the expectations from automating this use case?
  • To have a fragile test
  • To have high code maintenance for this use case
  • To do heavy rework in the automation when the web UI changes
  • To complicate the deterministic attribute of this use case automation
If these are not the expectations, then picking an approach that has lower cost and maintenance is a need.

The challenges here are:
  • It is an Automation Strategy and Approaching challenge
  • It is a sampling challenge
    • Yes, automation at its best is sampling as well, not just testing
  • It is about having better data, state, and response which helps to have accuracy in the deterministic attributes of automation
    • To know if it is a:
      • true positive
      • false positive
      • true negative
      • false negative
      • an error
      • not processable
  • The layer where we want to automate
    • The layers which we want to use together in automation, and how much
  • To what extent do we automate to have the information and the confidence that -- if this sampling works, then most data should work in this context of the system?
  • The availability of test data that helps me to evaluate faster and confidently

Whatever the system has under the hood -- GraphQL, gRPC, REST API, or any other technology stack services -- one has to work out how to make a request, go through the response, and analyze it in context.  Like testing, automation too depends on context.  In fact, context drives testing and automation better when it is included.



My Approach to Automate this Use Case


I will not automate the functional flow of this use case entirely on the web UI.  My thought is to have those tests which are more reliable, with results that influence and drive the decision.

This thought has nothing to do with the Test Automation Pyramid and its advocacy, that is, to have a minimal number of tests at the UI layer and many more at the integration (or service) layer.  I'm looking for what works best in the context, and where to have the tests that give me the information and feedback so I have the confidence to decide and act.

To start, I identify the below functional tests for the said use case:
  1. Does the endpoint exist and serve?
  2. Assuming it is HTTP, I check which HTTP methods this endpoint serves
  3. What does the endpoint serve when it has no data to return?
    • The different HTTP status code this endpoint is programmed to return and not programmed but still returns
  4. What inputs (data, state, and event) does this endpoint need to return the data?
  5. In what format and how the input is sent in the request?
  6. In what format the response will be returned from the endpoint?
  7. Is the response sorted and filtered by the endpoint?
  8. How does the response look when there is no data available for any key?
  9. What if certain keys and their value are not available in the response?  How does it impact the client when displaying the data in a table?
    • For example,
      • No filter data is returned or it is invalid to a consumer to process
      • No sorted data is returned or it is invalid to a consumer to process
      • No pagination data is returned or it is invalid to a consumer to process
      • The contract mismatch between provider and consumer for data returned
        • What the web UI shows in the table data
      • Any locale or environment-specific data format and its conversion when the client consumes the data that is returned by the endpoint
      • The data when sorted by consumer and provider differs
      • The data is sorted on a state by the endpoint and that might change at any time when being consumed by the consumer
      • Is it a one-time response or lazy loading?
        • If it is a lazy response, does the response have the key which tells the number of pages?
      • and more cases as we explore ...
  10. and more tests as we explore ...

Should we automate all of these tests?  Maybe not, per the business needs.  Imagine the complexity it carries when automating all these tests at the UI level.  But there are a few cases that need to be automated at the UI level.  

Then, should we look at the table rows on different pages to test in this automation?  No!  But we can sample, and thereby we try to evaluate with as minimal data as possible.  This highlights the importance and usefulness of Test Data preparation and availability.  While preparing the test data is a skill, using minimal test data to sample is also a skill.


API Layer Test


I have a straight case here for first.  That is to evaluate (a sketch follows this list):
  1. The key (table header) and its value are returned as expected
    • Is it filtered? 
    • If yes, is it filtered on key what I want?
    • Is it sorted upon filtering?
    • There is no null or no value for a key that needs to have a value in any case
    • The data count (usually the JSON array object), that is the number of rows
    • The page index and current offset value
    • The number of result pages returned by the endpoint
  2. Can I accomplish this with an API test? 
    • Yes, I can and it will be efficient for the given context
  3. I will have five to ten test data records which will help me know if the data is sorted and filtered
  4. Another test will be to receive more than 10 rows and see how the data looks when filtered and sorted
    • Especially in case of lazy loading
    • I will try to evaluate the filtering and sorting with minimal data
    • I will have my test data available for the same in the system
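
Here is the sketch referred to above, using REST Assured and Hamcrest in Java.  The host, endpoint, and JSON keys (data, totalPages) are assumptions about the response shape interpreted earlier.

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.greaterThanOrEqualTo;

import io.restassured.response.Response;
import java.util.List;
import org.junit.jupiter.api.Test;

class EmployeesEndpointTest {

    @Test
    void responseIsSortedFilteredAndPaginated() {
        Response response = given()
                .baseUri("https://stage.example.com")      // hypothetical host
                .queryParam("sort", "name")
                .queryParam("pageSize", 20)
            .when()
                .get("/api/employees")                     // hypothetical endpoint
            .then()
                .statusCode(200)
                .body("totalPages", greaterThanOrEqualTo(1)) // pagination metadata is present
                .extract().response();

        // The endpoint, not the consumer, is expected to sort; verify with minimal data.
        List<String> names = response.jsonPath().getList("data.name");
        for (int i = 1; i < names.size(); i++) {
            if (names.get(i - 1).compareTo(names.get(i)) > 0) {
                throw new AssertionError("Names not sorted at index " + i);
            }
        }
    }
}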

UPDATE: I missed this point, so I am adding it as an update.  I'm exploring the Selenium 4 feature where I can use the browser's DevTools and monitor the network.  If I can accomplish what I need, and it is simple in the context, this will help.
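
For reference, a minimal sketch of that Selenium 4 idea, using the Chrome DevTools Protocol bindings.  The versioned package (v85 here) changes with the browser and Selenium release, so treat the import and the page URL as assumptions.

import java.util.Optional;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.devtools.DevTools;
import org.openqa.selenium.devtools.v85.network.Network;

public class NetworkWatch {
    public static void main(String[] args) {
        ChromeDriver driver = new ChromeDriver();
        DevTools devTools = driver.getDevTools();
        devTools.createSession();
        devTools.send(Network.enable(Optional.empty(), Optional.empty(), Optional.empty()));

        // Log every response the table page triggers; the endpoint calls show up here.
        devTools.addListener(Network.responseReceived(), event ->
                System.out.println(event.getResponse().getStatus() + " "
                        + event.getResponse().getUrl()));

        driver.get("https://stage.example.com/employees"); // hypothetical page
        driver.quit();
    }
}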


UI Layer Test


I have a straight case here as well to evaluate in the given context:
  1. I assume the provider and consumer abide by the contract
    • If not then this is not an automation problem
    • It is a culture and practice problem to address and fix
  2. I assume the data returned is sorted on the filter; the web UI just consumes it for display
    • If not, I will understand why the client is doing heavy work to filter and sort
      • What makes it to be this way?
    • You see, this is not an automation problem; it is a design challenge that can become a problem for the product, not just for automation
  3. Asserting the data in the web UI table:
    • I will keep minimal data on the UI to assert on, not more than 4 or 5 rows
    • These rows should have data that tells me the displayed order is sorted and filtered
      • Let's call the above 1 and 2 as one test
    • To evaluate pagination that is number of result pages, I will use the response of API and use the same on the web UI to assert
      • Let's call the above another test that is the second test
      • Again, the test data will be the key here
    • To see if the pagination is interactive and navigable on the UI, I make an action to navigate to page number 'n'
      • If it is lazy loading, I will have to think about how to test table refresh
        • Mostly I will not assert for data
          • In testing the endpoint, 
            • I would have validated for the results returned and its length
        • I will assert for number of rows in the table now
      • Let's call it a third test
  4. I will not do the data validations and its heavy assertions on the web UI unless I have no other way
    • This is not a good approach to pick either
    • One test will try to evaluate just one aspect and I do not club tests into one test

Note: The purpose of the test is not to check if the web UI is loading the same rows on all pages.  If that were the purpose, it would be another test, and I would still keep minimal assertions on the web UI.  A sketch of the minimal UI assertion follows.
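
A hedged sketch of those minimal UI-side assertions, with Selenium in Java; the locators and the seeded row count are assumptions about the page under test.

import java.util.List;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class EmployeesTableCheck {
    // Assumes 'driver' is already on the employees page with seeded, minimal test data.
    static void assertTableShowsSortedMinimalRows(WebDriver driver, int expectedRows) {
        List<WebElement> rows = driver.findElements(By.cssSelector("table#employees tbody tr"));
        if (rows.size() != expectedRows) {
            throw new AssertionError("Expected " + expectedRows + " rows, found " + rows.size());
        }
        // Only 4-5 seeded rows: enough to tell that the displayed order is sorted.
        String previous = "";
        for (WebElement row : rows) {
            String name = row.findElement(By.cssSelector("td.name")).getText();
            if (previous.compareTo(name) > 0) {
                throw new AssertionError("Rows not in sorted order at '" + name + "'");
            }
            previous = name;
        }
    }
}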


The Parallel Learnings


If observed, the outcome of automation and its effectiveness is not just dependent on, and directly proportional to, how we write the automation.  It is also dependent on:
  • The design of the system (& product)
  • The environment and maintenance
  • The test data and maintenance
  • The way we sequence the tests to execute in automation
  • Where and how we automate
  • The person and team doing the automation
    • The organization's thought process and vision for testing and automation
    • The organization's expectation from testing and automation
    • How, why, and what the people, organization, and customers understand for testing and automation
  • Time and resources for testing and automation
  • The automation strategy and approach
  • More importantly, the system having and providing
    • Testability
    • Automatability
    • Observability


Note: This is not the only way to approach the automation of this use case.  I shared the one which looks best for the context.




Monday, January 3, 2022

The Automation Strategy Problem; Not an Appium Challenge

 

In The Test Tribe's forum, I read a post which described the problem as in the below paragraphs and picture.  On looking into it, I realized this could be made into a blog post that tells a strategy for automation.  

Maybe 10 years back I would have asked the same question.  That's a learning curve.  Today as well, I end up thinking for a while, asking myself -- how to test it and how to automate it.  

I want to share how this problem can be looked at from the perspective of testing and automation, and then approach it to automate.

 

Folks. I've two issues on Appium automation which needs your help. 

1. I'm working on a ecommerce website where a payment method is integrated (lets take the example as PhonePe). When i try to place the order in the mobile website with payment method as PhonePe, the payment method app will be opened and I've to complete the payment using it and I'm navigated back to the browser. Issue is - How can i switch context between the mobile browser and the app? I tried using driver.startActivity() but on performing any other actions, it errors out. 

2. Since i need to use the browser to place order and the payment using the payment app, I tried to set up the driver instance with browserName and app as the desired capabilities together. But on running the test, it errors out - browserName and app can't be used together. How can i approach this problem? Anyone who has automated such flows?

Apologies, i'm pretty new to Appium and so, please excuse my ignorance.


Picture: Problem Statement - Description of Scenario & Challenge


Understanding the Scenario and Functional Flow

I observe the below in the said scenario:

  1. It is a website; it also has a mobile website
  2. It has got a payment option integrated
  3. The Appium's Desired Capabilities defined has browserName and the app
    • browserName -- the name of the mobile web browser used in automation; it is an empty string if automating an app
    • app -- the path of an app to be automated
  4. When using a mobile website on a mobile device -- assuming it is a mobile web app
    • On selecting a payment app -- assuming it is a native app
      • The context changes to payment app UI
      • On completing the payment, the context changes to the mobile website



Challenges Described in the Functional Flow


I see these as challenges:
  • How to handle this said scenario in automation using Appium?
  • How to switch context between mobile browser and the mobile app?
  • Using driver.startActivity(), it yields an error on performing any other actions
    • On making any actions on UI after using the above said method, the error is observed
      • Reading the description, it is said that the error is thrown when running the automation
        • And, when changing the context back to mobile website from payment app
The driver.startActivity() method takes two arguments -- the app's package name and the activity to be started.  What is passed for the package name and the activity name is not clear from the problem description.  

If the mobile browser is used to launch the mobile website and mimic the actions, what is passed as the app's package name and activity in driver.startActivity()?  This is not mentioned and is unclear to me.

Also, what is mentioned for the browserName and app in the desired capabilities is not clear.



A Common Use Case


In recent years, this has been a common use case in mobile native apps having a web view, and in websites that have payment transactions.  For example, in the native app, when making a payment, the web view of the payment gateway shows the list of payment choices.  On successful payment, the view switches from the web view to the native view.



Questions on Reading the Problem Statement:


I have the below questions on reading the problem description:
  1. Why did it throw the errors on any action post calling driver.startActivity()?
    • driver.startActivity() will start an Android activity using the package name and the activity name
  2. Was the context, picked on switching from web view to native and then back, not picked well?
    • But it is a mobile website, which means it is opened in a mobile browser, right?
      • Nowhere is it mentioned as a Hybrid app, i.e., the mobile website installed as an app
    • Does this mobile website maintain its context when switching to a native app (the payment app), and then changing the context back to the (web view) mobile browser?
This takes me to seek clarity on:
  1. Is the mobile website an installed Hybrid app?  Or is it a regular website, which also has a mobile version, accessed in a mobile browser?
  2. Is it possible to switch the context of a web page from the mobile web browser to a native app, and vice versa?
    • I need to explore it; I'm unsure of it
    • When I read the desired capabilities, it looks like this can be done
      • That is, context switching from the mobile web browser to the native app, and back to the mobile browser from the native app, is possible
      • I need to explore the same to be very sure of it

Code Snippets for Context Switching


Refer to this page for details on using Web views with Appium.  The below code snippets show how to find the contexts of web and native views, and how to switch to them.
Snippet illustrating the change of context to Web view

Snippet illustrating the change of context to Native view
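
Since the snippets above are pictures, here is a minimal text version of the same idea with the Appium Java client.  The WEBVIEW context name is app-specific, so treat it as an assumption.

import io.appium.java_client.android.AndroidDriver;
import java.util.Set;

public class ContextSwitching {
    static void switchContexts(AndroidDriver driver) {
        // Discover the contexts currently available, e.g. [NATIVE_APP, WEBVIEW_com.example.app]
        Set<String> contexts = driver.getContextHandles();
        contexts.forEach(System.out::println);

        // Change of context to a Web view (the name is app-specific; an assumption here).
        driver.context("WEBVIEW_com.example.app");

        // ... interact with web elements ...

        // Change of context back to the Native view.
        driver.context("NATIVE_APP");
    }
}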



But, What's Actually the Problem?


If automated as described in the problem statement, do we end up in a problem?  I see, yes, we will end up in a problem:
  1. We need to maintain our automation to make sure it can execute the payment app UI at any time
    • If the UI of the payment app changes, we need to maintain the code
  2. Do we have a stage-environment payment app in this case?
    • If we test the mobile website in stage and make a transaction in the production payment app,
      • Can we continue like this in each test iteration?
      • If yes, how long can we continue to use the production payment app and pay?
      • Will there be any transaction fee charged each time by the payment app?
        • Can this become a financial cost to the business and client or to stakeholders?
        • What other cost should I bear for using this approach?

I need to know:

  1. What is it that I want to learn from the use case or scenario on automating it?
  2. What would be the impact if the test did not help me to learn what I want to learn from automating this use case and scenario?
  3. Should I be testing the payment app along with my app? 
    • As I write UI automation to handle the web view of payment gateway and then the native payment app, it becomes part of this test.  Should I do that?
  4. What information, risk, and problem discovery do I miss if I do not automate the payment app flows?
    • Is it okay for the business and product, if I miss any information here or if I do not test the flow in payment app?
    • How to arrive at this decision?
The decision here needs to be rational.  But being rational alone may not always help.  Can I be reasonable here when I'm deciding, or when influencing stakeholders who are deciding?



This is an Automation Strategy Problem!


If seen, for first, this is not an Appium problem.  It is a problem of -- what to automate, how to automate, when to automate, how much to automate, and why automate.  That is, it is a problem with the automation strategy and how to approach and execute it.

To me, it is a problem of the approach to and execution of automation for the payment transaction, and not an automation library usage and implementation problem.  



How can I Approach the Automation Here?


I will first learn: should this payment scenario be automated at the UI layer at all?  If yes, why?  And then I will have the below questions.
  1. Can I use the developer APIs of payment service to test and complete the transaction?
    • If yes, then
      • Can I use the stage APIs of payment to simulate the transaction flow and its completion?
        • If I just use APIs, I will not know the functional experience of transactions in the native payment app.  Is this okay?
  2. I and the product I test, do not have control over the payment system and its apps
    • When I have no control over it at any point in time, should I test it as part of my system?  If I did so, should my product as well include the probabilities and complexities of payment system?
      • Having this information is good!
      • But what can I do with that information?
        • Do I have an authority to change or fix payment system with that information?
        • If yes, good; if no, then is the time and resource spent on this a value return to my stakeholders and their business?
    • It is wise to mention that I'm not including and testing the payment system and its transactions as a part of my system
      • Because my system does not have control over the payment system by any means
  3. If the API used for initiating the transaction is functional and usable, then I do not have to worry technically from the functional perspective of transactions
    • We will have to work on whether the payment-initiating web view is functional in my native app, and in my website or mobile website
      • From here the control of payment and any transaction problem that arises are in the realm of the payment system
  4. In the test report
    • I will include the stage payment API request and its response with data
      • Talking to payment app organization, we may get the developer API access on stage to test our system on their stage
      • Talk to payment app organization!
      • Also we can mock the payment API to an extent and in the test report say this is a mock result
        • If relied on mock, then we can miss the change in payment system
        • I will have the mocking as last approach just to complete a business flow and it will not be my pick unless someone wants to see a business flow completion in a test
    • Have a test that tell about functional and usable aspect of the payment page in -- a mobile website and the payment web view in native/hybrid app
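To make the API-first idea concrete, here is a minimal sketch, in Java, of what such a stage-API transaction check could look like.  The endpoint, the request fields, and the STAGE_API_KEY environment variable are hypothetical stand-ins; the payment provider's developer documentation would define the real ones.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class StagePaymentTransactionCheck {

    public static void main(String[] args) throws Exception {
        // Hypothetical stage endpoint and credentials; replace with the
        // values from the payment provider's developer documentation.
        String stageEndpoint = "https://stage.payments.example.com/v1/transactions";
        String apiKey = System.getenv("STAGE_API_KEY");

        // A minimal transaction request body; the field names are assumptions.
        String body = "{\"amount\": 499, \"currency\": \"INR\", \"orderId\": \"TEST-001\"}";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(stageEndpoint))
                .header("Authorization", "Bearer " + apiKey)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // The raw request and response go into the test report, as planned above.
        System.out.println("Status: " + response.statusCode());
        System.out.println("Response: " + response.body());
    }
}
```

This completes the transaction at the API layer; the functional and usable aspects of the payment page and web view stay with the separate UI test mentioned above.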


Benefits of this Approach

  1. My tests and I will have clarity on what is in my control and what is not
  2. Where I have control, the tests and automation can be maintained well
  3. The flaky areas can be identified; I can come to a decision on whether or not to eliminate them from the automation
  4. It helps to identify which problems are mine and which problems I don't own in terms of authority
  5. With this approach, the tests and automation provide clarity when we uncover a risk or problem
While I know the benefits, I must also know the cost of having this approach.


Sunday, December 26, 2021

Before Identifying and Listing My Tests

 

I read the below query in TTC's Telegram chat.  A discussion had started on this thread and fellow members were responding.  Then I read this line and it made me look into it -- "The question was we have to use valid username and password..and perform a negative testcase".



The Default Thinking and Applying Interface

Myself included, I see it is subconsciously common for us to visualize a problem statement in terms of the Graphical User Interface.  When I ask why that is so, maybe it is rooted in our subconscious thinking, i.e., in first-order, second-order, or any order of thinking.

I want to try approaching it by reminding myself of, and asking, the below questions:

  1. Is it a GUI specific problem?
  2. Is it a problem that is tied to the context of GUI?
  3. What does this question encapsulate within and open as an interface?
  4. What forms do these interfaces take when I step outside of a specific interface?
  5. Should I stick to one interface to learn and attempt this problem?


Identifying the Tests and Framing the Tests

We test to learn

  • Does the system do what it is supposed to do and how, why, and when?
  • When does the system not do what it is supposed to do, and how, why, and when?
Should I call these Negative Tests?  That label is not what I share in this post.

To me, these are tests that help me learn how the system responds and behaves when it does so in a way other than I expected.

I can start by identifying the straightforward use cases of inputting an error (a human-introduced error) at a given state/data/event, and then looking at the behavior of the system.  It is good when we can keep identifying and ideating such use cases.  

But we get limited as we continue to think only in use cases.  That said, we will surely identify and frame tests within the identified use cases.  But we also need tests that help us learn when the system fails to do what it is supposed to do.

To supplement this, there is another way, which I use.  I do not say it is the only way to supplement; I use multiple approaches to supplement and identify the tests.  When I do so, I ask questions of the system with the help of these tests and evaluate the system's response.


Questions to Identify the Priority Tests


I learn and understand the system each time, to identify better tests.  And each time, I learn something new about the system that I did not know.  

When I'm asked a question in an interview, I ask for details that help me test better or demonstrate my deliverable better.  I watch the questions that I ask!

If I were the candidate who got this question in an interview, I would ask the below questions.  When I learn that what I have is good enough for the initial tests, I pause the questions.  I move on to identifying and framing the tests using the responses I got to the questions I asked.  

These questions will surely help me be precise and close to the context, which better demonstrates my testing skills.  If I'm not close, then there is a problem (or a difference) between my presentation and the expectations in the interview.  I will have to address it with the help of the interviewer.

Questions:

  1. What is the interface where I'm entering the username and password?
    • Where is this authentication used?
    • Is it on a UI (if so, which UI), or a CLI, or a touch interface -- or what is its interface type?
    • At which layer of the system is this authentication used?
  2. What is the format of the username and password?
  3. What is used as the authorization identity on successful authentication?
    • What happens if my authentication is not successful in the UI you want me to test?
    • How do I understand that the UI is communicating to me that my authentication was not successful?
  4. How is this authentication processed?
  5. Where is the authentication mapped to authorization and stored for reference?
  6. What protocol is used to communicate in authentication?
    • What protocol and communication order is used to grant and revoke authorization?
  7. Who uses this authentication and authorization?
    • To know the different means of doing the same thing
  8. Is there any other form of authentication that grants me the authorization?
    • Do these different entry points of authentication update my authorization?
    • Will I have different authorization data to authenticate with?  If yes, how are the data, states, and events maintained for my authentication and account?
  9. What's the language and Unicode supported by this system?
    • Will the languages and Unicode used in the system have any impact when I try to authorize by changing the language and Unicode?  How does the system understand these differences and maintain one state of data with authorization?
  10. Are there any computing differences for authentication and authorization on big- and little-endian machines?  If yes, how, and for what context of the system's behavior, processing, and decisions?
  11. Where and how are the authentication and authorization details processed, stored, and presented back?
    • Is there any specific reason for doing it in this particular way?
    • How have you strengthened the authentication process that grants the authorization?
      • For example, 1FA, 2FA, nFA, what else?
  12. Does any other system use your authentication to authenticate and authorize?
  13. Do you use SSO for authentication and authorization?
  14. What testability layer do I have that I can make use of to support and identify the tests?
    • Does this testability layer help me to identify more tests and also classify them?

I can keep generating questions like this.  But I will have to pause and start working with what the questions offer me.  

With the help of these questions, I can learn about the system better before attempting to identify and frame the tests.  This also pulls out any risk or problem area that looks important and of priority.
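For example, if the answers tell me the authentication is an HTTP API underneath, one of my first tests with invalid credentials need not touch the GUI at all.  Below is a minimal sketch, assuming a hypothetical /login endpoint that answers failed authentication with HTTP 401; the URL, the field names, and the status code are exactly the kind of assumptions the question list above would confirm or correct.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class InvalidCredentialsApiTest {

    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint; the interview answers would give the real one.
        String loginUrl = "https://app.example.com/api/login";

        // A valid username with a deliberately wrong password.
        String body = "{\"username\": \"known.user\", \"password\": \"wrong-password\"}";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(loginUrl))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // Assuming failed authentication is signalled with HTTP 401.
        if (response.statusCode() != 401) {
            throw new AssertionError("Expected 401, got " + response.statusCode());
        }
        System.out.println("Failure is communicated as: " + response.body());
    }
}
```

The same test takes a different shape on a CLI or a touch interface; the answer to the interface question decides the shape of the test.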

I have eased my work to an extent when I know:
  • the target surface area to start my work 
  • what it takes and brings back, and how

In this context, I would have started this way!



Wednesday, August 28, 2019

App Crash! Testing around and inside the crash



Day in and day out, I come across testers, programmers, managers, and management putting effort into fixing all the crashes.  Yes, all the crashes.  The way I see it, if the app did not crash, I would not know the areas that are not being handled well enough.  My testing focus areas will also have tasks noted against such areas, to test and learn as much as possible.  I do that task provided I can make, or am given, time for it, as it is an unplanned task.



The common checks to handle a crash!?

I learned that an exception, if unhandled at runtime, leads to a crash.  There are many exceptions an app can witness which we never thought of during development.  In my initial days of testing, I was under the assumption that if we have null checks, index checks, illegal argument checks, and a state check for an activity, we have handled most of the exceptions.  I learned I was wrong!  How many checks can I write in the code?  I'm not a programmer by job.  I'm a tester. 
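For context, here is a minimal Java sketch of the checks I had in mind back then.  The method and its data are illustrative only; the screenAlive flag stands in for something like Android's Activity.isFinishing().

```java
import java.util.List;

public class DefensiveChecks {

    // Illustrative only: the classic defensive checks I once assumed
    // would cover most crashes -- null, index, argument, and state.
    static void showOrderItem(List<String> items, int position, boolean screenAlive) {
        if (items == null) {                            // null check
            return;
        }
        if (position < 0 || position >= items.size()) { // index check
            return;
        }
        if (items.get(position).isBlank()) {            // illegal argument check
            throw new IllegalArgumentException("Blank item at position " + position);
        }
        if (!screenAlive) {                             // state check for the screen
            return;                                     // don't touch a dead screen
        }
        System.out.println("Rendering: " + items.get(position));
    }

    public static void main(String[] args) {
        showOrderItem(null, 0, true);            // survives: null check
        showOrderItem(List.of("tea"), 5, true);  // survives: index check
        showOrderItem(List.of("tea"), 0, true);  // renders
    }
}
```

Each check catches exactly the exception it names and nothing else -- which is why the collection below kept growing.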

I see these checks are not enough, and a few more got added to my test strategies eventually -- race conditions, unexpected data, wrong data, environment factors, and many more.  The collection of these checks continues to grow.  Do I cover this whole collection of possible crash inducers in my testing?  No, I can't, and I won't have that luxury of time either.  Technically, I will learn and prioritize what to use and when.



How do the checks look in code, to me?

I write code for automation, which I need to assist my testing.  There, I did write such checks.  At one point, I saw that the automation code was full of checks.  Is that the right way?  Definitely not!  A professional and skilled programmer would not do that.  If a programmer had to have such checks in every layer of the app architecture, would that sound good?  Personally, as a tester, I will not design my tests that way.  As I'm not a programmer, I'm not aware of all the pros and cons of doing so.  At least I know it is not a good practice to have checks in every layer of the app's architecture.


By handling the exception in my automation code, I print the exception's stacktrace.  But will I learn from it to be a better tester?  That's the question I have asked and continue to ask myself.  The exception handling I'm doing -- is it stopping me from learning about the problem that my actions on the app have introduced?  Is that catch blocking me from learning the underlying problem in my automation code and the app?  If yes, then I have a fundamental problem to work upon, is my thought.
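Here is a minimal sketch of that trap, with a hypothetical automation step.  The first block swallows the signal; the second keeps it in front of me.

```java
public class SwallowedSignal {

    // Hypothetical automation step that fails -- say, because the
    // checkout button disappeared after a UI change or a real bug.
    static void clickCheckoutButton() {
        throw new IllegalStateException("Element 'checkout' not found");
    }

    public static void main(String[] args) {
        // Anti-pattern: catch, print, carry on.  The run looks fine,
        // and the signal that something broke is buried in the console.
        try {
            clickCheckoutButton();
        } catch (Exception e) {
            e.printStackTrace();
        }

        // Failing loudly, with context, keeps the learning visible.
        try {
            clickCheckoutButton();
        } catch (Exception e) {
            throw new AssertionError(
                    "Checkout step failed -- app bug or locator rot?", e);
        }
    }
}
```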



Why does it crash?

Why does the app crash at all?  I learned that if the app gets into a state it was not designed for, it will and should crash.  As a tester, I have to learn this state (and such states) at the earliest -- when I experience the crash and on reading the crash stacktrace.  I will be happy, and not make a fuss about it, when I'm the one who sees a crash first.  

I learn the priority and impact of the crash.  Should I invest my time in investigating it further, to provide as many details as possible to the programmer?  Or should I report it with good-enough details and continue my testing?  I answer this question for myself.  All I wish is that the user of the app does not experience the crash.  If there are crashes, as a member of the development team my intent is to keep them to a minimal number, with little or no impact.  I see the crash as a great source of learning about my work on the app.

I used to be fussy about crashes years back, be it in desktop applications, databases, web applications, or mobile apps.  Now I have come to a point where I love them: it is absolutely okay for an app to crash, and it should crash.  What we do post-crash, in fixing it, tells the bigger story.  In my work, the crashes have made the app better because the team was serious about those crashes.



What to do on having the crash?

Should the user lose the data she or he entered on experiencing the crash?  I personally don't want this to happen to me.  If it happens, I will be annoyed!  That said, how do we handle it?  That's something we will have to sit down with the programmers and the team to discuss.  

At what point in the app, on encountering the crash, should we close the app and start over?  At what point is it okay to note the crash, pull the stacktrace, and let the user continue using the app with the entered data intact?  At what point should we not show the crashing UI in view -- and what should we show instead, then resume safely from there on?  Personally, I feel making this happen is a team effort and not just a programmer's effort.  
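One building block behind such decisions, as a sketch: a process-wide uncaught-exception handler that notes the crash and saves the user's in-progress data before the process dies.  This uses plain Java's Thread.setDefaultUncaughtExceptionHandler, which exists on Android too and is how crash-reporting libraries hook in; saveDraftSomewhere() is a hypothetical stand-in for whatever persistence the team agrees on.

```java
public class CrashHandlerSketch {

    public static void main(String[] args) {
        // Keep whatever handler was installed earlier (for example, by a
        // crash-reporting library) so we can chain to it after our work.
        Thread.UncaughtExceptionHandler previous =
                Thread.getDefaultUncaughtExceptionHandler();

        Thread.setDefaultUncaughtExceptionHandler((thread, throwable) -> {
            // Note the crash and pull the stacktrace, as discussed above.
            System.err.println("Crash on thread " + thread.getName());
            throwable.printStackTrace();

            // Hypothetical: persist the user's entered data so it is intact
            // when the app starts over.  The real mechanism -- file, database,
            // saved-state API -- is the team decision discussed above.
            saveDraftSomewhere();

            if (previous != null) {
                previous.uncaughtException(thread, throwable);
            }
        });

        // Simulate a state the app was not designed for.
        throw new IllegalStateException("state the app was not designed for");
    }

    static void saveDraftSomewhere() {
        System.err.println("Draft data saved before dying.");
    }
}
```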



A few testing strategy hacks to uncover crashes

Here are a few things I do, and ask my fellow testers to do, when testing mobile apps:

  1. Use test data that checks the data integrity in the app at the entry point, during processing, and post processing.
  2. Identify the states of the app and pass invalid states to the app at the entry point, during processing, and post processing.
  3. Identify the inputs that come neither from a tester nor from a user.  Classify the inputs over which I have no control.  For example -- the incoming intent; the app responding to APIs (default values, entered values, and processed values); the app receiving responses from APIs; the device state; the app's activity lifecycle state and data/state exchange; and many more like this.  (See the sketch after this list for the incoming-intent case.)
  4. Depending on whether it is an Android or iOS app, many more strategies can be narrowed down to be specific and workable.  In the end, the time I'm left with in the test cycle, and what the business wants, direct me on what to do.
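For the incoming-intent case in point 3, here is a sketch of an Android instrumentation test that launches a screen with an intent it was never designed for.  MainActivity and the extra keys are hypothetical; if the activity reads the extras without validating them, the launch itself is where the unhandled exception -- and the crash -- shows up.

```java
import android.content.Intent;
import androidx.test.core.app.ActivityScenario;
import androidx.test.core.app.ApplicationProvider;
import org.junit.Test;

public class HostileIntentTest {

    @Test
    public void launchWithUnexpectedExtras() {
        // Hypothetical activity under test and hypothetical extra keys.
        Intent intent = new Intent(
                ApplicationProvider.getApplicationContext(), MainActivity.class);

        // The screen expects an int order id; hand it a string instead,
        // plus an extra it never asked for.
        intent.putExtra("order_id", "not-a-number");
        intent.putExtra("unexpected_flag", true);

        // If MainActivity trusts the intent blindly, the launch crashes
        // here and the test fails with the crash's stacktrace.
        try (ActivityScenario<MainActivity> scenario = ActivityScenario.launch(intent)) {
            // Surviving the launch is the minimum bar; deeper assertions
            // on the shown state can follow.
        }
    }
}
```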


Debugging and Investigation skills

There are libraries that collect the crash on an exception, along with other details such as device info, user info, and network info.  I have sat with programmers and had difficulty reproducing the crash and experiencing it in the development environment.  That said, are the logs enough to fix the crash?  Maybe; so we handle the exception and let the app's flow continue at runtime.  But did we solve the root problem that caused the crash?  No!  This is where, I feel, the skill of a tester comes in, and it is very much needed.

This skill defines what I am as a tester and the value I bring to the system.