
Monday, December 30, 2024

Testing Debt -- It Exists and Hits Every Day in All Environments


As an engineer on the team, I see the discussions and short conversations on Technical Debt.  No matter what, there will be Technical Debt in the software systems we build and deliver.

Likewise, when there is Technical Debt, there will be Testing Debt for sure.  Identifying and learning the magnitude and impact of the Testing Debt is part of my job.


My Understanding of Technical Debt


What is Technical Debt?

  • It is the cost of additional rework caused by choosing the quickest solution rather than the effective one.
  • Making decisions based on speed above all else is one factor that leads to Technical Debt.
  • Technical Debt has pros and cons.
  • Unintended Technical Debt will not come to notice immediately.
    • This is one of the challenges we have to deal with.
      • Intentional technical debt is something we are aware of from the decision made; unintended technical debt is not.
      • The impact of unintended technical debt has to be managed by every team involved in the software development.



Starting With The Testing Debt


I don't know if the term Testing Debt exists in the industry!

I have been using the term "Testing Debt" for the last 9 years in my practice.
  • I use this term to share and keep stakeholders informed about the rework we have to do as a result of Technical Debt.
  • On reading what Technical Debt is, can we relate to what Testing Debt is?
  • Most times, the testability, automatability and observability characteristics of a system will be affected as a consequence of Technical Debt.
  • Further, to compound the cascading effect, the Testing Debt comes in.
    • One of the major impacts of Testing Debt is the change in the deterministic capability seeded into a test.
      • Reworking this deterministic capability is not simple and straightforward in all cases for the changes introduced in the business operations, tech layer and infrastructure.

When I say there is Technical Debt arising from what we are doing and delivering, it also means there is Testing Debt created as a consequence.
  • To speak in terms of engineering, testing is part of technical activity.
  • It looks funny to me when one portrays the testing team as non-technical and labels it with terms.
    • For example, manual, repeatable, repetitive, etc.

It is a cycle, and repeatable to an extent.  Repeatability is one part of the engineering cycle and process.




Do You Know Your Testing Debt?


To remind you and me, testing is sampling!  How do you sample when you have a growing Technical Debt and Testing Debt?

I will share one of the common Testing Debts which, I assume, every one of us will have.

The ask for in-sprint automation and automating everything.  Is this a fair and practical ask?  That is not a line to discuss here.  But, I will tell you how the testing team will be hit by the Testing Debt in delivering on this expectation set by business and stakeholders.

The Technical Debt leads to rework sooner or later in the engineering, and it will lead to major rework in the Test Engineering.  Isn't it?

Say you have worked to build the testing infrastructure, regression suite and automation suite, and integrated them with the pipeline.  Now, there is rework and change in the system as a fix for a few Technical Debts.
  • Does this affect the testing infrastructure, regression suite and automation suite that you have in place?
    • Are we given enough time and resources for the Testing Team to fix this and keep the pipeline running seamlessly?
  • This is not just about time and resources!
    • For the change in the tech stack and engineering of the system, the Test Engineering has to adapt to it and challenge it.
      • If the Test Engineering does not challenge the engineering of the software system, we are not in a position to see the perspectives of risks and problems.
      • Eventually, we will not even be in a place to learn what the no-risk perspectives and aspects of the system are.
  • How do I manage this so that I can turn around quickly to the context and keep the pipeline smooth?
    • This is a challenge to every Test Engineer who faces the effect of Technical Debt.
We Test Engineers witness and experience the Technical Debt, which also includes Testing Debt, in different forms and intensities.

Wait!  As you read this blog post, did you try to think of the intended Testing Debt in your project?

What are the unintended Testing Debts and their impact?
  • This is not easy to identify and learn, while we can identify and learn the unintended Technical Debt to some extent.
    • It will exist; it will impact and hit all the environments, every day!



Testing Debt and Test Engineer


Note this: building Test Engineering solutions to withstand the effects and changes from Technical Debt is a skill.  It is possible.
  • For this, the Technical Debt should not damage the deterministic attribute which is seeded into a test.
  • Picking and integrating such a deterministic layer within the layers of testability, automatability and observability is the mark of a skilled Test Engineer and Test Engineering.
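To give one shape to this deterministic attribute, here is a minimal, hypothetical sketch in Python (the `order_total` contract and the numbers are mine, for illustration only, not from any specific project): the test asserts on the business contract, not on implementation details, so a rework that fixes Technical Debt underneath does not change the test's verdict for the same input.

```python
# Hypothetical sketch of "seeding" a deterministic attribute into a test:
# the assertion is on the business contract (total = sum of line items),
# not on implementation details.
def order_total(line_items):
    # stand-in for the system under test; any reworked implementation
    # must keep this contract
    return sum(price * qty for price, qty in line_items)

def test_order_total_is_deterministic():
    items = [(10.0, 2), (5.0, 3)]
    assert order_total(items) == 35.0   # same input, same verdict, always

test_order_total_is_deterministic()
```

A test written this way survives a tech-stack or infrastructure rework as long as the business contract holds; the moment the assertion reaches into internals, the determinism is lost with the first debt-fixing change.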
Technical Debt and Testing Debt are not the same.  But, the Testing Debt is the outcome of a Technical Debt and has a relation to it.  

Testing does not drive the change in how the system is implemented; or at least, I have not experienced it so far.  Wait!  The outcome of testing can change, and has changed, how the system is implemented.  These two sentences are not the same.



To end here, how do I manage myself with all these debts?  We are in a job where we have to grow together by talking, negotiating and solving the debts that are the outcome of decisions from business and stakeholders.



Saturday, February 3, 2024

Performance Testing - The Unusual Ignorance in Practice & Culture

 

I'm continuing to share my experiences and learning for the 100 Days of Skilled Testing series.  I want to keep these short, as mini blog posts.  If you see that detailed insights and conversations are needed, let us get in touch.


The ninth question from season two of 100 Days of Skilled Testing is

What are some common mistakes you see people making while doing performance testing?  How do they avoid them?


Mistakes or Ignorance?

It is a mistake when I do an action though I'm aware that it is not right in the context.

I do not want to label what I share in this blog post as mistakes.  Rather, I call it ignorance, with or without the awareness and the experience.

The ignorance said here is not just tied to the SDLC.  It is also tied to the organization's practice and culture, which can create problems.

To this blog post's context, I categorize the ignorance into two categories -- Practitioner and Organization.

  1. Practitioner's ignorance
    • Not understanding performance, performance engineering, and performance testing
      • When performance testing is said, taking it as - "It is load testing"
      • No awareness of what performance and performance engineering are
        • Going to the tools immediately to solve the problem while not knowing what the performance problem statement is
      • Be it web, API, mobile or anything,
        • Going to one tool or a few tools and running tests
      • Not much thinking on how to design the tests in the performance testing being done
      • Ignoring Math and Statistics, and their importance in performance analysis
      • No idea of the system's architecture, and how it works
        • Why is it the way it is?
      • The idea of end-to-end is extended and used in testing for performance, leading to a hard time understanding and interpreting the performance data
        • How many end-to-end flows have your tests identified?
        • Can we test for performance for all these identified and unidentified end-to-end flows?
      • Relying on resources/content on the internet and applying them in one's context without understanding them
      • No idea of the tech stack and how to utilize the testability offered by it in evaluating the performance
      • Not using or asking for testability
      • Getting hung up on the 2 or 3 tools most spoken of and discussed on the internet
      • Applying tools and calling it performance testing
      • Not attempting to understand the infrastructure and resources
        • How they impact and influence the performance evaluation and its data
      • Misjudging the saturation of resources
        • Thinking of it as a problem
        • Thinking of it as not a problem
      • Not working to identify where the next bottleneck will be when solving the current bottleneck
      • What to measure?
      • How to measure?
      • When to measure?
      • What to look at when measuring?
      • Not understanding the OS, hardware resources, tech stacks, libraries, frameworks, programming language, CPU & cores, network, orchestration, and more
      • Not knowing the tool and what it offers
        • I learn the tool every day; today, it is not the same to me compared to yesterday
          • I discover something new that I was not aware it offered
          • I learn new ways of using the tool with different approaches
      • No story in the report, with information/images that are self-describable to most who read it
      • And more; but the above resonates with most of us
  2. Organization's ignorance
    • At the org level, first and to start, it is ignorance of Performance Engineering
      • Ignoring the practice of performance engineering in what is built and deployed
      • Thinking and advocating that increasing the hardware resources will improve the performance
        • In fact, it will deteriorate over a period of time no matter how much the resources are scaled up and added
      • Ignoring the performance evaluation and its presence in the CI-CD pipeline
      • Insisting that the performance tests on the CI-CD pipeline should not take beyond a few minutes
        • What is that "few minutes"?
      • Not prioritizing the importance of having the requirements for Performance Engineering
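The point above on ignoring Math and Statistics is easy to illustrate with a small sketch (the latency numbers are made up for illustration): the mean alone can tell a very different story from the median and the tail.

```python
import statistics

# Made-up response times (ms) from a performance test run
latencies = [120, 118, 125, 119, 121, 117, 2400, 122, 116, 2350]

mean = statistics.mean(latencies)                 # smeared by the two outliers
median = statistics.median(latencies)             # the typical experience
p95 = statistics.quantiles(latencies, n=100)[94]  # the tail some users feel

print(f"mean={mean:.0f}ms  median={median:.0f}ms  p95={p95:.0f}ms")
```

Here the mean (about 571 ms) suggests a slow system, while the median (about 120 ms) says most requests are fine and the p95 lands up among the outliers.  Reporting only one of these numbers, without knowing what each one hides, is exactly the ignorance described above.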

Recently, I was asked a question - How to evaluate the login performance of a mobile app using a tool "x"?

In another case, I saw a controller holding all the HTTP requests made when using a web browser, with these requests being run to learn the numbers using a tool.


I do not say this is the wrong way of doing it.  That is a start.

But, we should NOT stay here thinking this is performance engineering and that this is how to run tests for learning a performance aspect[s].


To end, performance is not just - how [why, when, what, where] fast or slow?  If that is your definition, you are not wrong!  That is a start and good for a start; but, do not stick to it alone and call it performance.  It is capability.  It is about getting what I want in the way I have been promised and expect; this is contextual, subjective and relative.  The capability leads to an experience.  What is that experience experienced?

Sometimes, serving the requests at what you call slow is performance.  What is slow, here?

The words fast and slow are subjective, contextual and relative.  It is one small part of performance engineering.

That said, let me know, what have you been ignoring and unaware of in the practice of Performance Engineering & Testing?


Tuesday, November 28, 2023

Behind Every Test Data, There is a ?!

 

Read this blog post to get a perspective on Test Data and Test Data Management.  The point is, if I'm not aware of a test and what it tells me to explore, I cannot think of a Test Data.

That said, if I know what I should be evaluating as part of performance, and why, when and how, it will help me come up with thoughts for identifying the tests and the test data for the same.

The ninth question from season two of 100 Days of Skilled Testing is:

What role does data management play in performance testing, and how do you ensure the availability of suitable test data sets?


Testing and "Ensure"

We test and have tests in testing because there is no "sure" or "ensure" in software.  But, we presume on a rational basis that "if these hold, then this follows" in a given context when the software processes.

Now, ask yourself, how can we ensure the availability of suitable test data sets?

In my opinion, Test Data is often misunderstood.  This is the primary problem, and should be the first problem raised when asked, "what are the challenges in creating the test data?".

When you read the concluding lines of this blog post, you will learn why I say this.


Test Data and Immunity

In my opinion and experience practicing Test Engineering, I see that Test Data should be like a viral strain, and it should have its variants.  When this test data is used to test [experiment, test investigate, and debug], how do the software and its ecosystem respond?

  • Is the software and its ecosystem immune to this test data?
    • Does it exhibit any risks and problems?
      • If yes, then, is the purpose of my testing and automation accomplished with this test data?
This puts me back to the question, what is the purpose [intent] of my test?  It drives me to derive the test data which helps me to know -- What am I supposed to learn, and on priority?  With this, I get an idea of what kind of test data I should be creating, knowing its pattern.

If the system is immune to the Test Data and it is not revealing anything new in the context, I classify this pattern of test data as "Immune" to the context.

In my practice and research work in Test Engineering and Software Testing, to start, I categorize Test Data into two areas.
  1. Immune
  2. Not Immune
Further, I have categories, under these two, where I classify the Test Data deterministically for the context.   Get in touch if you want to learn more about this.  I'm just one ping away!
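A minimal sketch of this Immune / Not Immune idea can look like the following (the harness, the system under test and the variants are my assumptions for illustration, not the author's classification internals): run each test-data variant, and mark it "Immune" when the system's observable behaviour does not change from the baseline.

```python
# Hypothetical system under test: normalizes a username
def system_under_test(username: str) -> str:
    return username.strip().lower()

baseline = system_under_test("tester")

def classify(variant: str) -> str:
    # "Immune": the variant reveals nothing new -- same observable behaviour
    # "Not Immune": the behaviour differs, so there is something to investigate
    return "Immune" if system_under_test(variant) == baseline else "Not Immune"

variants = ["  TESTER  ", "Tester", "tester\u200b", "tester2"]
for v in variants:
    print(repr(v), "->", classify(v))
```

The zero-width-space variant is the interesting one here: it looks identical on screen, yet it classifies as Not Immune, which is exactly the kind of pattern such a classification is meant to surface.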

The tests should help me evaluate for immunity and also non-immunity; both are essential and a necessity.

The credit is to me for such classification of Test Data.  It is my research work out of my practice.

Note that Test Data is not just the input [characters or files] entered or given to a system.  Test Data has its association with tech stacks, infrastructure, ecosystem, business workflows and people.  To craft such Test Data, one has to have an understanding of the system and its internals, and of the problem it solves, by knowing how it solves it.



Performance Testing and Test Data

  1. What is that I'm testing as part of performance?
  2. What do I want to evaluate in the name of performance?
  3. What part of the system is evaluated for its performance?
    • Should I evaluate this in isolation or as a wholeness of the system?
  4. What domain knowledge and information should I have when testing for performance?
  5. What of the system's architecture and internal details should I understand and be aware of to test for performance?
  6. Is this the first delivery?  Or, do we have this system running in production?
    • If it is the first delivery,
      • How will I create the test data to suit the consumers of this application?
      • What are the key workflows of business that we should be evaluating for its performance?
      • Do all workflows and sub-systems need the evaluation for performance, and on priority?
      • How do I map the fragmentation of users and their data [with its patterns]?
      • What are the infrastructure and ecosystem characteristics that should be part of the test data identified?
      • Does caching have any effect if the same pattern of data is used?
    • If it is a running version in production
      • Can I refer to the DB to figure out the pattern for the particular workflow that I'm evaluating?
      • How can I match the test data to have the production data's characteristics and attributes?
  7. What is the backup strategy for the Test Data?
    • How do I version control the Test Data?
    • Which version of the Test Data should I be using?
  8. What is the threshold I'm targeting with Test Data?
    • What should be the size of the data in DB when I make the IO and RW operations?
    • What should be the network capability when I make the IO and RW operations?
    • What should be the hardware capability when I make the IO and RW operations?
    • What should be the geographical traffic and its pattern when I make the IO and RW operations?
    • More of such factors will be considered when identifying and deriving the test data.
  9. What is the client error yielding Test Data that I should have for the workflow?
  10. What is the server error yielding Test Data that I should have for the workflow?
  11. What is the redirection yielding Test Data that I should have for the workflow?
  12. What is the no-response and no-change Test Data that I should have for the workflow?
And more.  It is simple; get in touch to discuss and know beyond what is listed.
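Points 9 to 12 in the list above can be made concrete with a small sketch.  Everything here (the workflow, field names and values) is a hypothetical illustration; the point is that test data is grouped on purpose by the response class it is meant to yield, instead of hoping a random data set stumbles into the 4xx, 5xx and 3xx paths.

```python
# Hypothetical test data for an "order lookup" workflow, grouped by the
# response class each datum is crafted to yield from the server.
ORDER_LOOKUP_DATA = {
    "client_error": {"order_id": "not-a-number"},    # malformed -> expect 4xx
    "server_error": {"order_id": "9" * 10_000},      # oversized -> probe for 5xx
    "redirection":  {"order_id": "42", "api": "v1"}, # retired path -> expect 3xx
    "no_change":    {"order_id": "42"},              # idempotent repeat -> same response
}

def data_for(response_class: str) -> dict:
    """Pick the crafted datum for the response class under evaluation."""
    return ORDER_LOOKUP_DATA[response_class]

print(data_for("client_error"))
```

With such a grouping, a gap in the table is visible at a glance: if a workflow has no "server_error" datum, that response class was simply never evaluated.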



To conclude and stop here: all these questions do not ensure or assure or make sure that I will have test data for evaluating a characteristic of performance.
  • It helps me to know:
    • What are the tests I should be doing?
    • What kind of preparation should I have in my practice to create the Test Data for these tests?

The Test Data should challenge the available Testability and its limits.  If it is not doing so, then we have test data, no doubt about it; but it is shallow.  Shallow!?

One has to ask oneself, "Is this sufficient and effective Test Data for the system [and workflow] I'm testing?"

The Test Data should drive the engineering team to add more layers of Testability into the system.




Sunday, November 19, 2023

MVQT: The Testing and Tests with an MVP's Perspectives


I was leading multiple teams and their deliveries in a testing service company.  Then, I came up with this thought -- Like MVP, I also have MVT (Minimum Viable Tests) for an MVP.

Further, I expanded this thought in my day-to-day practice, tailoring it to different contexts.  I observe that it applies well to different contexts when I tailor it to them.  After experimenting with it for 10 years, I'm sharing this as a blog post.


What is an MVP?

I take this from Eric Ries.  It looks simple and precise to me.

The Minimum Viable Product is that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort.

I see this technique [and concept] can be applied to anything I'm developing.  As a test engineer, I develop tests and test code in the main as part of my testing.  On applying the idea of MVP to my testing and deliveries, I see the value and results.

Read this blog post of mine to know who a developer is.


Testing, Tests, MVP and MVQT

In software test engineering, I see the MVP as Minimum Viable Questioning Tests.


The Minimum Viable 'Q' Tests (MVQT) for a focused area of a feature [or for a feature]

  • Help me to identify the priority tests that should be executed first
  • Allow me to learn, on priority, information which matters critically to the product and stakeholders
    • So that an informed decision can be made.


The Q in MVQT stands for "questioning".  I read it as Minimum Viable Questioning Tests.  I also see the "Q" as a placeholder for a Quality Criterion.  That is, MVFT means Minimum Viable Questioning Functional Tests for a feature or a workflow.




The MVQT are key to knowing:

  • Have I identified and designed the priority tests?  How do I know that I have got them?
  • Did stakeholders get the information which they wanted to know on priority?
  • Did MVQT help me to
    • Explore and know what I wanted to know about a feature or a workflow?
      • How fast was I in knowing and learning this?
      • How did I develop my tests incrementally?  Did I?  If not, then, is it an MVQT?
  • Did MVQT help me to know
    • Am I aligned and in sync with the expectations of my stakeholders and the customers who are using the software product I'm testing and automating?
  • Did the MVQT help me
    • In collecting the critical information in a given context for the scope of testing and automation?
    • Did the learning and outcome from this MVQT help to reinforce the validated learning of the customer?
  • Do MVQT results support the outcome of the Unit Testing results?

The tests in MVQT have to be consistently revised and evaluated to keep them as MVQT.  Note this: not all tests are MVQT.  If the number of MVQT is growing for a part of a feature or for a feature, it is time to think about what MVQT is for you.

The "minimum" tests are highly effective, and they help me learn and test better, technically and socially.



MVQT and Testing

  • Sanity or Smoke Tests
    • The set of MVQT which helps me learn whether the build can be taken for further testing
  • MVFT - Minimum Viable Questioning Functional Tests
    • Apply this to a feature or a workflow, or to that part which can be evaluated with minimum tests for its functionality
      • To update whether this is aligning with the validated learning of the customer [stakeholders]
  • MVPT - Minimum Viable Questioning Performance Tests
  • MVUT - Minimum Viable Questioning Usability Tests
  • MVAT - Minimum Viable Questioning Accessibility Tests
  • MVTxT - Minimum Viable Questioning Tester's Experience Tests
  • MVST - Minimum Viable Questioning Security Tests
  • MVAF - Minimum Viable Questioning Automation to a Feature
  • MVLT - Minimum Viable Questioning Localization Tests
  • MVUIT - Minimum Viable Questioning UI Tests

You add more of this to your list and context.

In a way, MVQT should ask and look for testability, automatability and observability.  If this is not happening, then there is no possibility of saying I have got my MVQT.

More importantly, in the CI-CD ecosystem, MVQT plays a major role.  If I should have my tests in the CI-CD pipeline, then MVQT is the way, as it focuses on a targeted area to evaluate.  Else, it is hard, impractical and not possible to test in the CI-CD ecosystem while delivering continuously.
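One way to keep MVQT as the targeted set the pipeline runs, sketched minimally in Python (the registry approach and the test names are my assumption, not a prescribed tooling): tag the minimum viable questioning tests, and let the CI-CD stage execute only that set.

```python
# Hypothetical MVQT registry: the pipeline stage runs only tests tagged here.
MVQT = []

def mvqt(test_fn):
    MVQT.append(test_fn)      # register as a minimum viable questioning test
    return test_fn

def authenticate(user, password):
    # stand-in for the system under test
    return password == "right-password"

@mvqt
def test_login_priority_flow():
    # the priority question for the feature goes here
    assert authenticate("user", "right-password") is True

def test_login_exhaustive_edge_cases():
    # valuable, but deliberately outside the minimum viable set
    assert authenticate("user", "wrong-password") is False

# The CI-CD stage executes only the registered MVQT:
for test in MVQT:
    test()
```

If this registry keeps growing for a single feature, that is the same signal the post mentions: time to revisit what MVQT means for you.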


Ask and Review for MVQT

Ask for MVQT, when you review these:

  • test strategy, test framing, test design, test ideas, test cases, test plan, test architecture, test engineering, testing center of excellence, and test code

For example,

  • What are the minimum viable questioning performance tests that you have got to test this feature?
  • What are the minimum viable questioning performance tests that you have got to test this workflow?
  • What are the minimum viable questioning security tests that you have got to test this feature?
  • What are the minimum viable questioning GUI tests that you have got to test this feature?
  • What are the minimum viable questioning contract tests that you have got to test this endpoint?
Likewise: What are the minimum viable questioning automation tests that you have got to test this feature?

Ask how these tests qualify as MVQT in this context of testing and automation.

This should help you see how effective the test strategy is in a given context.

Importantly, the MVQT and its effectiveness are a form of testability to test your tests.



The Credit is to Me

I'm not sure if the idea I'm sharing in this blog post is practiced by other test engineers.  I have not seen it being discussed in public forums.  I have not come across it within my awareness and the exposure I have put myself to.

Hence, I will take the credit for this.  Giving credit honestly is not a common sight and practice.  I have not got my due credit for the ideas, thoughts and work that I have come up with.

So, I make this an open letter and call out that the credit for this idea, thought, concept, and practice will be mine when you listen to, use and practice it.



Monday, October 16, 2023

Performance & Tests: Getting Started and Data Analysis

 

On running tests,

  • We will have data (information) as one of the byproducts.
  • Analyzing the data of the integrated sub-systems in isolation and in correlation,
    • It will lead us to a technical analysis of each integrated system.
In the report, we draft this analysis along with the actions to be taken.

Note: When sub-systems are said, do not ignore or skip the client or consumer; the system does not comprise just the server.


No Golden Rule

There is no one way to do testing.  Likewise, there is no one way or golden rule to test for performance.  It is contextual and depends on what I want to learn.

In fact, in a few contexts, we can have a value-adding performance test with just one request.  I should just be well aware of -- what is it that I want to know and learn from this test?

That said, there are multiple interfaces where we can observe, analyze and learn from the performance data collected.

The fourth question from season two of 100 Days of Skilled Testing is:

What are your favorite hacks to analyze performance testing results and find anomalies?

Well, this question does not mention explicitly if it is for the server or client or database or caching or messaging, or for what interface of a system.  It is a question; but, to me it looks too generic, and at a point it looks vague.  Having said this, that is how the learning journey and curve start!


Result vs Report

What is a result?

  • Is it an evaluation after data [information] is put to scrutiny?
  • Or, is the result data that is collected and not yet interpreted?

It depends on the individual or team, and how it is being practiced.

The result is different from a report.


Getting Started and Data Analysis


I should know how the system architecture is designed and orchestrated, with its boundaries and interfaces.  This helps a lot.  What kind of architecture is this?  Is it a monolith?  If it is a monolith, my approach to testing for performance differs.

If I'm asked to start the analysis of data for a system that I'm not aware of,
  • I will start by analyzing the below indicators, on knowing the architecture and the orchestration of the sub-systems for critical business workflows
    1. CPU usage
    2. RAM usage
    3. Data I/O
    4. Network usage
    5. The heat and sound dissipated from the hardware which holds and binds
      • CPU, RAM, Data I/O, Network and the tech stacks installed and configured
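The first few indicators can be sampled even from the standard library.  A minimal, Unix-only sketch (an assumption of mine: Python's `resource` module, which is not available on Windows, and it measures the test process itself rather than the whole host):

```python
import resource

def snapshot():
    # Sample CPU, memory and block I/O counters for this process
    r = resource.getrusage(resource.RUSAGE_SELF)
    return {
        "cpu_seconds": r.ru_utime + r.ru_stime,  # user + system CPU time
        "peak_rss": r.ru_maxrss,                 # KB on Linux, bytes on macOS
        "block_io": r.ru_inblock + r.ru_oublock, # filesystem block in/out
    }

before = snapshot()
_ = sum(i * i for i in range(1_000_000))   # stand-in for the workload under test
after = snapshot()
print({k: round(after[k] - before[k], 3) for k in before})
```

For host-wide indicators across sub-systems, a monitoring agent or a library such as psutil is the usual route; the point of the sketch is only that the sampling must bracket the workload, so the deltas can be correlated by timeline.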

It hints me to look further and test investigate when I observe:
  • A steady consumption
    • What is steady consumption in this context?
  • A low consumption
    • What is low consumption in this context?
  • An unusual consumption spike and the fall of it
    • I follow the pattern to study further
    • What is considered a knee, spike and fall in this context?
  • A zero consumption
  • A maximum consumption
    • What is maximum consumption in this context?
A high consumption doesn't mean a problem.  Likewise, a low consumption does not mean all is well.  I have to uncover them to learn what they mean in the given context.
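The spike-and-fall observation above can be sketched with a crude, hypothetical rule (the samples and the 3x-median threshold are made up for illustration; real analysis would be more careful):

```python
def spikes(samples, factor=3.0):
    """Flag indexes whose value is far above the median of all samples."""
    ordered = sorted(samples)
    median = ordered[len(ordered) // 2]
    return [i for i, v in enumerate(samples) if v > factor * median]

cpu_percent = [22, 25, 24, 23, 96, 25, 24, 0, 23, 24]  # made-up samples
print(spikes(cpu_percent))   # index 4 is the spike to follow up on
```

Note that the zero at index 7 is not flagged by this rule, which is the point of the "zero consumption" bullet: the absence of load needs its own investigation, not a spike detector.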

In each of these, there will be a pattern.  I will learn them.  I will correlate with other sub-systems and learn what they were doing in the said timeline.

Do you recollect this line -- "the architecture should provide the Testability"?
  • I wrote about it in one of the blog posts of Performance Engineering.

I refer to the below, traversing along the timeline,
  • The logs, by asking for them
  • Data recorded
  • Any APMs that are in place
I correlate all these with the above said indicators.

This gives me a start.  It is one of the easiest starts that I can have to get going with analysis.


Well, this is to analyze at the server end.  What about the client [consumer] end?  It is simpler, and I will share it in the coming blog posts.



Do you want to know more on this and other strategies that can be used contextually?  Let us get connected and converse.  I'm happy to share and learn on listening to you.  It is fun and awareness!



Thursday, October 5, 2023

Architecture: The Common Shared Understanding -- Part 1

 

When we are developing a software system, the requirement from a stakeholder is not 'Fast' or 'Scalable' or 'Responsive'.  But, the stakeholder needs and expects these.  If you see the larger picture, software system development and maintenance is a job of balancing too.


When a Software Architect [Technical Architect and Test Architect], works on architecting the software system and testing for the same,

  • It is about balancing the technical aspects with the business's requirements from stakeholders.  
    • Do you see that?


Knowing the architecture of a software system and the testing of the same is one of the primary tasks for engineers on the project.  Because we software engineers have to balance it well.  Balance, what?  Balancing the technical aspects together with the business's requirements from stakeholders.


This blog post is part of the 100 Days of Skilled Testing series.  The second question posted for season 2 is,

How important is the understanding of application architecture to do performance testing better?

 

What is an Architecture?

In the context of Software Development & Engineering, the word "architecture" is one of the ambiguous words among the teams in a project and an organization.

As a test engineer,

  • Did you ever have a discussion or argument or debate with a programmer and an architect?
  • I had such discussions, and I continue to have them today as well in the projects that I work on.
    • This is to know and understand
      • What I should be doing as an engineer first, and as a Test Architect in the role


The outcome of these discussions showed me,

  • We all did not have a common understanding of it
    • We did not share,
      • "What have I understood of architecture, and of this architecture?"

The primary goal of a Software System's Architecture is,
  • That we engineers on a project all have the same understanding of it, in the aspects it exists for.
  • This understanding is arrived at after we have put our thoughts to scrutiny and decided that we stick to it, so that,
    • We can balance well between the technical requirements and the business expectations.
Are you with me, so far?



A Software System's Architecture is,
  1. A common shared understanding that we all have of,
    • What are we developing, testing, and about to maintain?
      • And, Why? Who? When? Where? How?
  2. A representation of the boundaries and interfaces of what matters,
    • That is [to be] orchestrated, designed, implemented, how it communicates, and what it will have, and not.
    • It can also show how the teams are structured and how the team and organization are organized.
      • For example, in a Service Oriented Architecture, the teams are built and structured with respect to the service they offer.
    • It is a model that is better than other models in a given context of technical requirements and business needs
    • It is a model that is better than other models in a given context of technical requirements and business needs
  3. The context and awareness for,
    • Why is it the way it is?
      • The cost and value of being so.
    • What to do when it has to be changed?  Where to change?
      • How simple and quick is it to change?
        • What are the cost and value of being so?
    • How can I monitor and observe all these consistently?
  4. The Gateway of Testability - it tells what Testability is available for,
    • Letting us know what is critical and a priority to test
    • Where, How, When, and What tests can be framed, designed, and executed?  To what extent?
      • Why these tests?
      • If an architecture does not talk about and does not have the Testability, we have a serious problem!  This has to be fixed first, on priority.
        • An architecture has to provide the Testability and Programmability scope and opportunities to develop a software system that is of value!

For today, this is my understanding of "Architecture".

I'm a Test Architect by role, and I expect myself to be a hands-on engineer first.  It is a necessity for an architect to be a hands-on engineer.




Note: Read the Part 2 of this blog post here.


Architecture: Its Aid in Performance Engineering -- Part 2

 

I hope you and I have a common shared understanding of the word "architecture".  If not, read this blog post and then come back here to learn about the dots.


Do I Know the Dots?

Before connecting the dots, I should know,

  • What are the dots and how to identify them, where and when? 
  • Who can help me in doing so?

In Software Security & Engineering, we use a Threat Model to,
  • Identify the risks, surface area, and tests, and to develop the payloads. 
  • A software system's architecture will help in developing and improving the Threat Model consistently.

I see the same for a software system's Performance Engineering & Testing.  To test better for performance,
  • I need to identify the dots, risks, surface area, tests, payloads, monitoring aid and correlation of all these.
  • The understanding of software system's Architecture is a necessity to do so.
    • But, what are the dots here?

The dots can be identified when I know how to use the Testability provided by the architecture.  This leads me to evaluate the performance for the boundaries and interfaces in isolation and as a whole, and then correlate.
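To make this concrete, here is a minimal Python sketch of evaluating the boundaries and interfaces in isolation and as a whole, and then correlating the two measurements.  The service names and their shapes are invented for illustration; they are not from any real system.

```python
import time

def timed(fn, *args):
    """Measure the wall-clock latency of one call, in milliseconds."""
    start = time.perf_counter()
    result = fn(*args)
    return result, (time.perf_counter() - start) * 1000

# Hypothetical boundaries; in a real system these come from the architecture.
def auth_service(user):
    return {"token": "t-" + user}

def catalog_service(token):
    return ["item-1", "item-2"]

def end_to_end(user):
    token = auth_service(user)["token"]
    return catalog_service(token)

# Evaluate each interface in isolation ...
_, auth_ms = timed(auth_service, "asha")
_, catalog_ms = timed(catalog_service, "t-asha")
# ... then the flow as a whole, and correlate the two views.
_, whole_ms = timed(end_to_end, "asha")
```

The per-boundary numbers tell where the time goes; the whole-flow number tells what the user experiences; correlating them shows the cost of the boundaries themselves.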


With this, I am put to a question - what is the performance of this architecture?
  • I did not say the performance of the software system; I said, the architecture.  
  • There should be some characteristics to identify and evaluate the performance engineering models offered by the architecture. 
    • What are they?  

The architecture's characteristics help,
  • To identify and distinguish the dots with ease, and to test better.
    • How can I test for the performance aspects of a software system using the architecture's characteristics?
    • How do I identify these characteristics in the architecture of software system?



The Characteristics of an Architecture


Last year I read an article from ByteByteGo System Alliance.  This article flashed in my mind as I read the question,
How important is the understanding of application architecture to do performance testing better?
This is one of the articles which I refer to, to identify the performance characteristics of an architecture.  I refer to the cheatsheet shared in this article for my reference.

In a few projects and organizations that I have worked with, most of these characteristics were put into practice and into the production environment.  I monitored them under usual traffic and unexpected traffic.  The feel is something I cannot describe in words; one has to experience it.



Performance Engineering Aided by Architecture


Which characteristics of an architecture are associated with which boundaries and interfaces of a software system?  Knowing this helps you and me in thinking - what has to be tested for performance at this interface, in this boundary?

I want to share my work experience here.  But I see that if I share something which we all can relate to, it will help in knowing - why is it important to know about the architecture to do performance testing?

Below is a recent use case from the software industry for the same.
  • I will not explain it in detail; but I will bring the key points into the context of this blog post


Amazon Prime's Audio-Video Monitoring Service Moving to Monolith Architecture

What You and I Should Know:
  • The complete Amazon Prime system did not move to Monolith.
    • The Prime Video's Audio-Video Monitoring Service moved to Monolith.
      • Why?
        • This monitoring system, which was orchestrated with a Microservice architecture, did not scale beyond a limit
        • Problem Statement:
          • Say, the Prime Audio Video Monitoring Service expected a load of 100 active concurrent users streaming the movie Kantara with Kannada audio.
          • After the 6th user started streaming the same video [in the same or a different audio language], this monitoring system did not scale to include the remaining 95% of concurrent users. 
          • As a result, the Prime Audio Video Monitoring Service slows down [or stops], and eventually the video streaming to the active concurrent users takes a hit. 
          • This monitoring service is important so that each user gets video and audio of the agreed-upon quality and streaming.
What I understand is,
  • This monitoring service was kept in production while the team looked for a better solution in performance with the given architecture.
  • While it did so, the cost of having this architecture was high when it had to scale up.
  • It looks like the Prime Video business bore this cost for some time.
But the online streaming business cannot settle and agree to pay a high cost, while it is planning to stream live sports action in the coming days.  A need arose to look into the performance characteristics of the monitoring service's architecture in use.

It eventually re-orchestrated the existing components with a new architecture in place.
  • It moved from the distributed microservice system to a monolith system, where the spawning of Amazon Step Functions (error detector clusters) happened vertically.
  • Along with this, the architecture of this monitoring service was laid out in a way such that most of its components came into one process.
  • This eliminated the S3 bucket as intermediate storage for video frames (as images) and audio files.
  • This architecture helped the creational, behavioral, structural and functional characteristics of Prime Video's Audio-Video Monitoring service.

Prime Video says that upon testing for performance and changing to a monolith by rearranging the existing components,
  • It saved 90% of the cost.
  • 90% of cost for Amazon - what is that in numbers, in Indian Rupees? 
  • How much would a tester get paid if just 1.5% of this 90% were paid as a salary per month?
    • I will leave this to your thoughts and calculation.

If I know the architecture and where to look for what characteristics, it helps me to think of right performance tests for the context.  

Hotstar's emoji introduction in live cricket matches during 2018, and its consistent improvement in processing for performance, is another good use case for the question -- why is it important to know about the architecture to do performance testing?




To conclude, architecture cannot be ignored in Testing.  It plays a critical role in aiding and identifying the testability and the BCFS (behavioral, creational, functional and structural) characteristics of a software system.


Wednesday, February 15, 2023

When to Start the Automation in Software Testing?


The Question of a Decade?

Today, as I draft this post, the calendar date is 14th February 2023.  If I look back 10 years and ask myself what the questions in and around Software Testing and Automation were, I see this question.  What is that question?

When to start the automation?

We answer, hear, read, and discuss this question, today too!

Often, the opinion that comes out is, ".... to automate when it is stable".  Note that it is an opinion, not an answer or a fact accepted universally.


To Start Automation when it is "Stable"!?

My learning is,

Do not think of starting the automation when it is "stable".

The "stable" is an assumption we tend to believe by the outcome of using the system.  The binaries are never "stable".

The binaries appear not to show any risks and problems for the way one is using them in a context.  To be more precise, we are not seeing the risks and problems that the binaries are showing us in other dimensions.  That is, the dimensions that we are not aware of, or the dimensions that we are not focusing on.


When to Start the Automation?

I learn,

Start the automation, when the system is testable!

This leads me to the questions:

  • What is testable?
  • When it is testable?
Understanding testability helps me to learn and identify its child attribute -- automatability.  That is, understanding testability helps me to learn the order of "testable".

Testability does not mean "stable".  Testable does not mean "stable".

But the assumption "stable" means there are some characteristics of testability, automatability, and order of testable.

Automate when you learn it is testable and you identify a layer of testability.  This helps to pick the better seam [that is, the appropriate layer(s)] for automation in a given context.

Keep the automation structure ready, so that the intent of a test can be expressed via code as we identify [a layer of] testability.

Maybe this is what people call LEFT or SHIFT LEFT or START LEFT.  Or, it could be out of the SHIFT LEFT BOX!



Thursday, November 17, 2022

Testability Revisited

 

I read the below question on The Test Chat's Telegram group.

When you start working on a project, what steps do you take to establish the testability of the product?

This question is helpful in learning how we see the Testability of a product.  It is a common perception to see the product with its testability and then to test the product using that testability.

But, in reality, testability is associated more with the tests; the tests which are used to test a product being developed or already developed.

So, when we talk about testability, we need to be more aware of the test that we will be designing to test the software.  This test should be quick and easy to execute with the help of the programmed or available testability factors and their attributes.

You can find more blog posts in and around testability here in Testing Garage.  Testability in software engineering and systems is one of my research areas.


Testability


I understand Testability as

  • How easy it is to test by an engineer
    • In a given context and skills of an engineer
    • With the test approach and strategy being used

Note: The context can keep including factors as we add more and continue to test

It is not about whether one can test the software or not.  It is all about software that is easily tested.  How easy?  That is one of the testability factors in software design and programming.



Test and Testability


Unless I know the test, I will not be certain about the Testability.  Testability does not drive tests.  It aids the execution of the test and it is a heuristic.  If the test is designed well to the context and if the testability is used well in the test's context, the execution of a test can be quick and easy.

The tests
  • make use of available Testability
  • help to strengthen the Testability
  • add more Testability in different seams/layers of the engineering and product

From here, it works both ways; the tests and the testability will complement each other.  Further, this leads to developing and including more specific and deterministic tests and testability types in the respective seams/layers.



Testability and Automatability


Testability can be classified further into several categories.  Based on the purpose and what is to be evaluated, we have to identify Testability in the respective categories and use it.

As a software engineer, one is bound to think of testability in terms of software programming and infrastructure.  But testability in software engineering is not bound just to software programming.

Testability is diverse and available across engineering activities.  It is used in all engineering activities.  Maybe a software engineer who is hands-on with programming and testing infers Testability, most times, as programming and infrastructure.

I see that Testability always exists to an extent.  But can it be identified and used in the way I approach, design, execute and evaluate my test?  That is the point to explore.

If it is testable to some extent, then we are using some Testability attribute(s).

If there is Testability in a seam/layer, then there is Automatability in that seam/layer to an extent.

If it is automatable then there is some attribute of Testability in that seam/layer.  Again, the question comes to knowing and learning -- What am I testing and automating? Why? How? When? Where?

This discovers seams/layers to test and automate.  It leads to identifying the tests.  Then, to identify and build more Testability and Automatability.

A written program feature essentially will have an automatable characteristic and space.  If it is automatable, then it is making use of and extending the testability.

In summary,
  • Know the test to know and identify the Testability better
  • Know the Testability to automate better
  • Know the Automatability to assist your testing better.


Context-Free Questions to Identify Testability


To know and have better Testability, here are a few things that I will want to know:
  1. What is the test?
    • What am I testing and what am I supposed to test?
    • How is this test designed to learn and evaluate the system?
    • What are the data, states, and events that I'm experimenting with, exploring, and experiencing as I test this system with the help of this test?
    • How can I make this testing quick and easy?
      • What should I use to make my testing quick and easy with this test?
        • How should I use it to make my testing quick and easy with this test?
        • When and where should I use it to make my testing quick and easy with this test?
  2. Why am I testing this?
  3. What happens if I test this and do not test this?
  4. What is the value loss I will incur if not tested?
  5. What is the value loss I will incur if I do not understand and learn the outcome of the test?
  6. What changes the dimensions of my tests?
  7. How can I learn the product better from this test?
  8. What information am I learning from this test?
  9. What information, heuristics, and Oracle help me and stakeholders to analyze and decide better?
  10. Do I actually know the product from the perspectives of
    • tech
    • business
    • user
    • risks
    • problems
    • protocols
    • guidelines
    • environment
    • money
    • benefits
    • exploitations
    • team developing it
    • and, more that I can add to the context of the product and project


To summarize, know the test and know how the test is designed.  It helps to identify better testability at the right layer/seam of the software system and engineering.  If there is no effective testability at that seam/layer, it helps to build one.  That way, the automatability also gets built in that seam/layer if the team collaborates well.



Monday, September 12, 2022

Testability: More About it from the Programming Literature

 

My friend Parimala Hariprasad gifted me the book Essential Skills for The Agile Developer, authored by Alan Shalloway, Scott Bain, Ken Pugh, and Amir Kolsky.  Thank you, Parimala, for gifting this book.  I'm experiencing the value of this book and using it.

In this post, I'm sharing the content shared in Chapter 3 of this book. It is about Testability and how it improves the code quality.  


Why this Blog Post?

I continue to read Software Testing literature.  I understand the below as primary skills for a Software Test Engineer's practice:

  • Identifying the Testability attribute in the system
  • Mapping and classifying how the available Testability attribute can be used in Tests
  • Asking for the Testability attribute
With that, I understand Testability as "how easy it is to test by a test engineer in a given context".  If you notice, this is from the Software Testing literature.  And I see it has these three elements, which tell the prominence of each:
  • How easy it is to test?
    • what factors make it easy to test?
    • how do they make it easier?
    • how do they bring the deterministic character?
    • how can I isolate the observations in my analysis with the help of the deterministic character and the aid added?
  • By a Test Engineer's
    • awareness, experience, learning, applying the skills, and more
  • In a given context
    • time, people, environment, availability, and more
If any of these three elements has trouble, it has its effects on the test and the testing.  If you ask what effects, I don't know them all.  If I pick one effect from my own case to share: I was not very sure what was happening, though the product looked to do what was expected.  But would it continue to do what it is expected to do, and in what all ways?  I had no answer for in what all ways and in what contexts.  This is one such case of how the absence of Testability, or not using it, can leave the tester unsure about the learning made with the help of a test.

The book I mentioned here gives another perspective, from the Computer Programming literature.  It talks at the fundamental level, and I see this is important for us Test Engineers to understand.  Soon, in the coming days, we Test Engineers will be working and testing in these layers of product development. 

In the next section, I will share the lines from the book as is, in italics and blue font color.  The credit goes to the authors of this book; I'm taking the text as it is from the book.  I will then share my interpretation of the same and see how the Computer Programming and Software Testing literature relate.  

Note: The credit is to James Bach for the Testability definition used above.  I added "the tester and context" to it as these two influence the Testability and outcome of using the Testability to a greater extent.


Testability and Code Quality

The authors of the book say, "testability is highly correlated to the code qualities we want to manifest, in particular, loose coupling, strong cohesion, and no redundancy."  Further, they illustrate the remarks one makes at the start of testing one's code:

I can't test this code; it does too many things that are so intertwined -- weak cohesion

I can't test this code without access to dozens of other things -- tight coupling

I can't test this code; it's been copied all over the place, and my tests will have to be duplicated over and over again -- redundancy

I can't test this code; there are too many ways for external objects to change its internal state -- lack of encapsulation


Then I read this line from the authors: "Gee, I wish they had thought of how this code was going to be tested while they were writing it!".  That is a question every one of us has to ask ourselves about the work we deliver, and not just for programming.  

Alan Shalloway says he is kind of slow sometimes because it took him some time to realize this -- I should consider how my code is going to be tested before writing it! 

Testability is related to loose coupling, strong cohesion, no redundancy, and proper encapsulation.  Another way to say this is:

  • the tighter your coupling, the weaker your cohesion; 
  • the more your redundancy and the weaker your encapsulation, the harder it will be to test your code
Therefore, making your code easier to test will result in loose coupling, strong cohesion, less redundancy, and better encapsulation.  This leads to a new principle -- considering how to test your code before you write it is a kind of design.

Since testability results in so many good code qualities and since it is done before you write your code, it is a very highly leveraged action.  That is, a little work goes a long way; it is a great trim tab.
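As a small illustration of this principle, here is a hedged Python sketch; the pricing example and every name in it are mine, not from the book.  Thinking about the test first pushes the code toward an injected dependency, which is the loose coupling the authors describe.

```python
# Harder to test: the price source is welded into the function (tight coupling).
PRICES = {"basic": 100, "pro": 250}

def invoice_total_coupled(plan, months):
    return PRICES[plan] * months  # a test can only run against the real table

# Easier to test: the dependency is injected, so a test can supply its own
# deterministic price source -- testability considered before writing the code.
def invoice_total(plan, months, price_lookup):
    return price_lookup(plan) * months

# A test supplies a tiny, controlled lookup instead of the real table.
total = invoice_total("pro", 3, lambda plan: {"pro": 250}[plan])
```

The second form needed no extra machinery; deciding how it would be tested before writing it shaped the design.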


I and Testability


I try to understand and learn about Testability every day in my practice.  When I started my career 15 years back, I learned from my network that one of our fellow testers in the community, Meeta Prakash, did her Ph.D. in Testability.  I wanted, and still want, to read Meeta Prakash's thesis.  I hope she will find it and give it to me one day soon.  In those days, I referred to the slides of James Bach on RST; those legacy slides with content filled in blue color. 

From there, I tried looking into the testability of what I test and what programmers deliver to me.  When I worked with Moolya in 2012, I realized from my practice that the context and the skill sets of a tester matter in making use of the available testability, in identifying whether it is present or not, and to what extent.  I added this to James Bach's definition and shared it with the fellow testers whom I was mentoring and working with.


Relating the Literature and Interpretation


When the programming literature talks about testability, I see it is talking about:
  • internal aspects of how the code is programmed, and testing the same easily in isolation and in integration for the context, while being deterministic
The words used in the programming literature are more programming-oriented.  What we see in the Software Testing literature is more of a common man's words.  But what both mean is the same; the difference between them is which layer and aspect they refer to, how, and why.

The Weak Cohesion
  • It will be an obvious experience for a tester when it is difficult to speculate and pull out a particular observation, with more information, for a feature or a user flow
    • For example, if the Refresh Token is used along with the Auth Token everywhere, then it is tough to isolate when the Refresh Token is used and when the Auth Token is used
I feel the same when wanting to test a piece of code in isolation from other code.  I have experienced this when testing one aspect of a utility, or a complete utility, in isolation from the rest of the automation code.


The Tight Coupling
  • I could not test the mobile apps as I needed data
  • Certain data came from a portal that was also under development and depended on APIs to work
  • The APIs were under development till the last day of release and did not deliver the endpoints to the portal and mobile apps teams
So how could the test team create data to test the mobile apps, the web portal, and the APIs themselves?  If you see, this is tight coupling at the product development level.


The Redundancy
  • In one project, I had to log in each time to see the status of a session
    • All tests were programmed in a way that required me to log in each time
  • The test team used the login function in every test and it was duplicated
  • On each sign-in, the Auth token changed, which led to difficulty in debugging and isolating the problem
This complicated the test code and also messed up debugging.  The tests could not be deterministic here.

I see that a static Auth token, or a one-time login reusing the same Auth token across all other tests in the suite, could have helped to debug the problem and find where it occurred.
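A rough Python sketch of the difference; the login function here is a stand-in, not the project's real code.

```python
import itertools

_counter = itertools.count(1)

def login():
    """Stand-in for a sign-in endpoint that issues a NEW auth token per call."""
    return f"token-{next(_counter)}"

# Anti-pattern: every test logs in, so every test runs under a different
# token, and an observation cannot be pinned to one session while debugging.
tokens_per_test = [login() for _ in range(3)]

# One-time login: acquire the token once and reuse it across the suite,
# which keeps the session deterministic and the failures easier to isolate.
shared_token = login()
tokens_shared = [shared_token for _ in range(3)]
```

With the shared token, a failing request can be traced to one known session; with per-test logins, the token under investigation keeps changing.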


The Lack of Encapsulation
  • My team had a tough time when it started to use an existing automation codebase
  • It had the public access modifier for all methods in all packages
    • The team picked it up and authored more tests that changed the data and states
  • This let any caller of a method modify data or state that was not supposed to be modified at all
  • The debugging led us here, and it was not a problem with the product
    • It was a problem with the product's automation code and how the tests changed data and state, which was in turn used by other tests
This led to much more chaos, as the automation and testing environments were the same: the invalid bugs; the meetings scheduled to discuss them, with time going into meetings that ended with no use; and a couple of releases that came down to a decision, should we deploy or not; and more.
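A minimal Python sketch of the idea, with illustrative names: encapsulating the shared state stops a test from mutating what it should only read.

```python
class SessionData:
    """Encapsulated shared state: tests can read it, but cannot reassign it."""
    def __init__(self, user_id):
        self._user_id = user_id  # underscore marks it as internal by convention

    @property
    def user_id(self):
        return self._user_id  # read-only: no setter is defined

shared = SessionData("user-42")
try:
    shared.user_id = "user-99"  # a careless test trying to mutate shared state
    blocked = False
except AttributeError:
    blocked = True              # the read-only property rejects the write
```

Had the automation codebase guarded its state like this instead of making everything public, one test could not have silently corrupted the data another test depended on.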



Continuing the Unlearning and Learning of Testability


If you see, Testability has multiple dimensions in the dynamics of software development.  Testability is not just about Programming and Testing.  It can come from the environment, the project, the people, what we understand and how we use it further in our work, and the business itself.

I continue to unlearn and learn testability every day as I practice testing and automation. 



Wednesday, January 26, 2022

Automation Strategy - How to Automate on Web UI Table for the Data Displayed in Table

 

Use Case to Automate and a Problem Statement


I read the below question in The Test Tribe's Discord community server.  As I read it, I realized we all go through this when we think of automating a use case.  The credit for this question goes to Avishek Behera.


Picture: Description of a use case to automate and a problem statement

Here is the copy-paste of the use case and problem statement posted by Avishek Behera:

Hello everyone, here is a use case I came across while having a discussion on automating it.

A webpage has a table containing different columns,let's say employees table with I'd, name, salary , date, etc

It also has pagination in UI, we can have 20 rows in one screen or navigate to next 20 records based on how many total records present,it could go about 10 + pages and so on....


Problem statement:

How to validate the data displayed in table are correctly displayed as per column header , also correct values like names, amount etc. Use case is to validate data.

The data comes from an underlying service, different endpoints basically.

Now it's not about automation but about right and faster approach to test this data.

What are different ways can we think of?

I know this is a basic scenario but since I was thinking of different possible solutions.

One way my friend suggested to use selenium, loop through tables ,get values ,assert with expected. Then it is time consuming, is it right approach just to validate data using selenium?


 These are the useful attributes of this question:

  1. It had the context preset, and gave the context information to the reader
  2. The availability of context information gave an idea of
    • What the API would look like
    • The request and type
    • The response and details
    • The consumer of this API
  3. It helped to imagine and visualize how the data would be interpreted by consumers to render it
  4. I get to see what Avishek is looking for in the said context


Interpreting the Use Case and Problem Statement


What it is?
  • Looks like the consumer is a web UI interface
    • The mention of the Selenium library supports this interpretation
  • The response has data which is displayed in a table-like web UI
  • There can be anywhere from no data to multiple rows of data displayed in the table
  • Pagination is available for this result in the UI
    • Whether pagination is available in the API request and response is not clear from the problem description
    • 20 rows are shown on one page of a table
    • The number of pages in the table can be more than one
    • The response will have a filter flag
      • I assume that data displayed in the table can be validated accordingly
    • The response will have the data on the number of result pages
      • This makes the result on a page of a fixed length
      • That is, 20 results on each page, and I cannot choose the number on the UI
      • The response will have the offset or a value that tells the number of records displayed and/or returned in the response
  • Is it a GET or POST request?
    • This is not said in the problem description
    • But from the way the problem is described, it looks like a GET request
    • But should I assume that it is an HTTP request?
      • I assume it for now!
  • I assume the data is received in JSON format by the consumer
  • I assume the data returned by the endpoint or the service is sorted
    • The consumer need not process the response, sort, filter, and display
    • If the consumer has to process the response, then filter, sort and display, 
      • it would be a heavy operation on the client and on the client-side automation for this use case

Even if it is something other than an HTTP request, it should not matter much.  The underlying working and representation may remain similar to HTTP requests and responses, unless the data is transferred in a binary format.
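To make the assumptions above concrete, here is a guessed shape for one page of the response.  Every field name and value here is an assumption for illustration, not Avishek's actual contract.

```python
# A guessed shape for one page of the employees endpoint; the real contract
# may differ -- these field names are assumptions, not the actual API.
page_response = {
    "page": 1,
    "page_size": 20,
    "total_pages": 11,
    "sort": {"by": "id", "order": "asc"},
    "data": [
        {"id": 1, "name": "Asha", "salary": 50000, "date": "2021-04-01"},
        {"id": 2, "name": "Ravi", "salary": 62000, "date": "2021-06-15"},
    ],
}

# If the provider sorts and paginates, the consumer only renders --
# which is what makes UI-level validation of every row unnecessary.
ids = [row["id"] for row in page_response["data"]]
is_sorted = ids == sorted(ids)
```

Whether keys like `total_pages` and `sort` actually exist in the response is exactly what the tests later in this post should find out.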



Automation Strategy for the Use Case


The key question I ask myself here is:
What is the expectation from automating this use case?

How I automate and where I automate are important, but they come after answering the key question.  The key is in knowing:
  • What is the expectation by automating this use case? 
  • What am I going to do from the outcome of this automation? 
  • What if the outcome of the automation gives me False Positive information and feedback?

These questions help me to see:
  • How should I weigh and prioritize the automation of this use case?
  • How should I approach automating this use case to be close to precise and accurate with deterministic attributes?
  • What and whose problem am I solving from automating this use case?

This gives me a lead into different approaches for automating the same, and helps in picking the best for the context.

That said, the use case shared by Avishek Behera is not a problem or a challenge with the Selenium library or any other similar library.  Nor is it a problem or a challenge with the libraries used in the automation of web requests and responses.



Challenges in the Problem Statement


I do not see any problem in automating the use case.  But there are challenges in approaching the automation of this use case.

On the web UI, if I automate on the data returned, filtered, sorted, and displayed, it is a heavy task for automation.  Eventually, this is a very good candidate to soon become a fragile test.  

Are the below the expectations from automating this use case?
  • To have a fragile test
  • To have high code maintenance for this use case
  • To do heavy rework in the automation when the UI of the web changes
  • To complicate the deterministic attribute of this use case's automation
If these are not the expectations, then picking an approach with lower cost and maintenance is a need.

The challenges here are:
  • It is an Automation Strategy and Approach challenge
  • It is a sampling challenge
    • Yes, automation at its best is also sampling, not just testing
  • It is about having better data, state, and response which helps to have accuracy in the deterministic attributes of automation
    • To know if it is a:
      • true positive
      • false positive
      • true negative
      • false negative
      • an error
      • not processable
  • The layer where we want to automate
    • The layers which we want to use together in automation, and how much
  • Automate to what extent to have the information and the confidence that if this sampling works, then most data should work in this context of the system?
  • The availability of test data that helps me to evaluate faster and confidently

Whatever the system has under the hood -- GraphQL, gRPC, REST API, or any other technology stack of services -- one has to work out how to make a request, go through the response, and analyze it in context.  Just as testing depends on context, automation too depends on context.  In fact, context drives testing and automation better when it is included.



My Approach to Automate this Use Case


I will not automate the functional flow of this use case entirely on the web UI.  My thought is to have those tests which are more reliable, with results that influence and drive the decision.

This thought has nothing to do with the Test Automation Pyramid and its advocacy, that is, to have a minimal number of tests at the UI layer and many more at the integration (or service) layer.  I'm looking for what works best in the context, and for where to have the tests that give me the information and feedback so I have the confidence to decide and act.

To start, I identify the below functional tests for the said use case:
  1. Does the endpoint exist and serve?
  2. Assuming it is HTTP, what HTTP methods does this endpoint serve?
  3. What does the endpoint serve when it has no data to return?
    • The different HTTP status codes this endpoint is programmed to return, and those it is not programmed for but still returns
  4. What inputs (data, state, and event) does this endpoint need to return the data?
  5. In what format, and how, is the input sent in the request?
  6. In what format will the response be returned from the endpoint?
  7. Is the response sorted and filtered by the endpoint?
  8. How does the response look when there is no data available for any key?
  9. What if certain keys and their values are not available in the response?  How does that impact the client when it displays the data in a table?
    • For example,
      • No filter data is returned, or it is invalid for the consumer to process
      • No sorted data is returned, or it is invalid for the consumer to process
      • No pagination data is returned, or it is invalid for the consumer to process
      • A contract mismatch between the provider and consumer for the returned data
        • And what the web UI then shows in the table
      • Any locale or environment-specific data format and its conversion when the client consumes the data that is returned by the endpoint
      • The data sorted by the consumer differs from the data sorted by the provider
      • The endpoint sorts the data on a state that might change at any time while the consumer is consuming it
      • Is it a one-time response or lazy loading?
        • If it is a lazy response, does the response have a key that tells the number of pages?
      • and more cases as we explore ...
  10. and more tests as we explore ...
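A few of the checks above -- an empty result set (test 3), no data for a key (test 8), and missing keys in a row (test 9) -- lend themselves to one small validator.  This is a sketch under the assumption of a JSON payload with a `rows` array; the key names are hypothetical:

```python
def validate_table_payload(payload: dict, row_keys: set) -> list:
    """Collect the problems a client would hit when rendering the table:
    no data at all, an empty result set, or rows with missing keys."""
    problems = []
    rows = payload.get("rows")
    if rows is None:
        problems.append("no 'rows' key in the response")
    elif not rows:
        problems.append("empty result set -- check how the UI renders it")
    else:
        for i, row in enumerate(rows):
            missing = row_keys - set(row)
            if missing:
                problems.append(f"row {i} is missing keys: {sorted(missing)}")
    return problems

print(validate_table_payload({"rows": [{"id": 1}]}, {"id", "name"}))
# ["row 0 is missing keys: ['name']"]
```

Returning a list of problems, rather than failing on the first one, gives the full picture of what the consumer would face in a single run.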

Should we automate all of these tests?  Maybe not, depending on business needs.  Imagine the complexity it carries if all these tests were automated at the UI level.  But a few cases do need to be automated at the UI level.

Then, should we look at the table rows on every page in this automation?  No!  But we can sample, and thereby try to evaluate with as little data as possible.  This highlights the importance and usefulness of test data preparation and availability.  While preparing the test data is a skill, using minimal test data to sample is also a skill.
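One way to sample with minimal data is to pick the first, last, and a few evenly spaced middle rows instead of reading every page.  The helper names below are hypothetical, and a passing sample is evidence that the sort holds, not proof:

```python
def sample_rows(rows: list, k: int = 5) -> list:
    """Order-preserving minimal sample: first, last, and evenly spaced
    middle rows -- enough to betray a broken sort without reading it all."""
    if len(rows) <= k:
        return list(rows)
    step = (len(rows) - 1) / (k - 1)
    return [rows[round(i * step)] for i in range(k)]

def sample_looks_sorted(rows: list, key: str) -> bool:
    """Check ascending order on `key` over a 5-row sample only."""
    sample = sample_rows(rows, 5)
    return all(sample[i][key] <= sample[i + 1][key]
               for i in range(len(sample) - 1))

rows = [{"id": i} for i in range(100)]
print(sample_looks_sorted(rows, "id"))        # True
print(sample_looks_sorted(rows[::-1], "id"))  # False
```

Five well-chosen rows out of a hundred are usually enough to catch a sort that broke wholesale, which is exactly the trade sampling makes.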


API Layer Test


I have a straightforward case to start with.  That is, to evaluate:
  1. The key (table header) and its value are returned as expected
    • Is it filtered? 
    • If yes, is it filtered on the key I want?
    • Is it sorted upon filtering?
    • There is no null or missing value for a key that must have a value in any case
    • The data count (usually the JSON array object), that is the number of rows
    • The page index and current offset value
    • The number of result pages returned by the endpoint
  2. Can I accomplish this with an API test? 
    • Yes, I can and it will be efficient for the given context
  3. I will have five to ten test records, which will help me know if the data is sorted and filtered
  4. Another test will be to receive more than 10 rows and see how the data looks when filtered and sorted
    • Especially in case of lazy loading
    • I will try to evaluate the filtering and sorting with minimal data
    • I will have my test data available for the same in the system
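The API-layer evaluation above can be expressed as one check over a sample payload.  This is a sketch: the field names (`rows`, `count`, `page`, `total_pages`) and the seeded data are assumptions, not the real contract:

```python
def check_api_payload(payload: dict, filter_key: str, filter_value,
                      sort_key: str) -> None:
    """API-layer evaluation of the table endpoint with seeded sample data."""
    rows = payload["rows"]
    assert rows, "seeded test data should return at least one row"
    # filtered on the key we asked for
    assert all(r[filter_key] == filter_value for r in rows), "filter leaked rows"
    # sorted upon filtering, with no null where a value is mandatory
    values = [r[sort_key] for r in rows]
    assert all(v is not None for v in values), "null in a mandatory column"
    assert values == sorted(values), "rows are not sorted on the sort key"
    # row count and pagination keys are consistent
    assert payload["count"] == len(rows), "count disagrees with rows returned"
    assert 1 <= payload["page"] <= payload["total_pages"], "pagination keys off"

payload = {
    "rows": [{"city": "Mysuru", "name": "Ajay"},
             {"city": "Mysuru", "name": "Ravi"}],
    "count": 2, "page": 1, "total_pages": 1,
}
check_api_payload(payload, "city", "Mysuru", "name")  # passes silently
```

With five to ten seeded records, this one function covers the filter, sort, null, count, and pagination points in a single, fast, UI-free test.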

UPDATE: I missed this point, so I'm adding it as an update.  I'm exploring the Selenium 4 feature that lets me use the dev tools and monitor the network.  If it lets me accomplish what I need and stays simple in this context, it will help.


UI Layer Test


I have a straight case here as well to evaluate in the given context:
  1. I assume the provider and consumer abide by the contract
    • If not then this is not an automation problem
    • It is a culture and practice problem to address and fix
  2. I assume the returned data is sorted on the filter; the web UI just consumes it for display
    • If not, I will understand why the client is doing heavy work to filter and sort
      • What makes it to be this way?
    • You see, this is not an automation problem; it is a design challenge that can become a problem for the product, not just for automation
  3. Asserting the data in the web UI table:
    • I will keep minimal data on the UI to assert against, that is, not more than 4 or 5 rows
    • These rows should have data that tells me the displayed order is sorted and filtered
      • Let's call the above 1 and 2 as one test
    • To evaluate pagination, that is, the number of result pages, I will use the API response and assert the same on the web UI
      • Let's call the above another test that is the second test
      • Again, the test data will be the key here
    • To see if the pagination is interactive and navigable on the UI, I perform an action to navigate to page number 'n'
      • If it is lazy loading, I will have to think about how to test table refresh
        • Mostly, I will not assert on the data
          • In testing the endpoint, 
            • I would already have validated the results returned and their length
        • I will assert on the number of rows in the table instead
      • Let's call it a third test
  4. I will not do data validations and their heavy assertions on the web UI unless I have no other way
    • Even then, this is not a good approach to pick
    • One test will evaluate just one aspect; I do not club multiple tests into one test

Note: The purpose of the test is not to check whether the web UI loads the same rows on all pages.  If that is the purpose, it will be another test, and I will still try to keep the assertions on the web UI minimal.
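The UI-layer checks above reduce to a small cross-check of the rendered table against the endpoint response.  In practice `ui_rows` would come from whatever driver is in use (Selenium, Playwright); here it is stubbed, and the `id` key is a hypothetical stand-in for a stable cell value:

```python
def check_ui_against_api(api_payload: dict, ui_rows: list,
                         max_rows: int = 5) -> None:
    """Minimal UI-layer assertions: the row count matches the endpoint,
    and the few seeded rows appear in the same (sorted, filtered) order --
    no cell-by-cell data validation on the web UI."""
    assert len(ui_rows) <= max_rows, "keep the UI sample minimal"
    assert len(ui_rows) == len(api_payload["rows"]), "row count mismatch"
    api_order = [row["id"] for row in api_payload["rows"]]
    ui_order = [row["id"] for row in ui_rows]
    assert ui_order == api_order, "display order differs from endpoint order"

api = {"rows": [{"id": 3}, {"id": 7}, {"id": 9}]}
scraped = [{"id": 3}, {"id": 7}, {"id": 9}]  # stand-in for driver-scraped rows
check_ui_against_api(api, scraped)  # passes silently
```

Because the heavy data validation already happened at the API layer, the UI test only has to confirm count and order, which keeps it fast and far less brittle.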


The Parallel Learnings


If we observe closely, the outcome of automation and its effectiveness is not just dependent on, or directly proportional to, how we write the automation.  It also depends on:
  • The design of the system (& product)
  • The environment and maintenance
  • The test data and maintenance
  • The way we sequence the tests to execute in automation
  • Where and how we automate
  • The person and team doing the automation
    • The organization's thought process and vision for testing and automation
    • The organization's expectation from testing and automation
    • What the people, the organization, and the customers understand by testing and automation, and how and why
  • Time and resources for testing and automation
  • The automation strategy and approach
  • More importantly, the system having and providing
    • Testability
    • Automatability
    • Observability


Note: This is not the only way to approach the automation of this use case.  I shared the one that fits the context best.