
Thursday, November 7, 2024

Functional Testing Is a Must in Performance and Security Testing

 

I'm sharing how I missed testing for functionality while I was immersed in testing the performance of a Stored Procedure.  I was unhappy for a couple of days, as I had missed something I have practiced for years.

I'm glad this experience reinforced that learning, bringing much more awareness into my testing's MVT and MVQT now.


Context of Testing


A Stored Procedure was optimized for better execution time, with no change in the functionality.  This part of the system had not been touched for a long time (years?).  The time taken by the SP was the concern, and I was asked to test the optimization.

The complicated area here was the test data to use.  It took me days to identify and build the test data to test this optimization by mimicking the production incidents, use cases, and data.

By the time the test data was ready, I was on the fourth day of testing this change.



Where Did I Go Blind By Being Focused?


The test data I prepared was solely for evaluating the execution time.  This test data could have helped to test functionality as well.  But my focus was on evaluating performance, not functionality, with this test data.

The change in the SP did impact the functionality.  I was supposed to use a large data range to test the functionality of this feature, which involves two SPs.  But the task assigned was to test just the one SP that was optimized.  I went blind here!

Are you asking what the impact of this functional problem was?
  • In one complete business workflow, this functional problem added the same data into different sets in the subsequent iterations.  Redundant data -- this is not the expected behavior.

I spoke only of performance, traces, data I/O, and execution time, because that was the pressing problem.  Why?  That was the objective given to me.

My testing mission fell short in redefining this objective.  If I had redefined it, I would have added functional testing at a better scale.

If I had redefined it, I would have pulled the other SP, which is also part of this feature's workflow, into functional testing.  These two SPs are expected to handle the data by eliminating redundancy.

It was a simple test, but I did not include it in my testing mission that day.



Why Did I Go Blind?


The performance test blinded me to functionality, as the basic functional flow looked like it was working.  But the data count went wrong when a bigger data range was used in the context.

See here how stupid I was in my testing!  I was testing an SP that was changed as part of its optimization for execution time, yet I never brought functional testing in.  Why?  I focused on the testing objective.

I looked only into the one SP that was optimized.  I did not look at the other SP, which has to work along with it later to complete the functional flow of the feature.  Why?  How is that even possible?  I was asking myself this.  It is okay from the perspective of the testing objective I had, but not okay from the perspective of a test engineer who is supposed to think about the impact and prevent problems.

My immersed, concentrated focus on performance and its related activities on one SP, for four long days, did not let me see this.



What Am I Saying Here?


While I have tested DBs and ETL systems for years, I did not use my learning here.  What is that learning?
When there is a change in any part of a system's ETL, SP, or DB, testing the functionality of the business workflow is equally important.  Vary the data dimensions and evaluate the counts.

I was completely hooked on the execution time and the test data while switching between environments for four days.  The chaos in data between environments misleads easily.  I fell for it this time.

I say to myself: whether it is a fix for performance optimization or for security [or any quality criterion], testing for functionality is equally important and as much a priority as running the tests for performance or security.

When a DB layer is picked for fixing and optimization, testing for functionality at an equal scale is a must.  There is a change in the code and/or infrastructure, and it has to be noted with additional attention.

To add to this, this time I did not go through and analyze the SP.  I took this call from the test team.  This call of mine cost me, and it played a major part in keeping me from thinking of functionality.

My colleague ran a test with varying data sizes by completing the business workflow, observed the problem, and informed me.  I give the credit to Sandeep.

If I had brought this performance test under automation, I would not have made this mistake.  Why?  I would have evaluated and asserted the data returned for each of the different sizes.  I did not automate here, and there was no need for it in this context.
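To illustrate, here is a minimal sketch of what such an automated check could look like, assuming pytest and a hypothetical run_workflow() helper that seeds the data, executes both SPs of the business workflow, and returns the resulting rows:

```python
import pytest

def run_workflow(record_count: int) -> list:
    """Hypothetical helper: seed `record_count` records, run both SPs
    of the complete business workflow, and return the resulting rows."""
    raise NotImplementedError("wire this to your DB and workflow")

# Vary the data dimensions, as the learning above says
@pytest.mark.parametrize("size", [10, 1_000, 100_000])
def test_workflow_produces_no_redundant_data(size):
    rows = run_workflow(record_count=size)
    # The two SPs together are expected to eliminate redundancy,
    # so every row in the result must be unique...
    assert len(rows) == len(set(rows)), f"redundant rows at size {size}"
    # ...and the count must match the input: nothing duplicated, nothing lost.
    assert len(rows) == size
```

Even a count assertion this small, run at more than one data size, would have surfaced the redundant-data problem on day one.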

Redefine the testing objective that you have been given; it helps when you see the model of a system and test.


Respect every fix and suspect every fix.  This helps in the longer run!



Tuesday, November 28, 2023

Behind Every Test Data, There Is a ?!

 

Read this blog post to get a perspective on Test Data and Test Data Management.  The point is, if I'm not aware of a test and what it tells me to explore, I cannot think of its Test Data.

That said, if I know what I should be evaluating as part of performance -- and why, when, and how -- this helps me come up with the thoughts for identifying the tests and their test data.

The ninth question from season two of 100 Days of Skilled Testing is:

What role does data management play in performance testing, and how do you ensure the availability of suitable test data sets?


Testing and "Ensure"

We test and have tests in testing because there is no "sure" or "ensure" in software.  But we presume, on a rational basis, "if these are this," in a given context when the software processes.

Now, ask yourself, how can we ensure the availability of suitable test data sets?

In my opinion, Test Data is often misunderstood.  This is the primary problem, and it should be the first one named when asked, "What are the challenges in creating test data?".

When you read the concluding lines of this blog post, you will learn why I say this.


Test Data and Immunity

In my opinion and experience practicing Test Engineering, I see the Test Data as a viral strain, and it should have its variants.  When this test data is used to test [experiment, investigate, and debug], how do the software and its ecosystem respond?

  • Are the software and its ecosystem immune to this test data?
    • Does it exhibit any risks and problems?
      • If yes, is the purpose of my testing and automation accomplished with this test data?
This puts me back to the question: what is the purpose [intent] of my test?  It drives me to derive the test data that helps me know -- what am I supposed to learn, and on priority?  With this, I get an idea of what kind of test data I should be creating, knowing its pattern.

If the system is immune to the Test Data and it is not revealing anything new in the context, I classify this pattern of test data as "Immune" to the context.

In my practice and research work in Test Engineering and Software Testing, to start, I categorize Test Data into two areas.
  1. Immune
  2. Not Immune
Further, I have categories under these two, where I classify the Test Data deterministically for the context.  Get in touch if you want to learn more about this.  I'm just one ping away!
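As an illustration only -- the real classification has more categories under these two, as noted above -- a bare-bones sketch of the Immune / Not Immune idea might look like this in Python:

```python
from dataclasses import dataclass, field

@dataclass
class TestDataRun:
    """One execution of a test with a given test data pattern."""
    pattern: str                                  # e.g., "large data range"
    new_observations: list = field(default_factory=list)

def classify(run: TestDataRun) -> str:
    # If the system reveals nothing new for this data pattern in this
    # context, the pattern is "Immune" to the context; otherwise it is not.
    return "Immune" if not run.new_observations else "Not Immune"

# Example: a large data range that exposed a redundant-data problem
run = TestDataRun("large data range", ["duplicate rows across iterations"])
assert classify(run) == "Not Immune"
```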

The tests should help me evaluate for immunity and also non-immunity; both are essential and necessary.

The credit for this classification of Test Data is mine.  It is my research work, born out of my practice.

Note that Test Data is not just the input [characters or files] entered into or given to a system.  Test Data is associated with tech stacks, infrastructure, ecosystems, business workflows, and people.  To craft such Test Data, one has to understand the system and its internals, and the problem it solves, by knowing how it solves it.



Performance Testing and Test Data

  1. What is it that I'm testing as part of performance?
  2. What do I want to evaluate in the name of performance?
  3. What part of the system is evaluated for its performance?
    • Should I evaluate this in isolation, or as part of the whole system?
  4. What domain knowledge and information should I have when testing for performance?
  5. What of the system's architecture and internal details should I understand and be aware of to test for performance?
  6. Is this the first delivery?  Or, do we have this system running in production?
    • If it is the first delivery,
      • How will I create the test data to suit the consumers of this application?
      • What are the key business workflows that we should be evaluating for their performance?
      • Do all workflows and sub-systems need evaluation for performance, and on priority?
      • How do I map the fragmentation of users and their data [with its patterns]?
      • What are the infrastructure and ecosystem characteristics that should be part of the test data identified?
      • Does caching have any effect if the same pattern of data is used?
    • If it is a running version in production
      • Can I refer to the DB to figure out the pattern for the particular workflow that I'm evaluating?
      • How can I match the test data to have the production data's characteristics and attributes?
  7. What is the backup strategy for the Test Data?
    • How do I version control the Test Data?
    • Which version of the Test Data should I be using?
  8. What is the threshold I'm targeting with Test Data?
    • What should be the size of the data in DB when I make the IO and RW operations?
    • What should be the network capability when I make the IO and RW operations?
    • What should be the hardware capability when I make the IO and RW operations?
    • What should be the geographical traffic and its pattern when I make the IO and RW operations?
    • More of such factors will be considered when identifying and deriving the test data.
  9. What is the client-error-yielding Test Data that I should have for the workflow?
  10. What is the server-error-yielding Test Data that I should have for the workflow?
  11. What is the redirection-yielding Test Data that I should have for the workflow?
  12. What is the no-response and no-change Test Data that I should have for the workflow?
And more.  It is simple; get in touch to discuss and go beyond what is listed.  For a taste, see the sketch below.
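Here is a hedged sketch of how some of these questions (6 to 8) could take shape: a seeded, versioned test data builder that scales to different thresholds and loosely mimics a production-like distribution.  The 80/20 split and the field names are assumptions for illustration:

```python
import random

TEST_DATA_VERSION = "v2.1"   # kept under version control with the tests
SIZES = {"baseline": 1_000, "threshold": 100_000, "beyond": 1_000_000}

def build_orders(size: int, seed: int = 42) -> list:
    """Generate order rows whose shape loosely mimics production:
    a few heavy customers produce most of the traffic (assumed 80/20)."""
    rng = random.Random(seed)                 # seeded: reproducible runs
    heavy = [f"cust-{i}" for i in range(20)]
    light = [f"cust-{i}" for i in range(20, 1_000)]
    rows = []
    for n in range(size):
        customer = rng.choice(heavy) if rng.random() < 0.8 else rng.choice(light)
        rows.append({"order_id": n, "customer": customer,
                     "amount": round(rng.uniform(1, 500), 2)})
    return rows

# One data set per threshold to evaluate for the IO and RW operations
data_sets = {name: build_orders(size) for name, size in SIZES.items()}
```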



To conclude and stop here: all these questions do not ensure or assure or make sure that I will have test data for evaluating a characteristic of performance.
  • They help me to know:
    • What are the tests I should be doing?
    • What kind of preparation should I have in my practice to create the Test Data for these tests?

The Test Data should challenge the available Testability and its limits.  If it is not doing so, then we have test data, no doubt about it; but it is shallow.  Shallow!?

One has to ask oneself, "Is this sufficient and effective Test Data for the system [and workflow] I'm testing?"

The Test Data should drive the engineering team to add more layers of Testability into the system.




Sunday, November 19, 2023

Waterfall or Agile: Testing for Performance - Where to Start?

 

Do you understand Agile?  I have shared my understanding here; give it a read.

The eighth question from season two of 100 Days of Skilled Testing is:

Can you share some best practices for conducting performance tests within an Agile development environment?


Best Practices and the Agile


The irony is, Agile says there is no best practice.  It asks us to tailor and fit the practice to the context so that continuous delivery and value are delivered consistently, upholding Agile's principles.

Yet we talk about best practices in Agile's context, like the eighth question asked here.

What is the effective way to test in the continuous delivery?

As a test engineer, how can I start thinking about and testing for performance from the inception of a feature's thought?  I see it is not hard to do so.  As you read further in this post, you will gain a perspective and the awareness to do it.


Performance in Waterfall and Agile

I have learned that performance is an experience.  It does not differ because of Waterfall or Agile.  If the performance is not a pleasing experience, it will impact stakeholders, no matter whether it is Waterfall or Agile.

But the question when evaluating performance is -- where to start, when to start, how to start, and with what to start?

As of today, I do not see differences in the mindset and skills one has to have for testing performance in Waterfall and Agile.  The approach could differ in certain phases; otherwise, I see the same in both practices.

I will rephrase the eighth question to this:
What is your practice to evaluate performance right from the start of product development in your project?
I do not want to wait to hear -- the development is completed and deployed; now we can start running the performance tests.

What can I do as part of performance tests from the first day of development and the first commit?  This is my intent and the area I look at when strategizing the testing and tests.



The Culture of Engineering

At the start and end of the day, when we developers start and finish the work,

  • How the work is done, and why, is defined by the engineering culture practiced by that organization.
    • The Performance Engineering of the software products and solutions being built will be driven by the culture practiced.

The Test Engineering and how we test and automate will be driven by the culture of engineering practiced in the organization.

Writing code not just for building the functionality but also for performance is a culture-driven factor.  The organization's engineering culture drives it!



Testing for Performance - Where to Start?


I'm sharing the research work that I'm doing and experimenting with on performance engineering and performance tests.  I'm seeing the results and value out of it, and so are the stakeholders.

Today, we are getting skilled at exploring and testing without a requirements document and SLAs in hand.  Aren't we?

I use my MVPT to figure out the minimum performance tests for the feature.  As part of this, I explore with the help of available aids to evaluate the performance.

To start, I use these questions to figure out the performance tests:
  • What are the minimum viable questioning performance tests that you have got to test this feature?
  • What are the minimum viable questioning performance tests that you have got to test this workflow?


Unit Tests for Time and Space Complexity


I work closely with programmers to gather the information below when the code for the feature is committed, as part of Unit Tests.
  • The execution time taken by the code of that feature - the Big O notation for space and time complexity
    • Usually, Unit Tests focus on functional tests and clean-code practice
    • But when we, the test team, ask and push for performance data, this can come as part of Unit Tests
      • An architect or a principal engineer can set an expectation on
        • What should be the time and space complexity of the code for a feature?
          • Each function and block needs to be evaluated on this
          • As said earlier, this depends on the engineering practice culture of an organization
            • If the culture wants it, it will be there; else, just the functional code will be delivered and not the performant code
      • If the time and space complexity analysis outcome is not as expected, the code written has to be rethought and refactored
        • The review process needs to push it back
        • The comments, with data, have to be published
          • This will be useful for modeling the performance tests by the test engineers who will be working on them
      • Doesn't it look like an effective, useful practice as part of Performance Engineering right in the early stage?
        • This is very well applicable to projects running on Agile or Waterfall

Do you have this in your project and in your Unit Tests?

The time and space complexity questions should not be confined just to SDET [test engineer] interviews.  A test engineer has to ask for this and apply it in her or his day-to-day work.
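As one hedged example of what such a Unit Test could look like -- not a prescription, and timing-based checks need generous slack to avoid flakiness -- a test can measure execution time across growing input sizes and flag super-linear growth:

```python
import time

def measure(func, sizes):
    """Time `func` over increasing input sizes; return the timings."""
    timings = []
    for n in sizes:
        data = list(range(n, 0, -1))          # reversed input as a harder case
        start = time.perf_counter()
        func(data)
        timings.append(time.perf_counter() - start)
    return timings

def test_feature_scales_near_linearly():
    process = sorted   # stand-in; replace with the feature's actual function
    t = measure(process, [10_000, 100_000, 1_000_000])
    # For a 10x input growth, an O(n) or O(n log n) routine should grow
    # roughly 10-15x; the 25x bound leaves slack for noise and constants.
    assert t[2] / t[1] < 25, "execution time grew super-linearly"
```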


Profiling Tests by Test Engineers


We testers often do not get into the product's code analysis.  We have to build the skill to run profiling on the product's code and analyze the resource data.
  • Test Engineers can test the feature's code with the help of the IDE's profiling (runtime analysis) and collect performance data, identifying the performance bottlenecks
    • This runtime analysis can profile for
      • Memory snapshots
      • Thread analysis
      • Monitoring resources
      • CPU and allocation profiling
      • And, more
      • The problems and risks can be reported upon analysis
    • Compare the performance data of two different solution approaches
This information will indicate where the risks and problems are when we deploy the code.  In my opinion, this is useful information for modeling further performance tests.  It is first-hand information, which is very powerful before we start using any other performance testing strategies and tools to aid the tests.
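A small sketch of this practice with Python's built-in cProfile, comparing two hypothetical solution approaches to the same feature (the string concatenation in approach_b is a planted, likely hotspot):

```python
import cProfile
import io
import pstats

def profile(func, *args) -> str:
    """Run `func` under cProfile and return its top hotspots as text."""
    profiler = cProfile.Profile()
    profiler.enable()
    func(*args)
    profiler.disable()
    out = io.StringIO()
    pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
    return out.getvalue()

def approach_a(n):
    return "".join(str(i) for i in range(n))

def approach_b(n):
    result = ""
    for i in range(n):
        result += str(i)      # repeated concatenation: the planted hotspot
    return result

# Compare the two approaches' performance data before choosing one
print(profile(approach_a, 200_000))
print(profile(approach_b, 200_000))
```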



Get Started with Performance Engineering and Tests


These are available in the IDE.  We think of performance testing tools and ask how to test for performance.  To be precise, we test developers (test engineers) should change our minds and make this shift first.  If not, as I say, we will be the first bottleneck to ourselves.  Did you know this way of testing for performance?  Why not introduce it in your project and organization?

Seen this way, these test practices can be used right from the day we commit the feature's code.  This is a place to start for the performance tests.  This will be a differentiator together with MVPT, and it guides the MVPT to design effective performance tests in the context.

I do not say these are best practices; there are no best practices.  But this is a useful practice when the organization and stakeholders ask for it.  Let your organization and stakeholders know how well you can test for performance right from the first commit of the product's code.

To stop and end here:
  1. Do not just test for functionality from day one; also test for performance from day one.
  2. Influence your organization's engineering culture and developers to write not just functional code, but also performant code.




MVQT: The Testing and Tests with an MVP's Perspective


I was leading multiple teams and their deliveries in a testing services company.  Then I came up with this thought -- like the MVP, I also have the MVT (Minimum Viable Tests) for an MVP.

Further, I expanded this thought in my day-to-day practice, tailoring it to different contexts.  I'm observing that it applies well to different contexts when I tailor it to them.  After experimenting with it for 10 years, I'm sharing it as a blog post.


What is an MVP?

I take this from Eric Ries.  It looks simple and precise to me.

The Minimum Viable Product is that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort.

I see this technique [and concept] can be applied to anything I'm developing.  As a test engineer, I mostly develop tests and test code as part of my testing.  On applying the idea of the MVP to my testing and deliveries, I see the value and results.

Read this blog post of mine to know who a developer is.


Testing, Tests, MVP and MVQT

In software test engineering, I see the MVP idea as Minimum Viable Questioning Tests.


The Minimum Viable 'Q' Tests (MVQT) for a focused area of a feature [or for a feature]:

  • Help me identify the priority tests that should be executed first
  • Allow me to learn, on priority, the information which matters critically to the product and stakeholders
    • So that an informed decision can be made.


The Q in MVQT stands for "questioning".  I read it as Minimum Viable Questioning Tests.  I also see the "Q" as a placeholder for a Quality Criterion.  That is, MVFT means Minimum Viable Questioning Functional Tests for a feature or a workflow.




The MVQT are key to know:

  • Have I identified and designed the priority tests?  How do I know that I have got them?
  • Did stakeholders get the information which they wanted to know on priority?
  • Did MVQT help me to
    • Explore and know what I wanted to know about a feature or a workflow?
      • How fast was I to know and learn this?
      • How did I develop my tests incrementally?  Did I?  If not, then, is it an MVQT?
  • Did MVQT help me to know
    • Am I aligned and in sync with expectations of my stakeholders and customers who are using the software product I'm testing and automating?
  • Did the MVQT help me 
    • In collecting the critical information in a given context for the scope of testing and automation?
    • Do the learning and outcome from this MVQT help to reinforce the validated learning of the customer?
  • Do the MVQT results support the outcome of the Unit Testing results?

The tests in MVQT have to be consistently revised and evaluated to keep them as MVQT.  Note this: not all tests are MVQT.  If the number of MVQT for a part of a feature, or for a feature, keeps growing, it is time to rethink what MVQT means for you.

The "minimum" tests are highly effective and it helps me learn and test better technically and socially.



MVQT and Testing

  • Sanity or Smoke Tests
    • The set of MVQT which helps me learn whether the build can be taken for further testing
  • MVFT - Minimum Viable Questioning Functional Tests
    • Apply this to a feature or a workflow, or to that part which can be evaluated with minimum tests for its functionality
      • To learn whether this is aligning with the validated learning of the customer [stakeholders]
  • MVPT - Minimum Viable Questioning Performance Tests
  • MVUT - Minimum Viable Questioning Usability Tests
  • MVAT - Minimum Viable Questioning Accessibility Tests
  • MVTxT - Minimum Viable Questioning Tester's Experience Tests
  • MVST - Minimum Viable Questioning Security Tests
  • MVAF - Minimum Viable Questioning Automation to a Feature
  • MVLT - Minimum Viable Questioning Localization Tests
  • MVUIT - Minimum Viable Questioning UI Tests

Add more of these to your list and context.

In a way, MVQT should ask and look for testability, automatability, and observability.  If this is not happening, then there is no possibility of saying I have got my MVQT.

More importantly, in the CI/CD ecosystem, MVQT plays a major role.  If my tests should be in the CI/CD pipeline, then MVQT is the way, as it focuses on a targeted area to evaluate.  Else, it is hard, impractical, and not possible to test in a CI/CD ecosystem while delivering continuously.
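A minimal sketch of how this could look with pytest markers -- the test names and the place_order() API are hypothetical -- where the MVQT are tagged so the pipeline runs only them:

```python
# test_checkout.py -- MVQT are tagged so the pipeline can select them.
# pytest.ini registers the marker:  markers = mvqt: minimum viable tests
import pytest

@pytest.mark.mvqt
def test_checkout_completes_for_a_single_item():
    # Priority question: can a customer complete the core workflow at all?
    order = place_order(items=1)        # hypothetical API under test
    assert order.status == "confirmed"

def test_checkout_with_expired_coupon_shows_message():
    # Valuable, but not minimum viable for the pipeline gate.
    ...
```

The pipeline stage then runs pytest -m mvqt, keeping the gate fast and focused while the fuller suite runs elsewhere.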


Ask and Review for MVQT

Ask for MVQT, when you review these:

  • test strategy, test framing, test design, test ideas, test cases, test plan, test architecture, test engineering, testing center of excellence, and test code

For example,

  • What are the minimum viable questioning performance tests that you have got to test this feature?
  • What are the minimum viable questioning performance tests that you have got to test this workflow?
  • What are the minimum viable questioning security tests that you have got to test this feature?
  • What are the minimum viable questioning GUI tests that you have got to test this feature?
  • What are the minimum viable questioning contract tests that you have got to test this endpoint?
Likewise: what are the minimum viable questioning automation tests that you have got to test this feature?

Ask how these tests qualify as MVQT in this context of testing and automation.

This should help you see how effective the test strategy is in a given context.

Importantly, the MVQT and its effectiveness act as a testability check on your tests themselves.



The Credit is to Me

I'm not sure if the idea I'm sharing in this blog post is practiced by other test engineers.  I have not seen it being discussed in public forums.  I have not come across it within my awareness and the exposure I have put myself to.

Hence, I will take the credit for it.  Giving credit honestly is not a common sight or practice.  I have not received my due credit when others have used the ideas, thoughts, and work that I came up with.

So, I make this an open letter and call out that the credit for this idea, thought, concept, and practice goes to me when you listen to, use, and practice it.