
Thursday, November 7, 2024

Functional Testing Is a Must in Performance and Security Testing

 

I'm sharing how I missed testing for functionality while I was immersed in testing the performance of a Stored Procedure.  I was unhappy for a couple of days, as I had missed something I have practiced for years.

I'm glad this learning is now reinforced, with much more awareness, into my testing's MVT and MVQT.


Context of Testing


A Stored Procedure was optimized for better execution time, with no change intended in its functionality.  This part of the system had not been touched, and its functionality had not changed, for a long time (years?).  The time taken by the SP was the concern, and I was asked to test the optimization.

The complicated area here was the test data.  It took me days to identify and build test data that tested this optimization by mimicking the production incidents, use cases, and data.

By the time the test data was ready, I was on the fourth day of testing this change.



Where Did I Go Blind By Being Focused?


The test data I prepared was solely for evaluating the execution time.  The same data could have helped test functionality as well, but my focus was on evaluating performance, not functionality.

The change in the SP did impact the functionality.  I should have used a large data range to test the functionality of this feature, which involves two SPs.  But the task assigned was to test just the one SP that was optimized.  I went blind here!

Are you asking what the impact of this functional problem was?
  • In one complete business workflow, this functional problem added the same data into different sets in subsequent iterations.  Redundant data -- this is not the expected behavior.  (A sketch of the check that catches this follows below.)
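
A minimal sketch of such a check, where all the names (a feature_output table, an item_key column, a test-environment DSN) are hypothetical stand-ins for the real objects, not the actual system:

    # A hedged sketch, not the real system: table, column, and DSN names
    # are hypothetical stand-ins.
    import pyodbc

    conn = pyodbc.connect("DSN=test_db")  # assumed test-environment DSN
    cursor = conn.cursor()

    # After running the complete business workflow for a few iterations,
    # ask whether the same item has landed in more than one set.
    cursor.execute("""
        SELECT item_key, COUNT(*) AS occurrences
        FROM feature_output
        GROUP BY item_key
        HAVING COUNT(*) > 1
    """)
    redundant_rows = cursor.fetchall()
    assert not redundant_rows, f"Redundant data found: {redundant_rows}"

One query and one assert; that is all this functional check would have cost on the fourth day.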

I spoke only of performance, traces, data I/O, and execution time, because that was the pressing problem.  Why?  That was the objective given to me.

My testing mission fell short in redefining this objective.  If I had redefined it, I would have added functionality at a better scale.

If I had redefined it, I would have pulled in the other SP, which is also part of this feature's workflow, for functional testing.  These two SPs together are expected to handle the data by eliminating redundancy.

It was a simple test, but I had not included it in my testing mission that day.



Why Did I Go Blind?


The performance test blinded me to functionality, as the basic functional flow looked like it was working.  But the data count went wrong when a bigger data range was used.

See here how stupid I was in my testing!  I was testing an SP that had changed as part of its optimization for execution time, and I never brought functional testing in.  Why?  I focused on the testing objective.

I looked only into the one SP that was optimized.  I did not look at the other SP, which has to work along with it later to complete the functional flow of the feature.  Why?  How is that even possible?  I kept asking myself.  I see this is okay from the perspective of the testing objective I had, but not okay from the perspective of a test engineer who is supposed to think about the impact and prevent problems.

My immersed, concentrated focus on performance and its related activities on one SP for four long days did not let me see this.



What Am I Saying Here?


While I have tested DBs and ETL systems for years, I did not use that learning here.  What is that learning?
When there is a change in any part of a system's ETL, SPs, or DB, testing the functionality of the business workflow is equally important.  Vary the data dimensions and evaluate the counts.
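
Here is a hedged sketch of that lesson, assuming a SQL Server-style setup; the two SP names (usp_stage_data, usp_merge_data), the table, and the date ranges are hypothetical stand-ins for the real workflow:

    # Vary one data dimension (here, a date range) and evaluate the counts
    # after the complete two-SP workflow. All object names are hypothetical;
    # this sketches the practice, not the actual system.
    import pyodbc

    DATA_RANGES = [
        ("2024-01-01", "2024-01-07"),  # small range
        ("2024-01-01", "2024-03-31"),  # medium range
        ("2023-01-01", "2024-03-31"),  # large range
    ]

    conn = pyodbc.connect("DSN=test_db")
    cursor = conn.cursor()

    for start, end in DATA_RANGES:
        # Complete the business workflow: both SPs, not just the optimized one.
        cursor.execute("EXEC usp_stage_data ?, ?", start, end)
        cursor.execute("EXEC usp_merge_data ?, ?", start, end)
        cursor.execute("SELECT COUNT(*), COUNT(DISTINCT item_key) FROM feature_output")
        total, distinct = cursor.fetchone()
        # If the two SPs eliminate redundancy as expected, the counts match.
        assert total == distinct, f"{start}..{end}: {total - distinct} redundant rows"

The large range in a loop like this is exactly where my testing went blind.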

I was completely hooked on the execution time and the test data while switching between environments for four days.  The chaos in data between environments misleads easily, and I fell for it this time.

I say to myself: whether it is a fix for performance optimization or for security [or any quality criterion], testing for functionality is equally important, and as much a priority as running the tests for performance or security.

When a DB layer is picked for fixing and optimization, testing for functionality at an equal scale is a must.  There is a change in the code and/or infrastructure, and it has to be noted with additional attention.

To add to this, this time I did not go through and analyze the SP; I took this call from the test team's side.  This call cost me, and it played a major part in keeping me from thinking of functionality.

My colleague ran a test with varying data sizes by completing the business workflow, observed the problem, and informed me.  I give the credit to Sandeep.

If I had brought this performance test under automation, I would not have made this miss.  Why?  I would have evaluated and asserted the data returned for each of the different sizes.  I did not automate here, and there was no need for it in this context.
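
For what it is worth, a minimal sketch of what that automation could have looked like -- pytest assumed, with a hypothetical workflow SP and key column:

    # A hedged sketch: assert on the data returned for each size, not only
    # on execution time. SP and column names are hypothetical stand-ins.
    import pyodbc
    import pytest

    @pytest.mark.parametrize("row_limit", [100, 10_000, 1_000_000])
    def test_workflow_has_no_redundant_data(row_limit):
        conn = pyodbc.connect("DSN=test_db")
        cursor = conn.cursor()
        cursor.execute("EXEC usp_run_workflow ?", row_limit)
        keys = [row.item_key for row in cursor.fetchall()]
        # Every key should appear exactly once across the workflow's sets.
        assert len(keys) == len(set(keys)), "redundant data in returned sets"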

Redefine the testing objective you have been given; it helps when you see the model of a system and test.


Respect every fix and suspect every fix.  This helps in the longer run!



Thursday, July 21, 2022

Dealing with the Fallacies of a Fallacy

 

One of my mentees asked me to help in identifying and understanding the fallacies in Software Testing.  I did not know the context in which the help was sought.  All I got was that, on reading a book by Gerald M. Weinberg, the mentee wanted to understand the testing fallacies better and in simple terms.  By "fallacy", I understand a misconception resulting from incorrect reasoning and a false belief.

Further, I learn that reasoning and belief are also heuristics. Can a heuristic be a fallacy? I see that it can.


The Reality of the Fallacy is a Fallacy

I will keep this blog post layered and oriented along technical lines so that it becomes easy for anyone in tech to understand my thoughts.  As I write this, I am hit by a question -- "Is a fallacy a fallacy?"  With that, I'm left with a successive question -- is that not a logical question to ask?

When I say logical, I understand logic as one aspect of rational, scientific, and systematic analysis.  The analysis has limitations, knowns, and unknowns.  Further, this is all covered over by a meta context, which includes the uncertainty -- what we are aware of and not aware of -- in our analysis.

As I write this, I see the term "meta context" in my mind.  I don't know if someone has used it earlier; I presume someone must have, when talking about engineering and systematic rational analysis.

When we work on an engineering problem, we work with a context.  In that context, we learn 

  • the problem, 
  • need (requirement), 
  • assumptions we make, 
  • what we know, 
  • what we do not know, 
  • potential solutions, 
  • approaches, 
  • execution, and more

The engineer in me says there is a meta context for every context.  Doing engineering to the meta context is over-engineering, as I understand it.

Engineering to a context, by solving the risks and problems identified in that context, is what we are all doing today.  This is my observation!  An example is the software system we keep building and consistently developing so that it stays updated for the need.  The software system we are building, testing, and deploying is bound to a context, not to the meta context.

In Software Engineering, we work on a context, and that itself is huge engineering.  Eventually we start seeing the context in which we work as a meta context, while it is actually not.  This is one of the fallacies we encounter and most times do not identify.  You see?  Then how do we think about the meta context, which comprises the infinite contexts from which we have picked one to engineer and solve?

Once we try and continue to be aware of the meta context and what it holds, we start to learn that everything is a fallacy, including the fallacy.  That's enough philosophy from me.  But that's the reality, and the fallacy, as well.

That said, thinking is a fallacy.  We know that exhaustive testing is not possible.  Likewise, exhaustive thinking is not possible.  When one's thinking is not exhaustive and is bounded, doesn't a fallacy exist there?

One's scientific and logical thinking is modeled and sampled over a few models, spaces, and dimensions.  The decisions from this thinking, practice, and testing will have limitations and fallacies, noticed and unnoticed.

If an organism can think, that organism will come under the influence of a fallacy.  And the organism can learn to identify fallacies, if at all it understands: I can be fooled no matter what.  That is one of the byproducts of testing -- knowing a few of the possible ways one can get fooled.  We have no leisure and luxury to find "all the ways"; this bound brings fallacies into one's beliefs, thinking, work, and decisions.  So I say we work in a context which is pulled out of a meta context.

I see this as the stem of fallacy; fallacies get wired into our thought process and into the engineering we do. Our systematic and scientific interpretation accepts the fallacy as logical and systematic, and claims the problem we're solving is solvable.  Note that when I say solvable here, I mean we can deal with it for the costs and the value we get out of it.  By doing so, we handle and manage the fallacy to yield the value.


What Did I Read Just Now?

Well, what you read above are my engineering-philosophical thoughts.  Now, let me pull them into Software Engineering and Software Test Engineering.

A software system, a hardware system, or any system we have built is an assumption.  We assume it works because we work to make it work.  And we sense that it works because we adhere to the protocols which define these assumptions.

That tells me anything and everything built, or being built, is an assumption and has protocols. If anything works, it works on assumptions.  If anything fails to work, that too is on our assumptions.  From this I infer that rational and systematic analysis is an investigated and experimented-upon assumption.

These protocols and assumptions can blind us to fallacies and limit us from identifying them.  On witnessing an incident, the fallacy or its outcomes may get uncovered a bit.  That is what we do in RCA -- Root Cause Analysis.  We do the RCA so that we learn the fallacy in which we got trapped.

Even after the RCA of an incident, we will experience a similar or the same problem again.  Why?  We think that once we do the RCA and practice, we will not repeat the mistake -- this is a fallacy too.  We make a new mistake, which leads to another RCA.  Does that mean the RCA of an incident says not to fall for the same fallacy again, but it is okay to fall for another?


Managing Self with Fallacy

I too fail in identifying fallacies.  I continue to prompt my thinking and analysis to see the obvious traps while I test and deliver the testing.  I do not identify all the fallacies in a context; I work to find the fallacies that bring the most cost, technically, to testing delivery and system development.

Here are a few questions that I ask myself each time in my test session and analysis:

  1. What are the five contexts where this is a problem or risk?
  2. What are the five contexts where this is not a problem or risk?
  3. What are the five ways in which this looks to work as expected?
  4. What are the five ways in which this does not work as expected?
  5. What are the five contexts that matter most about this system, which I have missed knowing?
  6. In what contexts is this bug not a bug anymore? Why?
  7. In what contexts will this be a bug/problem/risk/cost? Why?
  8. What influencing factors and practices were considered in making this decision? In what contexts do these factors and practices displace the value with cost?
  9. What assumptions and beliefs are driving my testing?  Whose assumptions and beliefs are they?
  10. Do I know that I can be fooled?
  11. Do I see any problem here?
  12. Do I see any value here?
  13. Do I see any cost here?
  14. What more can I see here?

Understanding and learning how my team and stakeholders attach importance to the same information helps me. It potentially hints to me whether they are under the influence of any fallacies.  I learn that the context team members and stakeholders are in also influences the importance they attach to the same information.  Sometimes the team and stakeholders use the same word, but I notice they mean different things by it.

This has led me to learn that it is not about being precise at first; it is about having the ability to communicate and help each other gain clarity on what is expected.  And understanding how to achieve this clarity, considering the thought processes and beliefs each stakeholder holds, is a must.

To sum up, we cannot keep ourselves away from fallacy.  What is not a fallacy at present can and will become one in time.  The goal is how we manage to identify and deal with the fallacy that is influencing us and our work.

There is no escape from the fallacies!


Note: The count of words "fallacies" and "fallacy" in this blog post is 47.



Sunday, April 11, 2021

Testing and Report: One Hour of Testing; I Failed, Not The Tests

 

This incident crosses my mind often.  Each time it does, I have told myself -- "How bad my testing was then! Not again."  Today, I'm better.

I was the only tester on the floor that evening at Moolya.  It was around 8:10 PM and I was practicing after office hours.  Pradeep Soundararajan walked in along with Sunil Kumar T and asked: "Can you test this website and share a test report in an hour?"  I collected the details I needed along with the context.  I was surprised to see them, as I had not heard any voices for minutes while I was practicing.

I was supposed to test a website that had one page.  I did not see any dynamic content on the page; it was all static.  In an hour, what could I test for the context shared?  I listed my thoughts and ideas, did my tests, and emailed the report to the customer; I remember copying Pradeep and Sunil on it.

The next day, I asked how the report was and whether it was good enough for the context.  I was told, "It serves the context and purpose. Good!"

But today, when this work crosses my mind, I feel:

  • How bad my test coverage was in that testing!
  • How blind I was in my work that day, though I did think through the work!
  • How shallow I was in the tests!
  • I must not repeat it.
  • I failed during that one hour.

This failure has added value to me, and it continues to add. The one-hour testing and the test report tell me that I failed, but not the tests.

If I were to test the same web page today, I would not be the same tester; I have progressed in my practice.  Yet this incident reminds me of what I should not be doing.  Moolya has helped me in this journey.  I thank Sunil and Pradeep for picking me to test that one-page website.  If they had not picked me for this task, I would have missed a learning that wakes up the tester in me.