
Thursday, July 21, 2022

Dealing with the Fallacies of a Fallacy

 

One of my mentees asked me to help in identifying and understanding the fallacies in Software Testing.  I did not know the context in which the help was sought.  All I knew was that, on reading a book by Gerald M. Weinberg, the mentee wanted to understand the testing fallacies better and in simple terms.  By "fallacy", I understand a misconception resulting from incorrect reasoning and a false belief.

Further, I learn that reasoning and belief are also heuristics.  Can a heuristic be a fallacy?  I see that it can.


The Reality of the Fallacy is a Fallacy

I will keep this blog post layered and anchored to technical lines so that it becomes easy for anyone in tech to understand my thoughts.  As I write this, I get hit by the question -- "Is a fallacy a fallacy?".  With that, I'm left with a successive question -- is asking that not a logical question in itself?

When I say logical, I understand logic as one of the aspects of rational, scientific, and systematic analysis.  Any such analysis has limitations, knowns, and unknowns.  Further, it is covered by a meta context which includes the uncertainty -- what we are aware of and not aware of in our analysis.

As I write this, I see the phrase "meta context" in my mind.  I don't know if someone has used it earlier.  I presume someone must have used it when talking about engineering and systematic rational analysis.

When we work on an engineering problem, we work within a context.  In that context, we learn:

  • the problem, 
  • need (requirement), 
  • assumptions we make, 
  • what we know, 
  • what we do not know, 
  • potential solutions, 
  • approaches, 
  • execution, and more

The engineer in me says there is a meta context for every context.  Engineering for the meta context is over-engineering, is what I understand.

Engineering to a context, by solving the risks and problems identified in that context, is what we are all doing today.  This is my observation!  An example of this is the software system we are building and consistently developing to keep it updated for the need.  The software system we are building, testing, and deploying is bound to a context, and not to the meta context.

In Software Engineering, we work on a context, and that in itself is huge engineering.  Eventually, we start seeing the context in which we work as a meta context, while it actually is not.  This is one of the fallacies we encounter and, most times, do not identify.  You see?  Then how do we think about the meta context, which comprises the infinite contexts from which we have picked one context to engineer and solve?

Once we try, and continue, to be aware of the meta context and what it holds, we start to learn that everything is a fallacy, including the fallacy itself.  That's enough philosophy from me.  But that's the reality, and the fallacy, as well.

That said, thinking is a fallacy.  We know that exhaustive testing is not possible.  Likewise, exhaustive thinking is not possible.  When one's thinking is not exhaustive and is bounded, does a fallacy not exist there?

One's scientific and logical thinking is modeled and sampled over a few models, spaces, and dimensions.  The decisions from this thinking, practice, and testing will have limitations and fallacies, noticed and unnoticed.

If an organism can think, then that organism will come under the influence of a fallacy.  And the organism can learn to identify a fallacy only if it understands this -- I can be fooled, no matter what.  That is one of the byproducts of testing -- knowing a few of the possible ways one can get fooled.  We have neither the leisure nor the luxury to find "all the ways"; this bound brings fallacies into one's beliefs, thinking, work, and decisions.  So I say we work in a context which is pulled out of a meta context.

I see this as the stem of the fallacy; fallacies get wired into our thought process and into the engineering we do.  Our systematic and scientific interpretation accepts the fallacy as logical and systematic, and claims the problem we're solving is solvable.  Note that when I say solvable here, I mean we can deal with it for the costs and the value we get out of it.  By doing so, we handle and manage the fallacy to yield the value.


What Did I Read Just Now?

Well, what you read above are my engineering-philosophical thoughts.  Now, let me pull them into Software Engineering and Software Test Engineering.

The software system, or a hardware system, or any system that we have built is an assumption.  We assume it works because we work to make it work.  And we sense that it works because we adhere to the protocols which define these assumptions.

So that tells me that anything and everything built, or being built, is an assumption and has protocols.  If anything is working, it is working on assumptions.  If anything has failed to work, it has failed on our assumptions.  From that I infer that rational and systematic analysis is an investigated and experimented assumption.

These protocols and assumptions can blind us to fallacies and limit us from identifying them.  On witnessing an incident, the fallacy, or the outcomes of a fallacy, may get uncovered a bit.  That is what we do in RCA -- Root Cause Analysis.  We do the RCA so that we learn the fallacy in which we got trapped.

Even after an RCA for an incident, we will experience a similar or the same problem again.  Why?  We think that once we do the RCA and practice its lessons, we will not repeat the mistake -- this is a fallacy too.  We make a new mistake, which leads to another RCA.  Does that mean the RCA of an incident says not to fall for the same fallacy again, but it is okay to fall for another one?


Managing Self with Fallacy

I too fail in identifying fallacies.  I continue to prompt my thinking and analysis to see the obvious traps while I test and deliver the testing.  I do not identify all the fallacies in a context.  Instead, I work to find the fallacies that bring the most cost to the testing delivery and to the system's development.

Here are a few questions that I ask myself each time in my test sessions and analysis:

  1. What are the five contexts where this is a problem or risk?
  2. What are the five contexts where this is not a problem or risk?
  3. What are the five ways in which this looks to work as expected?
  4. What are the five ways in which this does not work as expected?
  5. What are the five contexts that matter most about this system, which I have missed knowing?
  6. In what contexts is this bug not a bug anymore? Why?
  7. In what contexts will this be a bug/problem/risk/cost? Why?
  8. What are the influencing factors and practices considered in making this decision? In what contexts do these factors and practices displace the value with the cost?
  9. What are the assumptions and beliefs that are driving my testing?  Whose assumptions and beliefs are they?
  10. Do I know that I can be fooled?
  11. Do I see any problem here?
  12. Do I see any value here?
  13. Do I see any cost here?
  14. What more can I see here?

Understanding and learning how my team and stakeholders attach importance to the same information helps me.  It potentially hints to me whether they are under the influence of any fallacies.  I learn that the context each team member and stakeholder is in also influences the importance they attach to the same information.  Sometimes, the team and stakeholders use the same word; but I notice they mean different things by it.

This has led me to learn that it is not about being precise at first; it is about having the ability to communicate and help each other gain clarity on what is expected.  And understanding how to achieve this clarity, considering the thought processes and beliefs that each stakeholder holds, is a must.

To sum up, we cannot keep ourselves away from fallacy.  What is not a fallacy at present can, and will, be a fallacy in the coming time.  The goal is how we manage to identify and deal with the fallacy that is influencing us and our work.

There is no escape from the fallacies!


Note: The count of words "fallacies" and "fallacy" in this blog post is 47.



Wednesday, August 28, 2019

App Crash! Testing around and inside the crash



Day in and day out, I come across testers, programmers, managers, and management spending effort on fixing all the crashes.  Yes, all the crashes.  The way I see it, if the app did not crash, I would not know the areas that are not being handled well enough.  My testing focus areas will also have tasks noted in such areas, to test and learn as much as possible.  I do that task provided I can make, or am given, time for it, as it is an unplanned task.



The common checks to handle a crash!?

I learn that an exception, if unhandled at runtime, leads to a crash.  There are multiple exceptions an app can witness which we never thought of during development.  In my initial days of testing, I was under the assumption that if we have null checks, index checks, illegal-argument checks, and state checks for an activity, we have handled most of the exceptions.  I learned I was wrong!  How many checks can I write in the code?  I'm not a programmer by job.  I'm a tester.
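
To make that concrete, here is a minimal Kotlin sketch of how those four checks can look in an Android app; the function, its parameters, and the data are hypothetical illustrations, not code from any real app:

    import android.app.Activity

    // Hypothetical UI step guarded by the four checks named above.
    fun showUser(activity: Activity, names: List<String>?, index: Int) {
        // State check: don't touch the UI of a finishing or destroyed Activity.
        if (activity.isFinishing || activity.isDestroyed) return

        // Null check: the data may never have arrived.
        if (names == null) return

        // Illegal-argument check: fail fast on an impossible value.
        require(index >= 0) { "index must be non-negative, got $index" }

        // Index check: avoid IndexOutOfBoundsException on a short list.
        val name = names.getOrNull(index) ?: return

        // ... safe to render `name` from here on ...
    }

Even this tiny, made-up function needs four guards; that is exactly why I could never write enough of them.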

I see these checks are not enough, and a few more got added to my test strategies eventually -- race conditions, unexpected data, wrong data, environmental factors, and many more.  The collection of these checks continues to grow.  Do I cover all these possible crash inducers in my testing?  No, I can't, and I won't have that luxury of time either.  Technically, I will learn and prioritize what to use and when.



How do the checks look in code, to me?

I write automation code which I need to assist my testing.  Here, I did write such checks.  At one point, I saw that the automation code was full of checks.  Is that the right way?  Definitely not!  A professional and skilled programmer might not do that.  If a programmer had to have such checks in each layer of the app's architecture, would that sound good?  Personally, as a tester, I will not design my tests that way.  As I'm not a programmer, I'm not aware of all the pros and cons of doing so.  At least I know it is not a good practice to have checks in each layer of the app's architecture.


By handling the exception in my automation code, I print the stacktrace of the exception.  But will I learn from it to be a better tester?  That's the question I have asked, and continue to ask, myself.  The exception "fix" I'm doing -- is it stopping me from learning about the problem I have introduced by the actions I'm performing on the app?  Is that exception handling blocking me from learning the underlying problem in my automation code and in the app?  If yes, then I have a fundamental problem to work upon, is my thought.
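
Here is a small Kotlin sketch of the two postures I keep weighing in my automation code; `clickSubmit` is a hypothetical automation step, not a real API:

    // Hypothetical automation step that can throw at runtime.
    fun clickSubmit(): Unit = TODO("drive the app's submit action here")

    // Posture 1: swallow the exception. The stacktrace gets printed,
    // the run continues, and the underlying problem stays unexamined.
    fun submitSilently() {
        try {
            clickSubmit()
        } catch (e: Exception) {
            e.printStackTrace()
        }
    }

    // Posture 2: fail loudly with context. The run stops, and the
    // app or automation state is preserved as something to investigate.
    fun submitOrFail() {
        try {
            clickSubmit()
        } catch (e: Exception) {
            throw IllegalStateException("Submit step failed; investigate before rerunning", e)
        }
    }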



Why does it crash?

Why does the app crash at all?  I learn that if the app gets into a state it was not designed for, it will, and should, crash.  As a tester, I will have to learn about this state (and such states) at the earliest, when I experience the crash and read the crash stacktrace.  I will be happy, and not make a fuss about it, if I am the one who sees a crash first.

I learn the priority and the impact of the crash.  Should I invest my time and investigate further, to provide as many details as possible to the programmer?  Or should I report it with good enough details and continue my testing?  I will answer this question for myself.  All I wish is that the user of the app does not experience the crash.  If there is a crash, as a member of the development team my intent is to keep the crashes to a minimal number, with little or no impact.  I see the crash as a great source of learning about my work in the app.

I used to be fussy about crashes years back, be it in desktop applications, databases, web applications, or mobile apps.  Now, I have come to the point where I love them; it is absolutely okay for an app to crash, and it should crash.  What we do post-crash, in fixing it, tells the bigger story.  In my work, the crashes have made the app better because the team was serious about those crashes.



What to do on having the crash?

Should the user lose the data she or he entered, on experiencing the crash?  Personally, I don't want this to happen to me.  If it happens, I will be annoyed!  That said, how do we handle it?  That's something we will have to sit with the programmers and the team to discuss.

At what point in the app, on encountering the crash, should we close the app and start over again?  At what point in the app is it okay to note the crash, pull the stacktrace, and let the user continue using the app with the entered data intact?  At what point in the app should I not show the crashing UI in view, decide what to show instead, and then resume safely from there on?  Personally, I feel this is a team effort, and not just a programmer's effort, to make it happen.
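
One option we could bring to that discussion, sketched below in Android/Kotlin, is to persist the draft to disk whenever the screen pauses, so even a hard crash loses at most the keystrokes of the current foreground edit.  The names here (DraftActivity, "draft_prefs") are hypothetical; this is a sketch, not a prescription:

    import android.app.Activity
    import android.os.Bundle
    import android.widget.EditText

    class DraftActivity : Activity() {
        private lateinit var draftField: EditText
        private val prefs by lazy { getSharedPreferences("draft_prefs", MODE_PRIVATE) }

        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            draftField = EditText(this)
            setContentView(draftField)
            // Restore whatever survived the last run, clean exit or crash.
            draftField.setText(prefs.getString("draft", ""))
        }

        override fun onPause() {
            super.onPause()
            // Save on every pause; a crash mid-edit loses only the latest keystrokes.
            prefs.edit().putString("draft", draftField.text.toString()).apply()
        }
    }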



A few testing strategy hacks to uncover crashes

Here are a few things I do, and ask my fellow testers to do, when testing mobile apps:

  1. Using test data which will check the data integrity in the app at the entry point, during processing, and post processing.
  2. Identifying the states of the app and passing invalid states to the app at the entry point, during processing, and post processing.
  3. Identifying the input which is not from a tester or a user.  I classify the inputs over which I have no control.  For example -- the incoming intent; the app responding to APIs (default values, entered values, and processed values); the app receiving the response from APIs; the device state; the app's activity lifecycle state and data/state exchange; and many more like this.  (See the sketch after this list.)
  4. Depending on whether it is an Android or an iOS app, many more strategies can be narrowed down to be specific and workable.  In the end, what time I'm left with in the test cycle, and what the business wants, direct me on what to do.
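
For point 3, here is a Kotlin sketch of feeding the app an intent it never anticipated; the action and extra names are made up for illustration:

    import android.content.Context
    import android.content.Intent

    // Build an Intent that violates the target screen's assumptions:
    // a wrong-typed extra, and the other expected extras deliberately missing.
    fun buildHostileIntent(): Intent =
        Intent("com.example.target.OPEN_DETAIL").apply {
            putExtra("item_id", "not-a-number") // the screen may expect a Long
            addFlags(Intent.FLAG_ACTIVITY_NEW_TASK) // needed outside an Activity context
        }

    // Fire it from an instrumentation test or a companion app, and watch
    // how the activity under test copes -- or crashes.
    fun fire(context: Context) = context.startActivity(buildHostileIntent())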


Debugging and Investigation skills

There are libraries which collect the crash, on an exception, along with other details such as device info, user info, and network info.  I have sat with programmers and had difficulty reproducing the crash and experiencing it in the development environment.  That said, are the logs enough to fix a crash?  Maybe; so we handle the exception and let the app's flow continue at runtime.  But did we solve the root problem that caused the crash?  No!  This is where, I feel, the skill of a tester comes in, and it is very much needed.
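
As a rough Kotlin sketch of what such libraries hook into -- the process's default uncaught-exception handler -- with a plain log standing in for their upload pipeline:

    import android.os.Build
    import android.util.Log

    // Record some device context before the process dies; real libraries
    // attach far more (user, network, breadcrumbs) and upload it later.
    fun installCrashRecorder() {
        val previous = Thread.getDefaultUncaughtExceptionHandler()
        Thread.setDefaultUncaughtExceptionHandler { thread, throwable ->
            Log.e(
                "CrashRecorder",
                "Uncaught on ${thread.name}; device ${Build.MODEL}, API ${Build.VERSION.SDK_INT}",
                throwable
            )
            // Hand back to the previous handler so the app still crashes honestly.
            previous?.uncaughtException(thread, throwable)
        }
    }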

That skill of debugging and investigating defines who I am as a tester and the value I bring to the system.