Showing posts with label Test Model. Show all posts

Saturday, February 3, 2024

Database: Finding the Tables Having Specified Column Name

 

In today's pair testing session with a mentee, we were testing for database I/O.  We were on PostgreSQL.  One of the questions the mentee had was,

How can I figure out the tables having this column name?

Going through every table and exploring whether the column being looked for is present is time consuming.  It is not a practical approach to take, either.

I went through this when I started the ETL testing practice in 2011.

Here is the query that works on PostgreSQL to find the tables that have a specified column name.


Query:

-- Replace database_name and column_name with your own values.
select table_name, column_name
from information_schema.columns
where table_catalog = 'database_name'
  and column_name like '%column_name%';


It is a better approach, when you know the precise column name, to use an exact condition -- for example, column_name = 'EmployeeId'.


This query should also work on MySQL and MS SQL Server.  If it does not work on MS SQL Server, look into whether the FROM and WHERE clauses need vendor-specific changes.
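For a quick, runnable illustration of the same idea, here is a sketch in Python using SQLite from the standard library.  SQLite has no Information_Schema; it exposes the catalog through sqlite_master and PRAGMA table_info instead.  The schema below is made up for the example.

```python
import sqlite3

# Hypothetical schema for the example: two tables share the column "employee_id".
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (employee_id INTEGER, name TEXT);
    CREATE TABLE salaries  (employee_id INTEGER, amount REAL);
    CREATE TABLE projects  (project_id INTEGER, title TEXT);
""")

def tables_with_column(conn, column_name):
    """Return the names of tables that contain the given column."""
    tables = [row[0] for row in
              conn.execute("SELECT name FROM sqlite_master WHERE type = 'table'")]
    matches = []
    for table in tables:
        # Table names come from the catalog itself, so interpolation is safe here.
        columns = [info[1] for info in conn.execute(f"PRAGMA table_info({table})")]
        if column_name in columns:
            matches.append(table)
    return matches

print(tables_with_column(conn, "employee_id"))
```

On PostgreSQL, the information_schema.columns query above remains the way to do this; the sketch only shows the shape of the lookup.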



Performance Testing - What to Know Before User Behavior and Traffic Pattern?

 

This blog post is part of the 100 Days of Skilled Testing series.  I see I do not have to pick every question asked in this series.  I pick and share the ones where I see I can add value.

The twelfth question from season two of 100 Days of Skilled Testing is:

What strategies do you use to simulate realistic user behavior and traffic patterns when conducting performance tests?

The twelfth question, as asked, is vague; it needs to be refined for precision before I pick it up and continue.


The Question and the Gap

I see that the below are missing from the question as asked:

  1. What aspect of performance is under evaluation?
  2. What is the system being evaluated for that aspect of performance?
  3. What part of the system is being evaluated for that aspect of performance?
    • Queuing? Messaging? Database I/O? Memory? Storage? CPU? Client performance? A functional module?
  4. Who are the users?  What are their personas?
  5. How and where are the users accessing the system?
  6. In what context are the users accessing this system?
  7. What is the geographic location of the users who are accessing this system?
  8. How long do these users stay connected while accessing this system?
  9. Are there any differences among these users in their roles and privileges in accessing this system?
  10. Can the user access the system through multiple interfaces?
  11. Are you assuming the user is on a web browser and mobile apps to access this system?
  12. Is the system you are referring to a software system, or some other system in a controlled environment, like an access door, an elevator, etc.?
  13. You are asking to simulate user behavior and traffic patterns.  Should I assume you and I know, or agree on, some volume of users?  And are all these users here for the same purpose when accessing the system?
  14. Are you considering any time, or a particular time, when talking about the traffic pattern?
  15. You say "realistic user".  Are there any unrealistic users accessing your system?
    • Do you see that bots and other non-human users are also allowed in your traffic?
  16. Have you evaluated this earlier on your system?
    • If yes, do you have the history and data for user behavior and traffic patterns?
    • If you don't, are you allowed to use, or do you have, your competitors' user behavior and traffic pattern data?
  17. What is the tech stack of your system?
    • What part of your tech stack do you want to evaluate with this user behavior and traffic pattern?
  18. What is the architecture of your system?
  19. What part of your system and its architecture is being evaluated with this user behavior and traffic pattern?
  20. Are you running this exercise for the first time?  If not, where can I refer to previous exercises?
  21. How are the interactions and events handled from start to completion?
    • What is needed to complete the transaction in the workflow?
    • How can this transaction go invalid through lacking or incorrect data, state, and action?
  22. What are the spike, drop, saturation, expected, unexpected, and average numbers in the incoming traffic?
  23. What do you understand by traffic?  Do you mean the number of requests coming in?
    • Do you mean the I/O operations being committed?
    • Do you mean the responses received at the other end?
    • What is the definition of "traffic" in this context?
  24. What do you want to study and evaluate with the user behavior and traffic pattern information gathered in this context?

Using the above questions, I get an idea of how to proceed.

I will build a model from the information I collect using the above questions.  This model will be used further in testing for an aspect of performance.  The value added to the performance test depends on this model as well.  To get a better model in context, it is useful to address the gaps.  From here, I start to think further.
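As a sketch of what such a model can start as, here is a minimal, hypothetical traffic model in Python.  The personas, their shares of the user base, their request rates, and the spike factor are all assumptions standing in for answers gathered from the questions above.

```python
# A minimal, hypothetical traffic model built from answers to the questions
# above.  Every persona name and number here is an assumption for illustration.
personas = {
    # persona: (share of active users, avg requests per active minute)
    "browser_shopper": (0.70, 4),
    "api_integration": (0.25, 30),
    "admin_backoffice": (0.05, 2),
}

def expected_requests_per_minute(active_users):
    """Expected request volume for a given number of concurrently active users."""
    return sum(active_users * share * rate for share, rate in personas.values())

def peak_requests_per_minute(active_users, spike_factor=3):
    """A crude spike estimate: the expected load scaled by an assumed factor."""
    return expected_requests_per_minute(active_users) * spike_factor

print(expected_requests_per_minute(1000))   # roughly 10400 requests/minute
print(peak_requests_per_minute(1000))       # roughly 31200 requests/minute
```

Even a toy model like this forces the questions into the open: where do the shares come from, what defines "active", and what justifies the spike factor.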



What do you ask and look for when building a model for User Behavior and Traffic Pattern?



Performance Testing - The Unusual Ignorance in Practice & Culture

 

I'm continuing to share my experiences and learning in the 100 Days of Skilled Testing series.  I want to keep these short, as mini blog posts.  If you see that detailed insights and conversations are needed, let us get in touch.


The ninth question from season two of 100 Days of Skilled Testing is

What are some common mistakes you see people making while doing performance testing?  How can they avoid them?


Mistakes or Ignorance?

It is a mistake when I take an action even though I'm aware that it is not right in the context.

I do not want to label what I share in this blog post as mistakes.  Instead, I call it ignorance, with or without the awareness and the experience.

The ignorance described here is not tied just to the SDLC.  It is also tied to the organization's practice and culture, which can create problems.

For this blog post's context, I categorize the ignorance into two categories -- Practitioner and Organization.

  1. Practitioner's ignorance
    • Not understanding performance, performance engineering, and performance testing
      • When performance testing is mentioned, taking it as -- "it is load testing"
      • No awareness of what performance and performance engineering are
        • Going to the tools immediately to solve the problem without knowing what the performance problem statement is
      • Be it web, API, mobile, or anything else,
        • Going to one tool or a few tools and running tests
      • Not much thinking about how to design the tests in the performance testing being done
      • Ignoring Math and Statistics, and their importance in performance analysis
      • No idea of the system's architecture and how it works
        • Why is it the way it is?
      • Extending the idea of end-to-end into testing for performance, then having a hard time understanding and interpreting the performance data
        • How many end-to-end paths have your tests identified?
        • Can we test for performance across all these identified and unidentified end-to-end paths?
      • Relying on resources/content on the internet and applying them in one's context without understanding them
      • No idea of the tech stack and how to utilize the testability it offers in evaluating performance
      • Not using or asking for testability
      • Getting hung up on the 2 or 3 most spoken-about and discussed tools on the internet
      • Applying tools and calling it performance testing
      • Not attempting to understand the infrastructure and resources
        • How they impact and influence the performance evaluation and its data
      • A vague idea of resource saturation
        • Thinking it is a problem
        • Thinking it is not a problem
      • Not working to identify where the next bottleneck will be when solving the current bottleneck
      • What to measure?
      • How to measure?
      • When to measure?
      • What to look for when measuring?
      • Not understanding the OS, hardware resources, tech stacks, libraries, frameworks, programming language, CPU & cores, network, orchestration, and more
      • Not knowing the tool and what it offers
        • I learn the tool every day; today, it is not the same to me as it was yesterday
          • I discover something new that I was not aware existed or was offered
          • I learn new ways of using the tool with different approaches
      • No story in the report, with information/images that are self-describing to most who read it
      • And more; but the above resonates with most of us
  2. Organization's ignorance
    • At the org level, first and foremost, it is ignorance of Performance Engineering
      • Ignoring the practice of performance engineering in what is built and deployed
      • Thinking and advocating that increasing hardware resources will increase and better the performance
        • In fact, it will deteriorate over a period of time no matter how much the resources are scaled up and added
      • Ignoring the performance evaluation and its presence in the CI-CD pipeline
      • Insisting that the performance tests on the CI-CD pipeline should not take beyond a few minutes
        • What is that "few minutes"?
      • Not prioritizing the importance of having the requirements for Performance Engineering

Recently, I was asked a question - How to evaluate the login performance of a mobile app using a tool "x"?

In another case, I saw a controller holding all the HTTP requests made when using a web browser, with these requests being run to try to learn the numbers using a tool.


I do not say this is the wrong way of doing it.  It is a start.

But we should NOT stay here, thinking this is performance engineering and that this is how to run tests for learning about performance aspects.


To end: performance is not just -- how [why, when, what, where] fast or slow?  If that is your definition, you are not wrong!  That is a start, and good for a start; but do not stick to it alone and call it performance.  Performance is capability.  It is about getting what I want in the way I have been promised and expect; this is contextual, subjective, and relative.  The capability leads to an experience.  What is the experience being experienced?

Sometimes, serving the requests at what you call slow is itself performance.  What is slow, here?

The words fast and slow are subjective, contextual, and relative.  They are one small part of performance engineering.

That said, let me know: what have you been ignoring or unaware of in the practice of Performance Engineering & Testing?


Monday, January 22, 2024

RAAMA: My Test Discovery Model

 

RAAMA -- I Look at You Everyday!


I have tried to put up one of my Test Discovery models in a conceptual way here, under the name RAAMA - Refer to, Arrange, Action, Monitor, and Assert.

Maybe this model helps you and your test engineering team as it is helping me.  Adapt it to your context, with additions or subtractions for what you are seeking.

I refer to this RAAMA of mine every day, and whenever I'm testing.  I'm finding new learning and realizations every day that I was unaware of earlier.  My understanding of RAAMA is not the same as what I had the previous day.

My understanding of this RAAMA is incomplete, and I have made peace with it by accepting that.  My understanding is growing and getting better every day.  I will share a better version of it as I experience it.

Each time I look at RAAMA and refer to it, I see a new dimension to it.  The awareness, the exposure, and the questions are getting better, giving a better realization of what I was ignorant of and unaware of.  RAAMA is shaping me into a better test engineer today than I was earlier.



RAAMA - I Look at You Everyday!





RAAMA - One of my evolving models for Test Discovery


Note: I have not explained in detail what I mean by each node and its sub-nodes.  I can talk about and discuss it with you if you are looking for that; I'm just one email away to get started.



Sunday, November 19, 2023

Waterfall or Agile: Testing for Performance - Where to Start?

 

Do you understand Agile?  I have shared my understanding here; give it a read.

The eighth question from season two of 100 Days of Skilled Testing is:

Can you share some best practices for conducting performance tests within an Agile development environment?


Best Practices and the Agile


The irony is that Agile says there is no best practice.  It asks us to tailor and fit the practice to the context, so that continuous delivery and value are delivered consistently while upholding Agile's principles.

Yet we talk about best practices in the Agile context, like the eighth question asked here.

What is the effective way to test in continuous delivery?

As a test engineer, how can I start thinking about and testing for performance from the inception of a feature's thought?  I see it is not hard to do so.  As you read further in this post, you will have the perspective and awareness to do it.


Performance in Waterfall and Agile

I have learned that performance is an experience.  It does not differ because of Waterfall or Agile.  If the performance is not a pleasing experience, it will impact stakeholders, no matter whether it is Waterfall or Agile.

But, the question when evaluating for the performance is -- where to start, when to start, how to start, and with what to start?

As of today, I do not see differences in the mindset and skills one has to have for testing for performance in Waterfall and Agile.  The approach could differ in certain phases; otherwise, I see the same in both practices.

I will rephrase the eighth question to this:
What is your practice for evaluating performance right from the start of product development in your project?
I do not want to wait until I hear -- the development is completed and deployed; now we can start running the performance tests.

What can I do as part of performance tests from the first day of development and the first commit?  This is my intent and the area I look at in strategizing the testing and tests.



The Culture of Engineering

At the start and end of the day, when we developers start and finish the work,

  • How the work is done, and why, is defined by the engineering culture practiced by that organization.
    • The Performance Engineering of the software products and solutions being built will be driven by the culture practiced.

The Test Engineering, and how we test and automate, will be driven by the culture of engineering practiced in the organization.

Writing code not just for building the functionality but also for performance is a culture-driven factor.  The organization's culture of engineering practice drives it!



Testing for Performance - Where to Start?


I'm sharing the research work that I'm doing and experimenting with on performance engineering and performance tests.  I'm seeing the results and value out of it, and so are the stakeholders.

Today, we are getting skilled at exploring and testing without a requirement document and SLAs in hand.  Aren't we?  Haven't you?

I use my MVPT to figure out what the minimum performance tests for the feature are.  As part of this, I explore with the help of available aids to evaluate the performance.

To start, I will use these questions to figure out the performance tests:
  • What are the minimum viable questioning performance tests that you have got to test this feature?
  • What are the minimum viable questioning performance tests that you have got to test this workflow?


Unit Tests for Time and Space Complexity


I work closely with programmers to gather information on the below when the feature's code is committed, as part of Unit Tests.
  • The execution time taken by that feature's code -- the Big O notation for space and time complexity
    • Usually, the Unit Tests focus on functional checks and clean code practice
    • But when we, the test team, ask and push for performance data, this can come as part of Unit Tests
      • An architect or a principal engineer can set an expectation on
        • What should the time and space complexity of the code for a feature be?
          • Each function and block needs to be evaluated on this
          • As said earlier, this depends on the engineering practice culture of an organization
            • If the culture wants it, it will be there; else, just the functional code will be delivered and not the performance code
      • If the time and space complexity analysis outcome is not as expected, the code written has to be rethought and refactored
        • The review process needs to send it back
        • The comment, with data, has to be published
          • This will be useful for modeling the performance tests by the test engineers who will be working on them
      • Doesn't it look like an effective, useful practice as part of Performance Engineering right at the early stage?
        • This is very much applicable to projects running on Agile or Waterfall
Do you have this in your project and Unit Tests written?

Time and space complexity questions should not be confined just to the SDET [test engineer] interview.  A test engineer has to ask for them and apply them in her or his day-to-day work.
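As a sketch of this practice, here is a hypothetical unit test in Python that encodes an architect's complexity expectation.  The feature code and the expectation are made up for illustration: the function counts its dominant operations, and the test asserts that doubling the input roughly doubles the work, i.e., O(n).

```python
import unittest

def find_duplicates(items):
    """Hypothetical feature code; `steps` counts the dominant operations."""
    find_duplicates.steps = 0
    seen, duplicates = set(), []
    for item in items:
        find_duplicates.steps += 1
        if item in seen:
            duplicates.append(item)
        seen.add(item)
    return duplicates

class ComplexityExpectationTest(unittest.TestCase):
    def test_linear_growth(self):
        # The architect's expectation here is O(n) time: doubling the input
        # should roughly double the counted steps, not quadruple them.
        find_duplicates(list(range(1000)))
        small = find_duplicates.steps
        find_duplicates(list(range(2000)))
        large = find_duplicates.steps
        self.assertAlmostEqual(large / small, 2.0, delta=0.1)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ComplexityExpectationTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Counting operations instead of measuring wall-clock time keeps the check deterministic on shared CI hardware, where timings fluctuate.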


Profiling Tests by Test Engineers


We testers usually do not get into analysis of the product's code.  We have to build the skill to run profiling on the product's code and analyze the resource data.
  • Test Engineers can test the feature's code with the help of the IDE's profiling (runtime analysis) and collect performance data by identifying the performance bottlenecks
    • This runtime analysis can profile for
      • Memory snapshots
      • Thread analysis
      • Monitoring of resources
      • CPU and allocation profiling
      • And more
      • The problems and risks can be reported upon analysis
    • Compare the performance data of two different solution approaches
This information indicates where the risks and problems will be when we deploy the code.  In my opinion, this is useful information for modeling further performance tests.  It is first-hand information, which is very powerful to have before we start using any other performance testing strategies and tools to aid the tests.
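Outside the IDE, the same kind of runtime analysis can be scripted.  Here is a sketch using Python's built-in cProfile and pstats; the feature code and its deliberate inefficiency are made up for illustration.

```python
import cProfile
import io
import pstats

def format_row(i):
    # Deliberately inefficient string building so it stands out in the profile.
    s = ""
    for ch in str(i) * 50:
        s += ch
    return s

def build_report(n):
    """Hypothetical feature code under test: builds n report rows."""
    return [format_row(i) for i in range(n)]

# Profile the feature code, as a test engineer could do from a script.
profiler = cProfile.Profile()
profiler.enable()
build_report(200)
profiler.disable()

stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats("format_row")   # focus the report on the suspected hotspot
print(stream.getvalue())
```

The printed stats show call counts and cumulative time per function, which is exactly the first-hand data described above for modeling further performance tests.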



Get Started with Performance Engineering and Tests


These are available in the IDE.  We think of performance testing tools and ask how to test for performance.  To be precise, we test developers (test engineers) should change our minds and shift first.  If not, as I say, we will be the first bottleneck to ourselves.  Did you know this way of testing for performance?  Why not introduce it in your project and organization?

If seen this way, these test practices can be used right from the day we commit the feature's code.  This is a place to start for the performance tests.  It will be a differentiator together with MVPT, and it guides the MVPT to design effective performance tests in the context.

I do not say these are best practices -- there are no best practices.  But this is a useful practice when the organization and stakeholders ask for it.  Let your organization and stakeholders know how well you can test for performance right from the first commit of the product's code.

To stop and end here,
  1. Do not just test for functionality from day one; also test for performance from day one.
  2. Influence your organization's engineering culture and developers to develop not just functional code, but also performant code.




MVQT: The Testing and Tests with a MVP's Perspectives


I was leading multiple teams and their deliveries in a testing services company.  Then I came up with this thought -- like the MVP, I also have MVT (Minimum Viable Tests) for an MVP.

Further, I expanded this thought in my day-to-day practice, tailoring it to different contexts.  I'm observing that it applies well to different contexts when I tailor it to them.  After experimenting with it for 10 years, I'm sharing it as a blog post.


What is a MVP?

I take this from Eric Ries.  It looks simple and precise to me.

The Minimum Viable Product is that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort.

I see that this technique [and concept] can be applied to anything I'm developing.  As a test engineer, I develop tests and test code, in the main, as part of my testing.  On applying the idea of MVP to my testing and deliveries, I see the value and results.

Read this blog post of mine to know who a developer is.


Testing, Tests, MVP and MVQT

In software test engineering, I see the MVP as Minimum Viable Questioning Tests.


The Minimum Viable 'Q' Tests (MVQT) for a focused area of a feature [or for a feature]

  • Help me to identify the priority tests that should be executed first
  • Allow me to learn, on priority, the information that matters critically to the product and stakeholders
    • So that an informed decision can be made.


The Q in MVQT stands for "questioning".  I read it as Minimum Viable Questioning Tests.  I also see the "Q" as a placeholder for a quality criterion.  That is, MVFT means Minimum Viable Questioning Functional Tests for a feature or a workflow.




The MVQT are key to knowing:

  • Have I identified and designed the priority tests?  How do I know that I have got them?
  • Did stakeholders get the information they wanted to know on priority?
  • Did MVQT help me to
    • Explore and know what I wanted to know about a feature or a workflow?
      • How fast was I in knowing and learning this?
      • How did I develop my tests incrementally?  Did I?  If not, is it an MVQT?
  • Did MVQT help me to know
    • Am I aligned and in sync with the expectations of my stakeholders and the customers who are using the software product I'm testing and automating?
  • Did the MVQT help me
    • In collecting the critical information in a given context for the scope of testing and automation?
    • Do the learning and outcome from this MVQT help reinforce the validated learning of the customer?
  • Do the MVQT results support the outcome of the Unit Testing results?

The tests in an MVQT have to be consistently revised and evaluated to keep them an MVQT.  Note this: not all tests are MVQT.  If the number of MVQT for a part of a feature, or for a feature, keeps growing, it is time to think about what MVQT means for you.

The "minimum" tests are highly effective, and they help me learn and test better, technically and socially.



MVQT and Testing

  • Sanity or Smoke Tests
    • The set of MVQT that helps me learn whether the build can be taken for further testing
  • MVFT - Minimum Viable Questioning Functional Tests
    • Apply this to a feature, a workflow, or that part which can be evaluated with minimum tests for its functionality
      • To learn whether this aligns with the validated learning of the customer [stakeholders]
  • MVPT - Minimum Viable Questioning Performance Tests
  • MVUT - Minimum Viable Questioning Usability Tests
  • MVAT - Minimum Viable Questioning Accessibility Tests
  • MVTxT - Minimum Viable Questioning Tester's Experience Tests
  • MVST - Minimum Viable Questioning Security Tests
  • MVAF - Minimum Viable Questioning Automation to a Feature
  • MVLT - Minimum Viable Questioning Localization Tests
  • MVUIT - Minimum Viable Questioning UI Tests

Add more of these to your list and context.

In a way, MVQT should ask and look for testability, automatability, and observability.  If this is not happening, there is no possibility of saying I have got my MVQT.

More importantly, MVQT plays a major role in the CI-CD ecosystem.  If I have to have my tests in the CI-CD pipeline, then MVQT is the way, as it focuses on a targeted area to evaluate.  Otherwise, it is hard, impractical, and in practice not possible to test in a CI-CD ecosystem while delivering continuously.


Ask and Review for MVQT

Ask for MVQT, when you review these:

  • test strategy, test framing, test design, test ideas, test cases, test plan, test architecture, test engineering, testing center of excellence, and test code

For example,

  • What are the minimum viable questioning performance tests that you have got to test this feature?
  • What are the minimum viable questioning performance tests that you have got to test this workflow?
  • What are the minimum viable questioning security tests that you have got to test this feature?
  • What are the minimum viable questioning GUI tests that you have got to test this feature?
  • What are the minimum viable questioning contract tests that you have got to test this endpoint?
Likewise: what are the minimum viable questioning automation tests that you have got to test this feature?

Ask how these tests qualify as MVQT in this context of testing and automation.

This should help you see how effective the test strategy is in a given context.

Importantly, MVQT and its effectiveness are a form of testability to test your tests.



The Credit is to Me

I'm not sure if the idea I'm sharing in this blog post is practiced by other test engineers.  I have not seen it being discussed in public forums.  I have not come across it within my awareness and the exposure I have put myself to.

Hence, I will take this credit to me.  Giving credit honestly is not a common sight and practice.  I have not received due credit when the ideas, thoughts, and work I have come up with have been used.

So, I make this an open letter and call out that the credit for this idea, thought, concept, and practice goes to me when you listen to it, use it, and practice it.



Wednesday, October 18, 2023

What are KPIs and Metrics?

 

I used to have this question -- are KPIs and Metrics the same?  Especially when I started to learn and practice Performance Testing, this question bounced back to me often.

Do you have this question?


The Use of KPIs and Metrics

When business and stakeholders talk about them so much, there should be value in them.  What is that value?  Why is it important to identify and capture the KPIs and Metrics?

The KPIs and Metrics are derived from the data we collect.  These data are processed to extract and normalize them, so that they are in the state expected by the consumer for making a decision.

The stakeholders will make decisions and take actions referring to KPIs and Metrics. For example,

  • The number of Daily Active Users (DAU) is a KPI and also a metric.
    • But a metric is not necessarily a KPI.
  • How many of these DAU closed the transaction within five seconds using a wallet?
    • That is a metric; not a KPI.

Another example,
  • KPI
    • How many users installed the latest version of the mobile app and have signed in?
      • If there are no active users on the latest version, that indicates a kind of risk and problem for the business.
    • Reopened tickets in Customer Care
      • This indicates that something is going wrong
  • Metric
    • What is the average time taken to see a streaming screen for users on a 4G data network?
      • If this is not captured, there is no data for the business to establish a relationship with the KPI set.
    • Average reopened tickets in customer service
      • The distribution and time trending towards a lower number
      • The distribution and time trending towards a higher number
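The distinction can be sketched as a small computation.  This is a hypothetical example with made-up event data; the function names are mine, purely for illustration, not an established API.

```python
from datetime import date

# Hypothetical event log: (user_id, day, seconds_to_complete_transaction).
events = [
    ("u1", date(2023, 10, 18), 3.2),
    ("u2", date(2023, 10, 18), 6.1),
    ("u3", date(2023, 10, 18), 4.8),
    ("u1", date(2023, 10, 18), 2.9),
]

def daily_active_users(events, day):
    """KPI: distinct users with at least one event on the given day."""
    return len({user for user, d, _ in events if d == day})

def closed_within(events, day, seconds):
    """Metric: transactions on the day completed within the threshold."""
    return sum(1 for _, d, t in events if d == day and t <= seconds)

day = date(2023, 10, 18)
print("DAU (KPI):", daily_active_users(events, day))
print("Closed within 5s (metric):", closed_within(events, day, 5.0))
```

Both come from the same collected data; the KPI is the one the business steers by, while the metric supports and explains it.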

KPIs and Metrics are not the same, though both are quantitative measurements.  They are different.  Identifying and knowing the difference between them in your context is important.

They go hand in hand when setting a direction and action, so that the business and stakeholders realign to the goals and objectives defined.



KPIs vs Metrics






To conclude this post: investigate your metrics and question why the KPIs were chosen.  It will help you design your Test Models and identify the tests in a given context.




Tuesday, October 17, 2023

Software Engineering - The Unquestioned Understanding of Client in Testing


Client - The Unquestioned Understanding

When asked or said "client" or "client-side", most of us assume or take it as:

  • Web page displayed in the browser
  • Mobile apps - Android and iOS apps
  • Desktop applications

I see this as one of the unquestioned understandings and assumptions in Software Engineering.  While it is not wrong, the client does not always mean -- a web page displayed on a browser, mobile apps, desktop applications, a terminal, etc.

The client is one of the subjects we have not attempted enough to understand in Software Testing & Engineering.

A client is one that consumes the service in the form of a response, and then does what it has to do.



Client - The Contextual Entity


The entity that takes the client's place [role] is entirely based on the context.  That web page displayed on a browser is a client per some model.  So are the mobile apps.  A service looking for data from Redis is a client too.

The client can be within the backend system: a component that requests another entity to process its request and awaits the response.  This client is not always a mobile app, a web page on a browser, or a terminal where I'm working with commands.  Do you see that?
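To make the point concrete, here is a minimal sketch in Python.  CacheStore stands in for a backing store such as Redis; all the class and method names are hypothetical, purely for illustration.  The ReportService is the client in this interaction, even though no browser or mobile app is involved.

```python
class CacheStore:
    """Server role in this interaction: it serves get() requests.
    A stand-in for a real store such as Redis."""
    def __init__(self):
        self._data = {"report:42": "cached-report-body"}

    def get(self, key):
        return self._data.get(key)

class ReportService:
    """Client role: it requests data from the store and consumes the response."""
    def __init__(self, store):
        self.store = store

    def fetch_report(self, report_id):
        response = self.store.get(f"report:{report_id}")
        return response if response is not None else "cache-miss"

service = ReportService(CacheStore())
print(service.fetch_report(42))
print(service.fetch_report(99))
```

Here, "client-side performance" would mean evaluating ReportService's request path and its handling of misses -- nothing to do with a browser.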

Next time you hear the word client, ask for the context and know which client is being discussed.



Client's Awareness in Performance Tests


By now, you should be breaking your assumption and the myths around the word "client".

In testing for performance, it is critical to be aware of who the client is, when, and how.  Evaluating such a client will not be like evaluating a web page on a browser or the mobile apps.

The fifth question from season two of 100 Days of Skilled Testing, is:

How does client-side performance testing contrast with server-side performance testing, particularly in their objectives and area of emphasis?

I hope that now, when client-side performance is said, you will question:

  • What client are we talking about here?
  • Where does this client sit in the system's architecture?

Based on this information,

  • How the tests for performance are approached and executed for a client differs.
  • What is collected and observed to evaluate the performance of a client differs.



This blog post is not meant to illustrate the different clients and how to evaluate them for performance.  When the question talks about a particular client, in a context, I will share the approach and how to evaluate it.

To end: have we explored the clientless interface?  How would we test this interface?

  

Saturday, March 25, 2023

Black Box in Every Other Box of Software Testing

 

Modeling Software Testing With Boxes


A fact is something that is not put to scrutiny or questioned much and often.

As a fact, Software Testing is explained to us using boxes.  That is,

  • Black Box
  • White Box
  • Grey Box

Is this wrong? No, it is not wrong.

There was a need to explain how one could visualize the way a person would interact with and interpret the software system when testing.  The analogy of these boxes helped, and still helps.  These boxes are mindsets.  In a way, these boxes are like models for interpreting the different ways and approaches we take in Software Testing.

I see that we are seeing the Black Box in every other box.  Maybe this limits one from thinking about and learning software testing in anything other than a black box mindset.

If you ask, are we not automating?  Is that not Grey Box?  Very much so; we are in a black box mindset when writing automation as well.  I include myself here.  I'm exploring how to break out of this and see Software Testing.

Do programmers think of their code as different boxes?

  • A programmer reads, writes, deletes, and views the data, events, and state
  • The programmer, as well, cannot see what's happening with the binaries on the electric circuits
  • The programmer evaluates her/his code's testing via logs, debugger, and assertion for the data, state, and events
    • Is this a white layer or box?
      • Is it called white because one can see through the logs, debugger, and assertion?
        • But that is still not a sight of binaries on the electric circuit, right?
        • If one could see through the binaries, we should not be having race conditions, out-of-memory, and unhandled exceptions
          • Isn't so?
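The evaluation via logs and assertions described above can be sketched minimally. This is an illustrative example of my own (the `Counter` class and its events are hypothetical), showing how a programmer "sees" data, state, and events without ever seeing the binaries:

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger("counter")

class Counter:
    """A tiny stateful unit: data (value), state (closed), events (log lines)."""
    def __init__(self):
        self.value = 0          # data
        self.closed = False     # state
    def increment(self):
        if self.closed:
            raise RuntimeError("increment on a closed counter")
        self.value += 1
        log.debug("event: incremented to %d", self.value)  # event, visible only via the log
    def close(self):
        self.closed = True
        log.debug("event: closed")

c = Counter()
c.increment()
c.increment()
assert c.value == 2     # asserting on data
assert not c.closed     # asserting on state
c.close()
try:
    c.increment()
except RuntimeError:
    log.debug("event: invalid path exercised")  # the electric circuit stays unseen
```

Everything here is still observed through a narrow window of logs and assertions; the "white" sight is a curated view, not a sight of the binaries.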

This makes me feel, there is a Black Box in every box and we are largely confined to this Black Box.  

Exploring to step outside this box helps to understand the testing, software testing, system [and software] under test, and what is needed to test better.


Monday, February 14, 2022

Model, Oracle, and Perceived Quality

 

These are among the most repeated words in my blog and also in the testing community's discussions.  Nevertheless, I write a blog post on them and share my interpretation and understanding here.  I came across a discussion on these words in one of the testing communities. Here is how I see it as of today with my practice of Software Test Engineering.


Model


The "Model" can be seen as a representation.  In Software Engineering, we use models as a reference to build and develop the product.  Software Testing can be leveraged when we use the models to design, build, execute and interpret the tests.  How I build the models or see models can vary each time in my work.

Today, as I write this blog post, here is how I understand a model:

  • Model is how I'm understanding
  • Model is what I'm understanding
  • Model is what I have understood
  • Model is what I have to understand
  • Model is also what I have not understood and what I am not aware of
    • But this would not be included in the picture or written in the model most times
      • We tend to see the model as a working object (always?)
      • In a working object, having something not understood and not known is not a common practice
  • A model can also be a non-diagram: organized documents or words
    • I look at The Constitution of India as a model by which I live in Indian society

Usually, one looks for a diagram on hearing the word "model".  I did this too, that is, I looked for a diagram and continue to do so.  But a model need not always be a diagram; the diagram simply helps in relating and understanding.

Example of models:
  • My representational understanding of how I can broadcast a YouTube live stream on Zoom
    • Here I can have multiple models
      • From a tech layer spanning to the UI layer of Zoom, and then to the people watching it on Zoom, YouTube, or both
      • And many more models like this
    • If I were on a sales team, my model thinking for the same would be different
      • Like which platform, YouTube or Zoom, caters the content to the maximum audience I'm focusing on



Oracle


In a context-free case, I will describe an Oracle as -- a reference which I consult to learn and interpret what I'm experiencing or about to experience.  It helps me to understand and in decision making.

That said, the Oracle is not a source of truth.  It is a reference and so it is a heuristic.  If it were a source of truth, could it be a heuristic?  This is the challenge and confusion one goes through in understanding and drawing the relationship between the oracle and the heuristic.  When we understand oracle and heuristic, it is simple to draw the relationship between them and know when one can serve as the other.

When a heuristic is taken as a source of truth, it can fail to be a source of truth at any time.  A heuristic is a fallible way of solving a problem or making a decision.  That said, not all heuristics are oracles; and oracles can be used as heuristics.

In a software testing context, an Oracle is quoted as -- a way to identify a problem that we experience during testing.  Maybe we use the word "problem" because we are in the context of Testing.  Testing is expected to figure out what it is commissioned for; in most cases, we take that to be finding problems.  So the definition or quote for the word Oracle in a software testing context has the word "problem" in it.


Example:
The 1000 INR currency note that we had in India was a valid and acceptable currency.  If someone had asked before demonetisation, this was accepted as truth.  But today it is not a valid currency, and this is accepted as truth.


The 1000 INR currency note was a valid currency until 8th Nov 2016.  This currency was used in daily-life transactions by people.  This is both a heuristic and an oracle.  

Today, a 1000 INR currency note is not a valid currency.  This is both an oracle and a heuristic.  

If I go with a 1000 INR currency note, it will not be accepted in transactions.  People will identify it as a problem if this currency note is exchanged in a transaction.  

  • Oracle: The 1000 INR currency note is not valid and accepted in transactions at the time of writing this post.  So do not tender a 1000 INR currency note in a transaction.
    • This gives an example of an oracle as a heuristic
  • Heuristic: Use the other available valid currency in a transaction.  Is there any Indian currency note that is invalid today?  What are the valid currency notes that I can use today?
    • This gives an example of heuristic as an oracle
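The currency example can be expressed as a small check. Here, the set of valid notes is the oracle I consult, and it is itself a heuristic: the set changed on 8th Nov 2016 and can change again (the denominations below are illustrative, not an authoritative list):

```python
# The oracle: denominations I currently believe to be valid tender.
# It is fallible -- a future demonetisation can invalidate this set,
# just as the 2016 demonetisation invalidated the 1000 INR note.
VALID_NOTES = {10, 20, 50, 100, 200, 500}

def can_tender(note: int) -> bool:
    """Consult the oracle: would this note be accepted in a transaction?"""
    return note in VALID_NOTES

assert can_tender(100)       # accepted today
assert not can_tender(1000)  # was truth before 8th Nov 2016, a problem today
```

The check works only as long as the reference set reflects reality; the moment it does not, the oracle fails silently, which is exactly what makes it a heuristic.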


Perceived Quality


  • We experience, and experience is an emotion
  • Quality is an emotion and an experience
  • How I perceive the emotion and quality from testing, or from any events/actions, might not be the same for another person
    • But it is still valid and authentic, because that is what I perceive and experience
Coming to the text "Perceived Quality", I have two questions:
  1. How do I perceive the quality as a tester?
  2. How do the business and stakeholders perceive the quality by referring to feedback from testing?
Whose opinion and perception of quality matters?

Here is my thought for now on this:
  • As a Test Engineer, I try to provide information from my testing
    • The information with a compelling and influencing story of my testing
      • The outcome of my testing
      • The potential consequences of the outcome as I perceive
  • I have my words to share about my emotion and experience
  • Any measures to be taken and the authority to change the direction in this regard are with stakeholders and the business
    • What would stakeholders and the business perceive about quality from my testing story
      • The outcome from this perception is crucial in the larger interest, beyond what I perceive as a Test Engineer
The compelling and influencing way of telling the testing story is important.  The perception the stakeholders and business form from this story is what they see as quality at first sight.

I say the phrase "perceived quality" talks about how the stakeholders and business are perceiving the quality, and how, as a Test Engineer, I'm influencing it with my advocacy.


Example:

Let us take a scenario where the Product Owner and Sales team are looking forward to data for making the decision. 

The Product Owner has a good feel about how the notification works. The notifications are received by the intended interfaces at the time of testing. 

But did it record in analytics how many taps were made on the notification?  No, it did not.  The Product Owner, together with the sales team, needs this data to plan business decisions.

Will the lack of this data lower the quality experience of the notification for the Product Owner and Sales team?  

What is the quality emotion and experience perceived here by two different teams and people?
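The gap in that scenario can be sketched as a check. This is a toy model of my own (the `Analytics` and `Notification` classes are hypothetical), showing a feature that passes the functional eye while failing the business's perception of quality:

```python
# Hypothetical sketch: delivery works, but the tap event is never recorded.
class Analytics:
    def __init__(self):
        self.events = []
    def record(self, name, **payload):
        self.events.append((name, payload))

class Notification:
    def __init__(self, analytics):
        self.analytics = analytics
        self.delivered = False
    def deliver(self):
        self.delivered = True
    def tap(self):
        # The gap illustrated above: no analytics event is recorded here,
        # e.g. self.analytics.record("notification_tap") is missing.
        pass

analytics = Analytics()
n = Notification(analytics)
n.deliver()
n.tap()
assert n.delivered              # the Product Owner's "good feel"
assert analytics.events == []   # the missing data the Sales team needs
```

Both assertions pass; the quality problem lives entirely in the second one.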




Sunday, December 26, 2021

Before Identifying and Listing My Tests

 

I read the below query in TTC's Telegram chat. The discussion had started on this thread and fellow members here were responding.  Further, I read this line and it made me look into it -- "The question was we have to use valid username and password..and perform a negative testcase".



The Default Thinking and Applying Interface

Including me, I see it is subconsciously common for us to approach visualizing a problem statement in terms of the Graphical User Interface.  When I ask why it is so, maybe it is rooted in our subconscious thinking, i.e. in the first order, second order, or any other orders of thinking.

I want to try approaching it by reminding myself of, and asking, the below questions:

  1. Is it a GUI specific problem?
  2. Is it a problem that is tied to the context of GUI?
  3. What does this question encapsulate within and open as an interface?
  4. What forms do these interfaces take when I stand out of specific interface?
  5. Should I stick to one interface to learn and attempt this problem?


Identify the Tests and Framing of Tests

We test to learn

  • Does the system do what it is supposed to do and how, why, and when?
  • When the system does not do what it is supposed to do and how, why, and when?
Should I call these Negative Tests?  That is not what I share in this post.

To me, these are tests that help me to learn when the system responds and behaves in a way other than I expected.

I can start by identifying the straight use cases for inputting an error (a human-introduced error) at a given state/data/event, then look at the behavior of the system.  It is good when we can keep identifying and ideating use cases.  

We get limited by use cases as we continue to think only in use cases.  That said, for sure we will identify and frame tests within the identified use cases.  But we need tests that help us learn when the system fails in doing what it is supposed to do.

To supplement it there is another way, which I use.  I do not say this is the only way to supplement.  I use multiple approaches to supplement and identify the tests.  When I do so, I ask the question to the system with the help of these tests and evaluate the response of the system.


Questions to Identify the Priority Tests


I learn and understand the system each time, to identify the better tests.  And, each time I learn something new about the system that I did not know.  

When I'm asked a question in the interview, I ask for details that help me to test better or to demonstrate my deliverable better.  I will watch the questions that I ask!

If I were the candidate who got this question in an interview, I would ask the below questions.  When I learn this is good enough for the initial tests, I will pause with questions.  I move to identify and frame the tests using the responses I got for the questions that I asked.  

These questions will surely help me to be precise and close to the context that better demonstrates my testing skills.  If it is not close, then there is a problem (or a difference) in my presenting and expectations in the interview.  I will have to address it with the help of the interviewer.

Questions:

  1. What is the interface where I'm entering the username and password?
    • Where is this authentication used?
    • On UI (if so which UI), or CLI, or touch interface, or what is its interface type?
    • At which layer of the system this authentication is used?
  2. What is the format of the username and password?
  3. What is used as Authorization identity on successful authentication?
    • What happens if my authentication is not successful in the UI you want me to test?
    • How do I understand that UI is communicating to me that my authentication is not successful?
  4. How is this authentication processed?
  5. Where is the authentication mapped to authorization and stored for reference?
  6. What protocol is used to communicate in authentication?
    • What protocol and communication order is used to grant and revoke authorization?
  7. Who uses this authentication and authorization?
    • To know the different means of doing the same
  8. Is there any other form of authentication that grants me the authorization?
    • Do these different entry points of authentication update my authorization?
    • Will I have different authorization data to authenticate? If yes, how are the data, states, and events maintained for my authentication and account?
  9. What's the language and Unicode supported by this system?
    • Will the languages and Unicode used in the system have any impact when I try to authorize by changing the language and Unicode?  How does the system understand these differences and maintain one state of data with authorization?
  10. Are there any computing differences for authentication and authorization on big and small endian machines?  If yes, how and for what context of the system's behavior, processing, and decision?
  11. Where and how are the authentication and authorization details processed, stored, and presented back?
    • Is there any specific reason for doing it in this particular way?
    • How have you strengthened the authentication process to grant the authorization?
      • For example, 1FA, 2FA, nFA, what else?
  12. Does any other system use your authentication to authenticate and authorize?
  13. Do you use SSO for authentication and authorization?
  14. What testability layer do I have that I can make use of to support and identify the tests?
    • Does this testability layer help me to identify more tests and also classify them?
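Several of these questions can be probed below the GUI. Here is a minimal sketch of exercising authentication at the protocol layer; the endpoint path, payload shape, and response handling are my assumptions for illustration, not a real system's API:

```python
# Hypothetical sketch: POST credentials directly, with no GUI involved.
import json
import urllib.error
import urllib.request

def authenticate(base_url, username, password):
    """Send credentials to a (hypothetical) login endpoint; return (status, body)."""
    data = json.dumps({"username": username, "password": password}).encode()
    req = urllib.request.Request(
        base_url + "/api/login",  # assumed endpoint, for illustration only
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status, resp.read().decode()
    except urllib.error.HTTPError as err:
        return err.code, err.read().decode()

# status, body = authenticate("https://example.test", "valid_user", "wrong_pass")
# The status (is it 401?) and the body (what does it reveal, and how?) start
# answering the protocol and failure-communication questions above without a UI.
```

The point is not this particular call but the interface shift: the same authentication can be questioned at whatever layer the testability of the system exposes.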

I can keep generating questions like this.  But I will have to pause and start working on what the questions offer me.  

With the help of these questions, I can learn better about the system before attempting to identify and frame the tests.  This also pulls out the risk or problem areas, if any, that look important and of priority.

I have eased my work to an extent when I know:
  • the target surface area to start my work 
  • what it takes and brings back, and how

In this context, I would have started this way!



Monday, October 5, 2020

Question on Quora: Debugging of "Login button not working"

 

I read a question on Quora which said -- "We are manual testers and we are testing a login page, but the login button is not working. How can we debug that thing?". I found the phrase "not working" and it made me attentive. Words like these make me curious and lead me to debug and learn. 


Also, I see these thoughts of mine can help someone who wants to test in such cases. The phrase -- "... but the login button is not working." opens up multiple possibilities of seeing what the person is saying, like:

  • The login functionality looks functional; but, the login button looks to be not functional?
  • How did one know about the broken login button?
  • More questions like this can cross the mind of a Test Engineer!

I have tried to put my thoughts into the below mindmap. I hope it will help someone who is looking for a start along similar context lines.
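"Not working" is too broad to debug as one thing; it helps to split it into observable layers and find where the failure begins. A sketch of that decomposition (the check names and the helper are my own, hypothetical framing):

```python
# Ordered layers at which "the login button is not working" could begin.
CHECKS = [
    ("rendered",   "Is the button present and visible in the DOM?"),
    ("enabled",    "Is it enabled, or disabled by client-side validation?"),
    ("click",      "Does the click handler fire? Any JS console errors?"),
    ("request",    "Does a login request leave the browser? (network tab)"),
    ("response",   "What status/body comes back? 4xx, 5xx, timeout?"),
    ("navigation", "Does the app act on the response: redirect, message?"),
]

def first_failing(check_results):
    """Given {check_name: bool}, return the first layer where it breaks."""
    for name, question in CHECKS:
        if not check_results.get(name, False):
            return name, question
    return None

# Example: rendering, enabling, and clicking all pass,
# but no request leaves the browser -> debug the handler/JS first.
result = first_failing({"rendered": True, "enabled": True, "click": True})
assert result[0] == "request"
```

Walking these layers in order turns "not working" into a precise statement of where it stops working, which is what debugging needs.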





Friday, November 11, 2016

IME Test Model: Briefing the "RICH VIP MUST PLUG AND HE PUTS LOCK"


On posting this, I had not covered what this mnemonic means.  In this post, I will briefly cover each factor that comprises this mnemonic. An IME app does not necessarily have to be an app which the user installs on the mobile phone; it can be an SDK bundled into the OEM build as well.  Are testing an IME app and testing the IME's SDK similar? If you wish to share your thoughts and ideas on programming and testing the same, please do share.


What is this mnemonic?
RICH VIP MUST PLUG AND HE PUTS LOCK
Before briefing it out, I want to tell -- this will help a programmer, tester, product owner, or anyone interested in building one such mobile application. It helps by being a pointer, providing an idea to categorize the development (programming + testing + segmenting the app for the market & users) of an IME. Thereby, it becomes easier to assess them technically for the context as well. I can derive ideas to test the IME and the IME's SDK technically from this test model and assist in its better development.

  1. Response 
  2. Interruption
  3. Content & Key Layouts
  4. HIG
  5. View
  6. Intent Search & Discover
  7. Permission
  8. Messages
  9. UI
  10. Settings
  11. Type of IME
  12. Platform & Specific Device
  13. Languages
  14. Users
  15. Gestures
  16. Action
  17. Notification
  18. Dictionary
  19. Help
  20. Error Handling
  21. Prediction
  22. Updates
  23. Trails
  24. Sound
  25. Locale
  26. Orientation
  27. Cursor
  28. Keys

Briefing the 28's of IME Test Model


In this section, I will convey a few lines about what I thought of each factor while identifying and picking it.  To make it clear, this cannot be treated solely as test ideas and design ideas for programming; rather, it is a pointer to start from and to influence building the ideas to test and program.

  • Response -- Anywhere in the IME app, and where it is used in conjunction with other apps, if you have a thought about how 'fast' or 'slow' the response is, then find the tests and know how to do it. Anyway, what am I going to get by learning this? Know it!
  • Interruption -- Not the small one, for sure. What is an interruption to you, and what can be part of an interruption in mobile technology? It is beyond the calls, messages, and the other common words you and I have heard. How good or bad can the IME app be on the device if this is handled well? This can be a good start after knowing what an interruption is.
  • Content & Key Layouts -- It looks simple, yet it is not the simplest, by the way. If the IME is built for global users or specific to a region, what do I need to have in the content and key layouts?  Right from visual design to interaction design to data architecture to mapping it to the mobile hardware with the selected locale, it is for sure interesting programming and testing. Think about what is in your ideas to test and automate it.
  • HIG -- What do the platform's HIG guidelines tell you about design patterns and IME design, and what do you unlearn and learn about the design you were visualizing, seeing, and wanting? Having this reference definitely helps when you want users to use the IME you are thinking about, programming, and testing.  Do you know whether your platform even allows building an IME app? Is that right!?
  • View -- It is beyond the UI, if you thought it is the UI.  For an app programmer, it can hold her or his breath if the View is not handled. Know the different kinds of Views available in the IME and how to program them to hold relevant data (re-read Content & Key Layouts) and process it as input and output.
  • Intent Search & Discover -- Mobility has evolved, and so has Artificial Intelligence with chat bots and service API integration. Has your IME got this efficiency and feature, or do you want it to? Then know how to consider the programming constraints and testing of the same. It is no longer just the app; it is within and from-outside-inside the app together with AI. Discover your tests to test it.  And what is the mobile OS version you are targeting to support and build the IME for?
  • Permission -- What are the permissions you would think about granting, as a user, for the keypad on your mobile app? Do the permissions you want to grant and those you do not want to grant for the IME app consume your device's resources (why?), slow your interaction with the task you do on the device (why?), and put you in a state of embarrassment and annoyance (why?)? Then think about the permissions for the IME you are building and testing. Wait, are you happy with the permissions you ask from the user for using this IME -- morally, technically, and ethically?
  • Messages -- What comes to your thoughts when one says 'message' in a mobile software app? Ponder over it and touch Ben Simo's FAILURE heuristic consistently when required. The message looks to be a UI part, but it is more from behind the UI to the UI.  What messages are programmed in the IME app and localized to suit the context of app usage with the device's locale? Besides the known error, warning, and info messages, what kinds of other messages do I not often get to know? Is this taken care of in the IME app?
  • UI -- Not one to master, for sure, when it comes to an app that supports different locales and languages and several hardware configurations of devices. Pattern is the key in a keypad. The UI anywhere is not all about the visible UI aesthetics; it is more about performance and speed. How is the UI of your app light and yet straight in delivering the optimized expectation? What should be part of programming such a UI? What are the tests I should be figuring out and sampling to know the UI of the IME app?
  • Settings -- How do the setting changes of the IME add to the factors that handle the device's services and intents? Besides the Settings functionality of the IME app or IME SDK, what do I have to consider in programming and testing it?
  • Type of IME -- What is your IME type, and how do you input the data? What are the different options it supports to provide input to the mobile device and apps?  How about an SDK here; forget about the IME mobile app. Now list out what should be handled in programming it and how to test the same. Is there a pattern in how the input is provided via the IME?
  • Platform & Specific Device -- How is the IME built specific to the platform? How different can the behavior of the IME be when there is behavior specific to a device and its software? How to program for such cases in handling the behavior of the IME? How to isolate, debug, and test such behavior?
  • Languages -- For a language the IME claims to support, does the device support that language? How are the language's alphabet characters rendered and shown? Does the construction of a word in the language look similar to how and when a human does it? How does the mapping of keypad keys happen for the language's alphabet characters, symbols, and other characters? UTF is the common word; but does it have to be considered when programming and testing here?
  • Users -- I want to go beyond these: User Personas, User's Device, and Accessibility. Not that I want to ignore them; I see there is much more than the above. How can I identify and capture them in my tests?  What specific support will a user get from the platform when providing input, especially via the IME? Okay, let me bring this thought here -- most say they scaled and built the technology and app/product for customer engagement; does this app have that technology segment? This is just one thought. Like this, I have much more from which I can identify and derive my tests, and see which among them are suitable to automate. Just to trigger a chained thought from the previous one, how about an A/B strategy to read the user's priorities and mindset while using the IME? Then connect it to the user-customized IME. How to test it, and how to have it in an app?
  • Gestures -- Do you have the list of gestures available and supported on your device and OS? As usual, gestures will be used by IME users. But what comes with a gesture as a cost? How does it have an impact in the device fragmentation space? How does the gesture you have set work in the case of bilingual and native keypads? Customized gestures do exist, and how will I be intimated when there is confusion in gesture recognition by the app and device? To spice up a thought here, just think -- how do gestures make a difference when the IME is used with a mobile app to input, versus when it is used to input on the mobile web? Are any new gestures available via this IME which were unavailable to date? How to approach testing gestures on different types of touch and display frameworks of the platform if I'm bundling the IME SDK with the OEM?
  • Action -- The action keys available in the keypad: how do they get localized as and when the language is chosen? Also, how does it reflect when the locale of the device is changed?  Besides the functionality of such action keys, what can I consider in programming and testing such keys? One example at the UI level is: what should be the direction of the arrows on action keys when used with a right-to-left language?  That was at the UI level; if I had to figure out how the performance of the IME is influenced by such keys, then what should I learn now?
  • Notification -- On knowing the types of notifications available on each platform and how they appear, how does an IME app handle and show notifications? Does the notification need a trigger from the app's internals to appear on user action? What is its contribution to battery drain if the notification uses wakelocks?
  • Dictionary -- It comes with size: either the user starts adding words to it, or the product comes with limited words. Size matters on a mobile phone in the space utilization aspect. If a user can add words to the dictionary, then the space utilization grows. What if the IME app supports bilingual and multilingual use? Then does the space occupied in the dictionary file differ for different languages' words? As well, the dictionary can be downloaded and updated regularly. Does this have any effect on low-end devices? What other tests and factors do I have to consider when testing the Dictionary, apart from the functionality? Did you forget the file type and format?
  • Help -- In my opinion, the help section of an IME is yet to become helpful, especially when it comes to non-English languages. For example, if the IME supports an Indian language, how does one use the keypad to type the words? Likewise, if it has to support different languages, then the pattern of using the keys in the keypad to type words in the respective language should be in the help section. This is open to discussion, considering the context and who the targeted audience of the IME app will be.
  • Error Handling -- I will use Ben Simo's FAILURE mnemonic here. I'm learning to use it in UI and non-UI modes. Coming to error handling, I understand an 'error' is when a user performs actions which are not valid as per the product, and the product has to inform the user without causing any loss to the customer in the product's context. A few devices may not support languages which are supported by the IME. While such a language is used by the user, how will this be handled by the IME app?  This is one example. Like this, figuring out the error handling in each factor mentioned here helps.
  • Prediction -- Before I talk about how to test the prediction mechanism, how do we bring the assistance of accessibility to the prediction feature in the IME app? Let us think on this. Now, the prediction in turn builds the size of the dictionary, and the predicted candidate words come from the dictionary which I have and am building on the device. That said, the dictionary and the content in it for the chosen language play a role! The IME app has to learn the user's typing and word usage style in the chosen language. That is not simple, but it is doable and testable. The prediction comes from the user's input to the dictionary; if she or he does it incorrectly, then the prediction that appears later will also be incorrect. Understanding the key:word and value:frequency in the algorithm used helps.  Insertion, deletion, transposition, alternation, and duplicates will all come into the picture as the user types and sees the prediction in candidate words/lists.  How do I program this? It has to be quick in calculation, so I see the prediction appearing and changing as I type. How do I test this? What do I have to look for in the performance factor when the dictionary size is growing with words for a language, which will contribute to the prediction?
  • Updates -- When updating the app, will the content of the app which is available now be retained or cleared? The content of the IME app such as the dictionary and languages should be untouched, unless the app wants to sanitize and clean it as well. Post update, how will these contents be used by the app?  This is one instance of thought among the several we think of while building and testing the IME app.
  • Trails -- The UI mark trails left after using keys and swiping over keys: do they introduce any problem on the device?  If it is a low-end device, do such trails of gestures and keypad usage cause CPU and GPU overload?  This is one such thought to consider in programming and testing an IME with the trails feature.  Like this, collect and build the list to consider for programming and testing this factor. What about the trails when using Indian languages and other similar languages?
  • Sound -- Build up the dictionary file size and test for the keypad sound and the other sounds that you hear when using the IME app. Do you see any difference in speed and in how the sound is heard by you? Like this, figure out what needs to be considered when programming and testing for the Sound factor with the IME app.
  • Locale -- On changing the locale on the device, how does the IME app change accordingly? What has to be considered when programming an IME app with locale? How to test it? For example, I have changed the locale to reflect a right-to-left language on the device and am toggling between the bilingual modes in the IME app. How will the IME app have to behave with the locale's context and the selected mode's context in the app? Like this, build the list to test for the Locale factor. Interestingly, this clubs very closely with most factors here and will have dependencies.
  • Orientation -- Orientation does contribute to screen overdrawing, GPU rendering, and memory consumed. Apart from this, how does the keypad look and evolve for different languages? How do the keys and the different UI areas look in the keypad with different languages, in different orientations, on different screen sizes? This is a starting area to explore.  Does any key get missed in portrait mode but not in landscape mode?
  • Cursor -- Does this remain consistent with the device standard, or will it have the IME app's own standard? How does the cursor assist here in accessibility? For example, the cursor blinks. Is there a case where the cursor stands still without blinking? How does the IME behave in that case? Does moving the cursor in the word you are typing or have typed change the prediction shown? These are a few examples to consider in programming and testing the cursor in the IME app.
  • Keys -- The visible part of the IME app to a user. Contextualizing the keys is one feature, with respect to the locale, the chosen language, and the input being entered. How does a key have to manage the space within itself to show different characters when primary and secondary are enabled? Do colors, themes, and Settings leave any influence on the keys? For example, the touch area sensitivity on keys.  Like these, what are the factors I need to consider in programming and testing the keys of the IME on different screen sizes and hardware devices?
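The Prediction factor above mentions a word-to-frequency dictionary feeding the candidate list. A minimal sketch of that idea, assuming a simple prefix match ranked by frequency (the dictionary data and function names are illustrative, not any real IME's algorithm):

```python
# Hypothetical user dictionary: word -> frequency, as the Prediction factor describes.
user_dictionary = {"test": 42, "testing": 17, "tester": 9, "model": 3}

def candidates(prefix, dictionary, limit=3):
    """Return up to `limit` candidate words for a typed prefix, most frequent first."""
    matches = [w for w in dictionary if w.startswith(prefix)]
    return sorted(matches, key=lambda w: dictionary[w], reverse=True)[:limit]

def learn(word, dictionary):
    """Typing a word raises its frequency; a mistyped word pollutes future prediction."""
    dictionary[word] = dictionary.get(word, 0) + 1

print(candidates("tes", user_dictionary))   # ['test', 'testing', 'tester']
learn("tst", user_dictionary)               # a typo becomes a dictionary entry
print(candidates("ts", user_dictionary))    # ['tst'] -- incorrect input, incorrect prediction
```

A real IME would also weigh edit distance (the insertions, deletions, and transpositions named above) and recency, and the performance question is exactly how this ranking stays fast as the dictionary grows.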


Sunday, September 25, 2016

IME Test Model - RICH VIP MUST PLUG AND HE PUTS LOCK



I was approached by my fellow Software Testing practitioners, Shristy and Suchismita, to know and have a better structure for testing the IME - Input Method Editor.  On listening to their context of current practice and what they wanted to know by testing, I learned that first they need the essential design components of today's IME.

I had to make sure that this learning is fair enough to start with, and that from here they can assist themselves. On brainstorming together for a few minutes, we learned it is good for a start to have the key integral design components of the IME. Having this, the testing can be channeled well into those areas as and how the context demands on priority.

Now we had listed down the IME design components, fairly sufficient to start. What are the tests to be done on the app under the IME design components? It depends on the context of testing.  The testers here were able to identify the tests based on the context's needs. But the challenge for them was knowing how to approach and categorize the IME app.  Now that it is addressed with this IME Test Model, a tester can quickly visualize and pick up the tests under the hood of the respective design component of the IME.





Credits go to Shristy and Suchismita for pairing up with me in framing the mnemonic of this Test Model and its categorization.  This can be referred to and used for testing Android IME and iOS IME apps.