Saturday, February 3, 2024

Performance Testing - What to Know Before Simulating User Behavior and Traffic Patterns?

 

This blog post is part of the 100 Days of Skilled Testing series.  I see that I do not have to pick every question asked in this series.  I pick and share the ones where I see I can add value.

The twelfth question from season two of 100 Days of Skilled Testing is:

What strategies do you use to simulate realistic user behavior and traffic patterns when conducting performance tests?

The question, as asked, is vague; it needs to be refined for precision before I pick it up and continue.


The Question and the Gap

I see the following missing from the question as asked:

  1. What aspect of performance is under evaluation?
  2. What is the system being evaluated for this aspect of performance?
  3. What part of the system is being evaluated for this aspect of performance?
    • Queuing? Messaging? Database I/O? Memory? Space? CPU? Client Performance? Functional Module?
  4. Who are the users?  What are their personas?
  5. How and where are the users accessing the system?
  6. What is the context of users accessing this system?
  7. What is the geo-location of the users accessing this system?
  8. How long do these users stay connected when accessing this system?
  9. Do these users differ in their roles and privileges when accessing this system?
  10. Can a user access the system through multiple interfaces?
  11. Are you assuming the user accesses this system through a web browser and mobile apps?
  12. Is the system you are referring to a software system, or some other controlled-environment system like an access door, an elevator, etc.?
  13. You are asking to simulate user behavior and traffic patterns.  Should I assume you and I know or agree on a volume of users?  And are all these users accessing the system for the same purpose?
  14. Are you considering any time, or a particular time, when talking about the traffic pattern?
  15. You say 'realistic user'.  Are there any unrealistic users accessing your system?
    • Do you see that bots and other non-human users are also allowed in your traffic?
  16. Have you evaluated this earlier in your system?
    • If yes, do you have the history and data for user behavior and traffic patterns?
    • If not, are you allowed to use, or do you have, your competitor's user behavior and traffic pattern data?
  17. What is the tech stack of your system?
    • What part of your tech stack do you want to evaluate with this user behavior and traffic pattern?
  18. What is the architecture of your system?
  19. What part of your system and its architecture is being evaluated with this user behavior and traffic pattern?
  20. Are you running this exercise for the first time?  If not, where can I refer to previous exercises?
  21. How are the interactions and events handled from start to completion?
    • What all is needed to complete the transaction in the workflow?
    • How can this transaction become invalid due to missing or incorrect data, state, and action?
  22. What are the spike, drop, saturation, expected, unexpected, and average numbers in the incoming traffic?
  23. What do you understand by traffic?  Do you mean the number of requests coming in?
    • Do you mean the I/O operations being committed?
    • Do you mean the responses received at the other end?
    • What is the definition of 'traffic' in this context?
  24. What is it that you want to study and evaluate with the user behavior and traffic pattern information gathered in this context?

Using the above questions, I will get an idea to proceed.

I will build a model from the information I collect using the questions above.  This model is then used further in testing for an aspect of performance.  The value added to the performance test depends on this model as well.  To get a better model in context, it is useful to address these gaps.  From here, I start to think further.
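As an illustration, here is a minimal sketch in Python of what such a model might begin to capture.  The personas, traffic shares, geographies, and rates below are hypothetical placeholders, not numbers from any real system; answering the questions above is what fills them in.

```python
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    traffic_share: float          # fraction of total traffic
    geo: str                      # where these users access from
    interfaces: tuple[str, ...]   # web, mobile, API, ...
    peak_rps: float               # requests/second this persona adds at peak

# Hypothetical personas; real values come from answering the questions above.
personas = [
    Persona("browsing visitor", 0.70, "IN", ("web",), 35.0),
    Persona("buying customer", 0.25, "IN", ("web", "mobile"), 12.0),
    Persona("bot / non-human", 0.05, "global", ("api",), 5.0),
]

peak = sum(p.peak_rps for p in personas)
print(f"Modelled peak load: {peak:.0f} req/s across {len(personas)} personas")
```

Even a model this small forces the questions of who the users are, where they come from, through which interfaces they arrive, and what load each group contributes.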



What do you ask and look for when building a model for User Behavior and Traffic Pattern?



Performance Testing - The Unusual Ignorance in Practice & Culture

 

I'm continuing to share my experiences and learning from the 100 Days of Skilled Testing series.  I want to keep these short, as mini blog posts.  If you see that detailed insights and conversations are needed, let us get in touch.


The ninth question from season two of 100 Days of Skilled Testing is:

What are some common mistakes you see people making while doing performance testing?  How do they avoid it?


Mistakes or Ignorance?

It is a mistake when I do an action even though I'm aware that it is not right in the context.

I do not want to label what I share in this blog post as mistakes.  Instead, I call it ignorance, whether or not the awareness and the experience are there.

The ignorance described here is not tied just to the SDLC.  It is also tied to the organization's practice and culture, which can create problems.

For this blog post's context, I categorize the ignorance into two categories -- Practitioner and Organization.

  1. Practitioner's ignorance
    • Not understanding performance, performance engineering, and performance testing
      • When performance testing is said, taking it as - "It is load testing"
      • No awareness of what performance and performance engineering are
        • Going to the tools immediately to solve the problem without knowing what the performance problem statement is
      • Be it web, API, mobile, or anything else,
        • Going to one tool or a set of tools and running tests
      • Not much thinking on how to design the tests in the performance testing being done
      • Ignoring math and statistics and their importance in performance analysis (a small sketch after this list illustrates this)
      • No idea of the system's architecture and how it works
        • Why is it the way it is?
      • Extending the idea of end-to-end into testing for performance, then having a hard time understanding and interpreting the performance data
        • How many end-to-end paths have your tests identified?
        • Can we test for performance across all these identified and unidentified end-to-end paths?
      • Relying on resources/content from the internet and applying or using them in one's context without understanding them
      • No idea of the tech stack and how to utilize the testability it offers when evaluating performance
      • Not using or asking for testability
      • Getting hung up on the 2 or 3 tools most spoken about and discussed on the internet
      • Applying tools and calling that performance testing
      • Not attempting to understand the infrastructure and resources
        • How they impact and influence the performance evaluation and its data
      • A poor idea of resource saturation
        • Thinking of it as a problem
        • Thinking of it as not a problem
      • Not working to identify where the next bottleneck will be while solving the current bottleneck
      • What to measure?
      • How to measure?
      • When to measure?
      • What to look for when measuring?
      • Not understanding the OS, Hardware resources, Tech Stacks, Libraries, Frameworks, Programming Language, CPU & Cores, Network, Orchestration, and more
      • Not knowing the tool and what it offers
        • I learn the tool every day; today, it is not the same to me as it was yesterday
          • I discover something new that I was not aware existed or was offered
          • I learn new ways of using the tool through different approaches
      • No story in the report, with information/images that are self-describing to most who read it
      • And more; but what is said above resonates with most of us
  2. Organization's ignorance
    • At the org level, first and foremost, it is ignorance of Performance Engineering
      • Ignoring the practice of performance engineering in what is built and deployed
      • Thinking and advocating that increasing the hardware resources will increase and better the performance
        • In fact, it can deteriorate over a period of time no matter how much the resources are scaled up and added; the scaling sketch after this list illustrates this
      • Ignoring performance evaluation and its presence in the CI-CD pipeline
      • Insisting that the performance tests on the CI-CD pipeline should not take beyond a few minutes
        • What is that "few minutes"?
      • Not prioritizing the importance of having the requirements for Performance Engineering
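
To make the math-and-statistics point concrete, here is a small sketch with made-up latency samples; a single slow outlier drags the mean upward while the percentiles keep telling the real story:

```python
import statistics

# Made-up response times in milliseconds; one slow outlier among them.
samples = [120, 130, 125, 118, 122, 135, 128, 2400, 127, 131]

mean = statistics.fmean(samples)
cuts = statistics.quantiles(samples, n=100)  # 99 cut points: cuts[49] is p50
p50, p95 = cuts[49], cuts[94]

print(f"mean={mean:.0f} ms, p50={p50:.0f} ms, p95={p95:.0f} ms")
# The mean (~354 ms) hides that 9 of 10 users saw ~125 ms responses.
```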

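And for the hardware-scaling point, a toy model based on Gunther's Universal Scalability Law (with made-up contention and coherency coefficients, not measured ones) shows how throughput can peak and then fall even as nodes keep being added:

```python
# Toy model: Universal Scalability Law with hypothetical coefficients.
def relative_throughput(n, contention=0.05, coherency=0.01):
    return n / (1 + contention * (n - 1) + coherency * n * (n - 1))

for n in (1, 4, 8, 16, 32):
    print(f"{n:>2} nodes -> {relative_throughput(n):.2f}x baseline throughput")
```

With these coefficients, throughput climbs to about 4.2x at 8 nodes and then falls back toward 2.6x at 32 nodes; scaling up hardware without engineering the contention away does not better the performance.
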
Recently, I was asked a question - How to evaluate the login performance of a mobile app using a tool "x"?

In another case, I saw a controller holding all the HTTP requests made while using a web browser, and these requests being run to learn the numbers using a tool.


I do not say this is the wrong way of doing it.  It is a start.

But we should NOT stay here, thinking this is performance engineering and that this is how to run tests for learning a performance aspect[s].
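
As an illustration of one step beyond raw request replay, here is a minimal sketch using Locust as one example tool; the endpoints, task weights, and think times are hypothetical, not from any real system:

```python
from locust import HttpUser, task, between

class ShopperUser(HttpUser):
    # Real users pause between actions; a raw request replay does not.
    wait_time = between(2, 8)

    @task(3)  # browsing happens roughly 3x as often as checking the cart
    def browse_catalog(self):
        self.client.get("/products")  # hypothetical endpoint

    @task(1)
    def view_cart(self):
        self.client.get("/cart")  # hypothetical endpoint
```

Run through the locust command line against a target host, this drives a mix of flows with think time rather than a flat replay of recorded requests; it is still only a sketch of a user, not a model of your users.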


To end: performance is not just - how [why, when, what, where] fast or slow?  If that is your definition, you are not wrong!  That is a start, and good for a start; but do not stick to it alone and call it performance.  Performance is capability.  It is about getting what I want in the way I have been promised and expect; this is contextual, subjective, and relative.  The capability leads to an experience.  What is the experience that is experienced?

Sometimes, serving the requests at what you call slow is performance.  What is slow, here?

The words fast and slow are subjective, contextual, and relative.  They are one small part of performance engineering.

That said, let me know: what have you been ignoring or unaware of in the practice of Performance Engineering & Testing?