
Saturday, December 21, 2024

Does Adding Cores Improve the System's Performance?


Hardware and Programming Language

If I have no understanding of the system's architecture, of how the system is designed and implemented technically, and of its business purpose, then I cannot test for performance rationally and technically.

In this context, awareness and understanding of the following is necessary and important.

  1. The infrastructure where the system and its services are deployed and consumed
  2. The hardware on which the system is deployed
    • An understanding of the hardware along with its limitations
  3. The hardware on which the service of the deployed system is consumed
    • The limitations of the consumer's hardware
  4. Understanding the CPU and its cores
    • How are we programming the threads to execute on these cores?
  5. The programming language used to implement the system; this plays a vital role
    • Does it allow threads to run on two or more different cores at a time?
    • Or does it confine the threads to run on one core?
  6. The way in which we have programmed the system for its instruction execution at a thread (subroutine) level
    • How are the threads implemented, and how do they execute on a CPU and its cores?
I should be aware of the above information when building a high performant system and when testing the performance of a system.
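
As a small aid to point 4 above, here is a minimal Python sketch (assuming CPython on Linux; os.sched_getaffinity is not available on every platform) to check how many cores the machine has and how many of them the current process is actually allowed to run on:

    import os

    # Logical cores visible on this machine.
    print("Logical cores on this machine:", os.cpu_count())

    # The set of core IDs this process may be scheduled on (Linux only).
    # A container or an orchestrator may pin the process to fewer cores
    # than the hardware actually has.
    print("Cores usable by this process:", os.sched_getaffinity(0))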

The value and benefits of this information are not limited to performance testing alone; it also helps in testing for security and functionality.  The awareness of this information will give you an edge when testing for performance.



CPU Cores and Software System's Performance


I hear this often, especially during the business season:
"We will add more CPUs with more cores. This will improve the execution time and improve the performance.  The business will not be impacted."

Does this make sense technically?  What is your thought on the above statement, as a Test Engineer testing the system for different quality criteria?

If I do not have awareness of the information I listed above, I will not be in a position to test and advocate better for performance.


Do Added Cores Reduce the Execution Time?

Just adding more cores to a CPU does not always speed up a program's execution time.

I learn that if a program designed to run on multiple cores has threads that must run on one core, then this serial portion limits the maximum speedup [by reducing execution time] that we can achieve by adding more cores.
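
This limit is formalized by what is known as Amdahl's law (the name is mine, not from the point above): if a fraction p of a program can run in parallel and the rest must stay serial, the maximum speedup on n cores is

    speedup(n) = 1 / ((1 - p) + p / n)

As a worked example, if half of the program is serial (p = 0.5), then even with unlimited cores the speedup can never exceed 1 / (1 - 0.5) = 2x.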

Also, the programming language used has a role.  If the programming language uses a Global Interpreter Lock (GIL), then it ensures that a process created from it can run only one instruction at a time, regardless of the cores it currently has access to.  That is, though the process has access to multiple cores at a given time, the instructions will be running on just one core.  This means the threads cannot run on multiple cores in parallel; an instruction will be running on just one core and just one thread at any point in time.  Eventually, this leads to a higher execution time for the program's process.
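
Here is a minimal sketch of the GIL effect described above, assuming CPython (the reference Python implementation); the count_down function is an illustrative CPU-bound workload, not from any real system.  Two threads take roughly as long as running the work twice serially, because only one thread can execute Python bytecode at a time:

    import time
    from threading import Thread

    def count_down(n):
        # Pure CPU-bound work; it never releases the GIL by waiting on I/O.
        while n > 0:
            n -= 1

    N = 20_000_000

    # Run the work twice, one call after the other.
    start = time.perf_counter()
    count_down(N)
    count_down(N)
    print(f"Serial:   {time.perf_counter() - start:.2f}s")

    # Run the same work in two threads; under the GIL this is not faster.
    start = time.perf_counter()
    t1 = Thread(target=count_down, args=(N,))
    t2 = Thread(target=count_down, args=(N,))
    t1.start(); t2.start()
    t1.join(); t2.join()
    print(f"Threaded: {time.perf_counter() - start:.2f}s")

Replacing the threads with multiprocessing.Process would show a real speedup on multiple cores, since each process gets its own interpreter and its own GIL.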

Should I call a high execution time high performance and a high performant system?  That is contextual!  What do you say?  What should a consumer of the service (the business) say about this performance?


To summarize, adding multiple cores to a CPU does not necessarily speed up the execution time of a program.  It depends on how we have designed and written the program, and on the programming language used.


To know more about GIL

  • Global Interpreter Lock -- https://en.wikipedia.org/wiki/Global_interpreter_lock
  • https://langdev.stackexchange.com/questions/1873/what-is-a-global-interpreter-lock-and-why-would-an-interpreter-have-it

Sunday, March 3, 2024

Performance Test Report: Between the Effective and Ineffective Reports

 

In this post, I'm picking the thirteenth question from season two of 100 Days of Skilled Testing.

What is an effective way of reporting performance test results and mention some tools you have used in test execution, analysis, and reporting?

I see two questions here, and I see it is not wise to treat these two questions combined as one.  In my opinion, the second question dilutes the whole and makes it vague.


What Should a Report Do?

A report should be contextual, compelling, influential, and targeted at the intended audience so they can act upon it when making a decision.  The software testing report is no exception.

The performance testing report should know
  • Who are its audiences?
  • How do they read, relate to, and understand the information?
If this is ignored, the report will not serve the purpose of the commissioned testing.  The effectiveness of a report cannot be determined solely by how the stakeholders respond to it.

On understanding the risks and problems in the system's current capability mentioned in the report, a stakeholder might not respond with an action to tune the performance aspects.  This could be for multiple factors, including business ones.

Note that a skilled, problem-solving engineer understands the business and what drives it.  Being technically skilled alone will not help an engineer grow in the long run in her or his career.  The system's performance tuning decisions will, most of the time, be driven by business.

Did the report persuade the stakeholders with awareness, mutual understanding, and agreement?  The report should drive this conversation.  If not, we have a problem.

On reading the performance testing report, do the stakeholders get informed awareness of what happened during testing and of the present capability of the system against its performance criteria?  Do the stakeholders understand and mutually acknowledge how it benefits and costs the business?

This is the foremost value expected from a testing report.  If it is not served, I look at how the data and the story are presented in the report.

The bottom line is: did we mutually acknowledge, agree on, and understand the current capability and its consequences?  If not, the basic purpose of the report is not met.


Performance Testing Report

Software testing is a highly technical activity.  Whether or not you agree with this, it is the reality.

Testing for performance is a technical investigation activity. It includes an orchestrated study of the correlations between
  • hardware, operating system, network, tech stacks and software used in the SDLC, architecture, designs, certain decisions, people, business, and you, the test engineer.

A fundamental, in-depth awareness and knowledge of these areas is essential and a necessity for analyzing the performance aspects.  The performance testing report will show this trait of yours as a test engineer.

We have stakeholders who work in technical areas and in non-technical areas.  How do we compile an effective performance testing report?

There is no one way or defined way of writing an effective performance testing report.  Figure out what works in the context of your testing to have an effective report, knowing: What Should a Report Do?


Outline of a Persuasive Performance Test Report


It is technical storytelling in a non-technical way, with data, pictures, comparisons that relate, metaphors, and contemporary history.  I compose the performance testing reports in line with the business targets and objectives set.  I provide metaphors to relate to the value and the cost.

At times, I will have two reports, and I share them with the respective stakeholders.
  • One with non-technical summary and conclusion
  • The other with technical details, analysis and data from investigation
Sometimes, I include the above two reports in one report based on the context.

Overall, this is the minimum I include in my performance testing report to start with.
  1. What part of the system is being tested?
  2. Why that part of the system is being tested?
  3. Mentioning the vague performance requirements gathered from stakeholders.
  4. Refining the performance requirements and making them precise: specific, contextual, and deterministic.
  5. Who are the stakeholders of this report?
    • Which sections should the respective stakeholders refer to for the analysis and outcome?
  6. The problem statement for this performance testing
  7. Brief summary of performance testing outcome. [TLDR]
    1. What aspect of system's performance is evaluated and why?
    2. Brief summary of performance test carried out and outcome.
  8. Detailed Report with Technical Details
    1. Analysis and Technical Investigation
    2. Representation of data which is analyzed
    3. Identification of bottlenecks, risks, problems and its symptoms
    4. Summary of the test's outcome

You don't have to stick to one format or template.  Figure out what works well in your case so that the intent of your tests and their outcome is understood by the stakeholders.  Give a structure to your report!

The performance testing reports will have metrics, graphs, numbers, and proposals.  The presence of metrics, graphs, numbers, and the rest does not by itself make the report effective.  Then what makes it effective?  When do you call it effective?  When do you accept that it is not effective?  Only you can figure this out for your context.  I can assist you here; pull me in.

There is no good [effective] report or not good [ineffective] report.  The report is either
  • From a team with skills and experience, trained and practicing
  • From a team which is not trained and not practicing


Sunday, November 19, 2023

Waterfall or Agile: Testing for Performance - Where to Start?

 

Do you understand Agile?  I have shared my understanding here; give it a read.

The eighth question from season two of 100 Days of Skilled Testing is:

Can you share some best practices for conducting performance tests within an Agile development environment?


Best Practices and the Agile


The irony is, Agile says there is no best practice.  It asks us to tailor and fit the practice to the context, so that continuous delivery happens and value is delivered consistently while upholding the Agile principles.

Yet we talk about best practices in the Agile context, like the eighth question asked here.

What is the effective way to test in continuous delivery?

As a test engineer, how can I start thinking about and testing for performance from the inception of a feature's thought?  I see it is not hard to do so.  As you read further in this post, you will have a perspective and the awareness to do it.


Performance in Waterfall and Agile

I learn that performance is an experience.  It does not differ because of Waterfall or Agile.  If the performance is not a pleasing experience, it will impact stakeholders, no matter whether it is Waterfall or Agile.

But the question when evaluating the performance is: where to start, when to start, how to start, and with what to start?

As of today, I do not see differences in the mindset and skills one has to have for testing performance in Waterfall and in Agile.  The approach could differ in certain phases; otherwise, I see the same in both practices.

I will rephrase the eighth question to this:
What is your practice to evaluate the performance right from the start of product development in your project?
I do not want to wait until I hear: the development is completed and deployed; now we can start running the performance tests.

What can I do as part of performance tests from the first day of development and the first commit?  This is my intent and the area I look at when strategizing the testing and the tests.



The Culture of Engineering

At the start and at the end of the day, when we developers start and finish the work,

  • How the work is done, and why, is defined by the engineering culture practiced by that organization.
    • The performance engineering of the software products and solutions being built will be driven by the culture practiced.

The test engineering, and how we test and automate, will be driven by the culture of engineering practiced in the organization.

Writing code not just for building the functionality, but also for performance, is a culture-driven factor.  The organization's culture of engineering practice drives it!



Testing for Performance - Where to Start?


I'm sharing the research work that I'm doing and experimenting with on performance engineering and performance tests.  I'm seeing the results and the value out of it, and so are the stakeholders.

Today, we are getting skilled at exploring and testing without a requirement document and SLAs in hand.  Aren't we?  Haven't you?

I use my MVPT to figure out the minimum performance tests for the feature.  As part of this, I will explore with the help of available aids to evaluate the performance.

To start, I will use these questions to figure out the performance tests:
  • What are the minimum viable questioning performance tests that you have got to test this feature?
  • What are the minimum viable questioning performance tests that you have got to test this workflow?


Unit Tests for Time and Space Complexity


I will work closely with the programmers to gather information on the below when the code for the feature is committed, as part of the Unit Tests.
  • The execution time taken by the code of that feature - the Big O notation for space and time complexity
    • Usually the Unit Tests focus on functional tests and clean code practice
    • But when we, the test team, ask and push for performance data, this can come as part of the Unit Tests
      • An architect or a principal engineer can set an expectation on
        • What should be the time and space complexity of the code for a feature?
          • Each function and block needs to be evaluated on this
          • As said earlier, this depends on the engineering practice culture of an organization
            • If the culture wants it, it will be there; else just the functional code will be delivered, and not the performance code
      • If the time and space complexity analysis outcome is not as expected, the code written has to be rethought and refactored
        • The review process needs to push it back
        • The comment, with data, has to be published
          • This will be useful for modeling the performance tests by the test engineers who will be working on performance tests
      • Doesn't it look like an effective, useful practice as part of Performance Engineering right in the early stage?  A sketch of such a unit test follows this list.
        • This is very well applicable to projects running Agile or Waterfall
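
As one possible shape of this, here is a minimal sketch of such a unit test, assuming pytest; the dedupe_records function and the thresholds are illustrative stand-ins, not from any real project.  It checks that doubling the input size does not blow up the execution time, which is a cheap guard against an accidental O(n^2) regression:

    import time

    def dedupe_records(records):
        # Stand-in for the feature's code under test: O(n) de-duplication.
        return list(dict.fromkeys(records))

    def measure(n):
        data = list(range(n)) * 2
        start = time.perf_counter()
        dedupe_records(data)
        return time.perf_counter() - start

    def test_dedupe_scales_roughly_linearly():
        # If the implementation regressed to O(n^2), doubling n would roughly
        # quadruple the time; the generous bound keeps the test stable on CI.
        t1 = measure(100_000)
        t2 = measure(200_000)
        assert t2 / t1 < 3.0, f"doubling input grew time {t2 / t1:.1f}x; check complexity"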

Do you have this in your project and in the Unit Tests you write?

The time and space complexity questions should not be confined to the SDET's [test engineer's] interview alone.  A test engineer has to ask for it and apply it in her or his day-to-day work.


Profiling Tests by Test Engineers


We testers usually do not get into the product's code analysis.  We have to build the skill to run profiling on the product's code and to analyze the resource data.
  • Test Engineers can test the feature's code with the help of IDE's profiling (runtime analysis) and collect the performance data by identifying the performance bottlenecks
    • This runtime analysis can profile for
      • Memory snapshots
      • Thread analysis
      • Monitoring resources
      • CPU and allocation profiling
      • And, more
      • The problems and risks can be reported upon analysis
    • Compare the performance data of two different solution approaches
This information will tell and indicate where the risk and the problem are when we deploy the code.  In my opinion, this is useful information for modeling further performance tests.  It is first-hand information, which is very powerful, before we start using any other performance testing strategies and tools to aid the tests.
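
As an illustration, profiling does not even need the IDE; here is a minimal sketch with Python's built-in cProfile and pstats modules, comparing two hypothetical solution approaches (the join_* functions are illustrative, not from any product):

    import cProfile
    import io
    import pstats

    def join_with_concat(parts):
        # Approach 1: repeated string concatenation.
        s = ""
        for p in parts:
            s += p
        return s

    def join_with_join(parts):
        # Approach 2: a single join pass.
        return "".join(parts)

    parts = [str(i) for i in range(200_000)]

    for func in (join_with_concat, join_with_join):
        profiler = cProfile.Profile()
        profiler.enable()
        func(parts)
        profiler.disable()
        out = io.StringIO()
        pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(3)
        print(func.__name__)
        print(out.getvalue())

Comparing the two profiles gives exactly the kind of first-hand data described above, before any dedicated performance testing tool enters the picture.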



Get Started with Performance Engineering and Tests


These capabilities are available in the IDE.  We think of performance testing tools and ask how to test for performance.  To be precise, we test developers (test engineers) should change our mindset and shift left first.  If not, as I say, we will be the first bottleneck to ourselves.  Did you know about this way of testing for performance?  Why not introduce it in your project and organization?

Seen this way, these test practices can be used right from the day we commit the feature's code.  This is a place to start for the performance tests.  Together with the MVPT, this is a differentiator, and it guides the MVPT in designing effective performance tests for the context.

I do not say these are best practices, and there are no best practices.  But this is a useful practice when the organization and the stakeholders ask for it.  Let your organization and stakeholders know how well you can test for performance right from the first commit of the product's code.

To stop and end here,
  1. Do not just test for functionality from day one; also test for the performance from day one.
  2. Influence your organization's engineering culture and developers to develop not just functional code, but also performance code.




Is Performance a Perception to an Engineer and User?

 

When hearing about performance from customers and from engineers on the team, I see that each has their own perception of it from using a product.  To one it was fast enough, to another it was as usual, and to one it was too slow.  Each is expressing a perception.  But what is the performance?


What is Performance?


I see that understanding and knowing "What is performance?" is important for everyone involved in building the software system and product.  In my opinion, this should be the starting point.  It is beneficial when everyone has a shared, common understanding of it as a team and as a business.

Performance is an emotion to a user!  Technically, the performance has multiple facets to it for understanding the capability of a system and its sub-systems.


Facets of the Performance


What facet do you consider and call out as Performance?  This is another point on which most of us do not align.  For example, below are the different facets that we usually hear and read about a lot:
  • Heap and Memory
    • Threads
      • Not supporting concurrency
      • Not handled well under concurrency
      • Holding up other processes and causing bottleneck cases
    • Memory Leaks
    • Concurrency
    • Memory not reclaimed
    • and, more ....
  • CPU consumption
    • Open connections and their I/O
      • Held up in processing requests
    • Not enough resources to process
    • and, more ....
  • DB I/O
    • Open connections
    • Unindexed data
    • SPs and Queries holding the transactions
    • Incorrect configurations of DB Server and nodes
    • and, more ...
  • Disk I/O
    • Running out of space
    • RW I/O not responsive
  • Network consumption
    • Unmanageable transactions
    • Transaction's data, size and time
  • Latency
    • Latency in which interfaces?
    • Ineffective caching, queuing and messaging
  • GUI not rendered or painted
    • GUI exhibiting jank behavior
    • GUI partially rendered
    • GUI not in an interactive state
    • GUI not responding to an action
    • GUI and UI loading multiple times with no room for interaction 
  • Terminal yet to return and show the prompt
  • Older and deprecated libraries with latest dependencies
  • Server, orchestration, and sub-systems configured incorrectly
  • Hardware resources and its specifications used in a context
  • Display refresh rate and frames lost while the GUI is rendered
  • Heat dissipation
    • From the hardware
    • Experienced in the environment
  • Time taken for a request to reach the actual end point
  • Execution time on receiving a request
  • And, more...
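
To make one of these facets concrete, here is a minimal sketch, using Python's built-in tracemalloc module, of how memory growth (for example, a suspected leak) can be observed; the cache list simulating never-reclaimed memory is illustrative:

    import tracemalloc

    tracemalloc.start()
    before = tracemalloc.take_snapshot()

    cache = []
    for i in range(100_000):
        cache.append("payload-%d" % i)  # simulated memory that is never reclaimed

    after = tracemalloc.take_snapshot()
    # Show the top three source lines where memory grew between the snapshots.
    for stat in after.compare_to(before, "lineno")[:3]:
        print(stat)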

Further, we have classified performance into frontend and backend; both are important and equally needed.  A webpage has its own KPIs and metrics to determine where its performance stands.  Likewise for the backend.

Which facet of performance needs to be evaluated, and in which phase?  Why?  The perception gets established while testing, by how we test it, and by how one uses the product.

With all these facets of performance, where to start and what to look at?  This is one of the questions we are left with, in Waterfall and in Agile.  That is where the eighth question of 100 Days of Skilled Testing comes in: Can you share some best practices for conducting performance tests within an Agile development environment?

Have you asked these questions of yourself and your team?
  1. What is performance to you and to your stakeholders?
  2. What should I consider when evaluating the performance of the software system?
  3. What practice do I want to pick to evaluate the performance?
  4. Where do I start, and how?