
Saturday, December 21, 2024

Does Adding Cores Improve the System's Performance?


Hardware and Programming Language

If I have no understanding of the system's architecture, of how the system is designed and implemented technically, and of its business purpose, then I cannot test for performance rationally and technically.

In this context, awareness and understanding of the following are necessary and important.

  1. The infrastructure where the system and its services are deployed and consumed
  2. The hardware on which the system is deployed
    • The understanding of the hardware along with its limitation
  3. The hardware on which the service of the deployed system is consumed
    • The limitation of consumer's hardware
  4. Understanding the CPU and its Cores
    • How are we programming the threads to execute on these cores?
  5. The programming language used to implement the system; this plays a vital role
    • Does it allow threads to run on two or more different cores at a time?
    • Or does it confine the threads to run on one core?
  6. The way in which we have programmed the system for its instruction execution at a thread (subroutine) level
    • How are the threads implemented, and how do they execute on a CPU and its cores?
I should be aware of the above information when building a high-performing system and when testing a system's performance.

The value and benefits of this information are not limited to performance testing alone; it also helps in testing for security and functionality.  Awareness of this information will give you an edge when testing for performance.
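As a small, hedged illustration of the first few points above, the sketch below (plain Python, standard-library modules only) records basic facts about the CPU, operating system, and language runtime of the machine a test runs on.  The particular fields captured here are my own assumption of a minimal set, not a prescribed checklist.

# A small sketch to capture the hardware and runtime facts discussed above.
# The fields recorded here are illustrative; record what matters in your context.
import os
import platform
import sys

def describe_runtime() -> dict:
    """Collect basic facts about the CPU, OS and Python runtime."""
    return {
        "logical_cores": os.cpu_count(),          # how many cores the OS exposes
        "machine": platform.machine(),            # e.g. x86_64, arm64
        "os": platform.system(),                  # e.g. Linux, Darwin, Windows
        "python_implementation": platform.python_implementation(),  # CPython, PyPy, ...
        "python_version": sys.version.split()[0],
    }

if __name__ == "__main__":
    for key, value in describe_runtime().items():
        print(f"{key}: {value}")

Knowing the logical core count and the language implementation already tells you something about how threads will behave on that machine, which the next section builds on.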



CPU Cores and Software System's Performance


I hear this often, especially during the business seasons:
"We will add more CPUs with more cores. This will improve the execution time and improve the performance.  The business will not be impacted."

Does this make sense technically?  What is your thought on the above statement as a Test Engineer testing the system for different quality criteria?

If I do not have awareness of the information I listed above, I will not be in a position to test and advocate better for performance.


Do Added Cores Reduce the Execution Time?

Just adding more cores to a CPU does not always speed up a program's execution time.

I learn that if a program designed to run on multiple cores has threads that must run on one core, then this limits the maximum speed-up [reduction in execution time] we can achieve by adding more cores.
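This limit is usually described by Amdahl's law: if some fraction of the work must stay on one core, the achievable speed-up flattens out no matter how many cores are added.  A minimal sketch, using a made-up serial fraction of 30% purely for illustration:

# Amdahl's law: speedup = 1 / (serial_fraction + (1 - serial_fraction) / cores)
# The 0.3 serial fraction below is an invented illustration, not a measured value.
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for cores in (1, 2, 4, 8, 16, 64):
    print(cores, "cores ->", round(amdahl_speedup(0.3, cores), 2), "x speedup")
# With 30% of the work stuck on one core, even 64 cores give roughly a 3.2x speedup.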

The programming language used also plays a role.  If the language runtime uses a Global Interpreter Lock (GIL), it ensures that a process created from it can execute only one instruction at a time, regardless of the cores available to it.  That is, even though the process has access to multiple cores at a given time, its instructions run on just one core at a time; the threads cannot run on multiple cores simultaneously.  Only one thread executes at any point in time.  Eventually, this leads to a higher execution time for the program's process.
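A minimal sketch of this behavior, assuming CPython (the standard Python interpreter, which has a GIL): the same CPU-bound work is run once with a thread pool and once with a process pool.  On a multi-core machine the process pool typically finishes much faster, because each worker process gets its own interpreter and its own GIL.

# A rough sketch of the effect described above, using CPython (which has a GIL).
# Timings will vary by machine; the point is the comparison, not the numbers.
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def cpu_bound(n: int) -> int:
    return sum(i * i for i in range(n))

def run(executor_cls, label: str) -> None:
    start = time.perf_counter()
    with executor_cls(max_workers=4) as pool:
        list(pool.map(cpu_bound, [5_000_000] * 4))
    print(f"{label}: {time.perf_counter() - start:.2f}s")

if __name__ == "__main__":
    run(ThreadPoolExecutor, "threads (share one GIL, mostly one core)")
    run(ProcessPoolExecutor, "processes (one GIL each, can use many cores)")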

Should I say a high execution time means high performance and a high-performing system?  That is contextual!  What do you say?  What should a consumer of the service (the business) say about this performance?


To summarize, adding multiple cores to a CPU does not necessarily speed up the execution time of a program.  It depends on how we have designed and written the program, and on the programming language used.


To know more about GIL

  • Global Interpreter Lock -- https://en.wikipedia.org/wiki/Global_interpreter_lock
  • https://langdev.stackexchange.com/questions/1873/what-is-a-global-interpreter-lock-and-why-would-an-interpreter-have-it

Sunday, March 3, 2024

Performance Test Report: Between the Effective and Ineffective Reports

 

In this post, I'm picking the thirteenth question from season two of 100 Days of Skilled Testing.

What is an effective way of reporting performance test results and mention some tools you have used in test execution, analysis, and reporting?

I see two questions here.  I do not think it is wise to treat these two questions combined as one.  In my opinion, the second question dilutes the first and makes the whole question vague.


What Should a Report Do?

The report should be contextual, compelling, influencing, and targeted at the intended audience so that they can act upon it and make a decision.  The software testing report is not an exception to this.

The performance testing report should know
  • Who are its audiences?
  • How do they read, relate to, and understand the information?
If this is ignored, the report will not serve the purpose of the commissioned testing.  The effectiveness of a report cannot be determined solely by how the stakeholders respond to it.

On understanding the risks and problems in the system's current capability mentioned in the report, the stakeholder might not respond with an action to tune the performance aspects.  This could be for multiple reasons, including business ones.

Note that a skilled, problem-solving engineer understands the business and what drives it.  Just being technically skilled will not help an engineer grow in the longer run of her or his career.  The system's performance tuning decisions will, most times, be driven by business.

Did the report persuade the stakeholders towards awareness, mutual understanding, and agreement?  The report should drive this conversation.  If not, we have a problem.

On reading the performance testing report, do the stakeholders get informed awareness of what happened during testing and of the present capability of the system against its performance criteria?  Do the stakeholders understand and mutually acknowledge how it benefits and costs the business?

This is the foremost value expected from a testing report.  If it is not served, I look at how the data and the story are presented in the report.

The bottom line is, did we mutually acknowledge, agree on, and understand the current capability and its consequences?  If not, the basic purpose of the report is not met.


Performance Testing Report

Software testing is a highly technical activity.  Whether you agree with this or not, it is the reality.

Testing for performance is a technical investigation activity.  It includes an orchestrated, correlated study of
  • hardware, operating system, network, tech stacks & software used in the SDLC, architecture, designs, certain decisions, people, business, and you - the test engineer.

A fundamental, in-depth awareness and knowledge of these areas is essential and necessary to analyze a performance aspect.  The performance testing report will show this trait of you as a test engineer.

We have stakeholders who work in technical areas and in non-technical areas.  How do we compile an effective performance testing report?

There is no single, defined way of writing an effective performance testing report.  Figure out what works in the context of your testing to produce an effective report, keeping in mind: What Should a Report Do?


Outline of Persuading Performance Test Report


It is technical storytelling in a non-technical way, with data, pictures, relatable comparisons, metaphors, and contemporary history.  I compose performance testing reports in line with the business targets and objectives set.  I provide metaphors to relate to and understand the value and cost.

At times, I will have two reports.  I share them with the respective stakeholders.
  • One with a non-technical summary and conclusion
  • The other with technical details, analysis, and data from the investigation
Sometimes, based on the context, I combine the two into a single report.

Overall, the items below are the minimum I include in my performance testing report to start with.
  1. What part of the system is being tested?
  2. Why that part of the system is being tested?
  3. Mentioning the vague performance requirements gathered from stakeholders.
  4. Refining the performance requirements to be precise, specific, contextual, and deterministic.
  5. Who are the stakeholders of this report?
    • Which sections should the respective stakeholders refer to for the analysis and outcome?
  6. Problem statement of the performance testing
  7. Brief summary of performance testing outcome. [TLDR]
    1. What aspect of system's performance is evaluated and why?
    2. Brief summary of performance test carried out and outcome.
  8. Detailed Report with Technical Details
    1. Analysis and Technical Investigation
    2. Representation of data which is analyzed
    3. Identification of bottlenecks, risks, problems and its symptoms
    4. Summary of the test's outcome

You don't have to stick to one format or template.  Figure out what works well in your case so that the intent of your tests and their outcome is understood by the stakeholders.  Give a structure to your report!

The performance testing reports will have metrics, graphs, numbers, and proposals.  The mere presence of metrics, graphs, numbers, and the rest does not make the report effective.  Then what makes it effective?  When do you call it effective?  When do you accept that it is not effective?  Only you can figure that out for your context.  I can assist you here; pull me in.

There is no good [effective] report or not-good [ineffective] report.  The report is either
  • From a team that is skilled, experienced, trained, and practicing
  • Or from a team that is not trained and not practicing


Saturday, February 3, 2024

Performance Testing - What to Know Before User Behavior and Traffic Pattern?

 

This blog post is part of the 100 Days of Skilled Testing series.  I see that I do not have to pick every question asked in this series.  I pick and share the ones where I see I can add value.

The twelfth question from the season two of 100 Days of Skilled Testing, is:

What strategies do you use to simulate realistic user behavior and traffic patterns when conducting performance tests?

The twelfth question, as asked, is vague and needs to be refined for preciseness before picking it up and continuing.


The Question and the Gap

I see the following are missing from the question asked above:

  1. What aspect of performance is under evaluation?
  2. What is the system that is being evaluated for a performance's aspect?
  3. What part of the system is being evaluated for a performance's aspect?
    • Queuing? Messaging? Database I/O? Memory? Space? CPU? Client Performance? Functional Module?
  4. Who are the users?  What are their personas?
  5. How and where are the users accessing the system?
  6. What is the context of users accessing this system?
  7. What is the geo location of users who are accessing this system?
  8. How long are these users connected while accessing this system?
  9. Are there any differences among these users in their roles and privileges in accessing this system?
  10. Can the user access system through multiple interfaces?
  11. Are you assuming the user is on web browser and mobile apps to access this system?
  12. Is the system you are referring to a software system?  Or is it another kind of controlled-environment system, like access doors, elevators, etc.?
  13. You are asking to simulate the user behavior and traffic pattern.  Should I assume you and I know or agree on some volume of users?  And that all these users are there for the same purpose when accessing the system?
  14. Are you considering any particular time when talking about the traffic pattern?
  15. Are there any unrealistic users accessing your system?  You say 'realistic user'.
    • Do you see that bots and non-human users are also allowed in your traffic?
  16. Have you evaluated this earlier in your system?
    • If yes, do you have the history and data for user behavior and traffic pattern?
    • If you don't have it, are you allowed to use, or do you have, your competitor's user behavior and traffic pattern data?
  17. What is the tech stack of your system?
    • What part of your tech stack do you want to evaluate with this user behavior and traffic pattern?
  18. What is the architecture of your system?
  19. What part of your system and its architecture is being evaluated with this user behavior and traffic pattern?
  20. Are you running this exercise for the first time?  If not, where can I refer to the previous exercises?
  21. How are the interactions and events handled from start to completion?
    • What all is needed to complete the transaction in the workflow?
    • How can this transaction become invalid due to missing or incorrect data, state, or action?
  22. What is spike, drop, saturation, expected, unexpected, and average numbers in the traffic coming in?
  23. What do you understand by traffic?  Do you mean the number of requests coming in?
    • Do you mean the I/O operations being committed?
    • Do you mean the response received at the other end?
    • What is the definition of 'traffic' in this context?
  24. What is it that you want to study and evaluate using the User Behavior and Traffic Pattern information gathered in this context?

Using the above questions, I will get an idea to proceed.

I will build a model from the information I collect using the questions asked above.  This model will be used further in testing for a performance aspect.  The value added by the performance test depends on this model as well.  To get a better model in context, it is useful to address the gaps.  From here, I start to think further.
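As an illustration only, the sketch below shows one way the answers to those questions could be captured as a small model.  The persona names, shares, and numbers are hypothetical; the point is that the model is data you can review with stakeholders and then feed into whichever load tool you use.

# A hypothetical, minimal way to capture answers to the questions above as a model.
# Personas, rates and hours below are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Persona:
    name: str            # who the user is (question 4)
    interface: str       # how they access the system (questions 5, 10, 11)
    region: str          # where they are (question 7)
    share: float         # fraction of total traffic this persona contributes

@dataclass
class TrafficPattern:
    peak_requests_per_min: int      # spike / expected numbers (question 22)
    average_requests_per_min: int
    peak_hours: tuple               # when the spike happens (question 14)

personas = [
    Persona("shopper_mobile", "native app", "IN", 0.6),
    Persona("shopper_web", "web browser", "IN", 0.3),
    Persona("partner_api", "REST API (non-human)", "US", 0.1),  # bots count too (question 15)
]

pattern = TrafficPattern(peak_requests_per_min=1200,
                         average_requests_per_min=300,
                         peak_hours=(19, 22))

# From this model, derive per-persona load to feed into whichever tool you use.
for p in personas:
    print(p.name, round(pattern.peak_requests_per_min * p.share), "req/min at peak")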



What do you ask and look for when building a model for User Behavior and Traffic Pattern?



Sunday, November 19, 2023

Waterfall or Agile: Testing for Performance - Where to Start?

 

Do you understand Agile?  I have shared my understanding of it here; give it a read.

The eighth question from season two of 100 Days of Skilled Testing is:

Can you share some best practices for conducting performance tests within an Agile development environment?


Best Practices and the Agile


The irony is that Agile says there is no best practice.  It asks us to tailor and fit the practice to the context so that continuous delivery and value are delivered consistently, upholding the Agile principles.

Yet we talk about best practices in the Agile context, as in the eighth question asked here.

What is the effective way to test in the continuous delivery?

As a test engineer, how can I start thinking about and testing for performance from the inception of a feature's thought?  I see it is not hard to do so.  As you read further in this post, you will have a perspective and the awareness to do it.


Performance in Waterfall and Agile

I learn that performance is an experience.  It does not differ because of Waterfall or Agile.  If the performance is not a pleasing experience, it will impact stakeholders no matter whether it is Waterfall or Agile.

But, the question when evaluating for the performance is -- where to start, when to start, how to start, and with what to start?

As of today, I do not see differences in the mindset and skills one has to have for performance testing in Waterfall and in Agile.  The approach could differ in certain phases; otherwise, I see the same in both practices.

I will rephrase the eighth question to this:
What is your practice to evaluate the performance right from the start of product development in your project?
I do not want to wait until I hear: the development is completed and deployed; now we can start running the performance tests.

What can I do as part of performance tests from the first day of development and first commit?  This is my intent and area to look in strategizing the testing and tests.



The Culture of Engineering

At the start and the end of the day, when we developers start and finish the work,

  • How the work is done, and why, is defined by the engineering culture practiced by that organization.
    • The performance engineering of the software products and solutions being built will be driven by the culture practiced.

The Test Engineering and how we test and automate will be driven by the culture of engineering practiced in the organization.

Writing code not just for building the functionality but also for performance is a culture-driven factor.  The organization's engineering practice culture drives it!



Testing for Performance - Where to Start?


I'm sharing my research work that I'm doing and experimenting on performance engineering and performance tests.  I'm seeing the results and value out of it and so are the stakeholders.

Today, we are getting skilled in exploring and testing without a requirements document and SLAs in hand.  Aren't we?  Haven't you?

I use my MVPT to figure out the minimum performance tests for the feature.  As part of this, I will explore with the help of available aids to evaluate the performance.

To start, I will use these questions to figure out the performance tests:
  • What are the minimum viable questioning performance tests that you have to test this feature?
  • What are the minimum viable questioning performance tests that you have to test this workflow?


Unit Tests for Time and Space Complexity


I will work closely with programmers to gather the information below when the code for the feature is committed, as part of the Unit Tests (the sketch after this list shows one possible shape).
  • The execution time taken by the code of that feature - the Big O notation for space and time complexity
    • Usually the Unit Tests focus on functional tests and clean code practice
    • But when we, the test team, ask and push for performance data, this can come as part of the Unit Tests
      • An architect or a principal engineer can set an expectation on
        • What should be the time and space complexity of a code for a feature?
          • Each functions and blocks need to be evaluated on this
            • As said earlier, this depends on the engineering practice culture of an organization
            • If the culture wants it, it will be there; else just the functional code will be delivered and not the performance code
      • If the time and space complexity analysis outcome is not as expected, the code written has to be rethought and refactored
        • The review process needs to push it back
        • The comment with data has to be published
          • This will be useful to model the performance tests by test engineers who will be working on performance tests
      • Doesn't it look like an effective, useful practice as part of Performance Engineering right in the early stage?
        • This is very well applicable to projects running on Agile or Waterfall
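Here is a hedged sketch of what such a unit-level check could look like in Python.  The feature function, the input sizes, and the growth threshold are all hypothetical; in a real project the expectation would come from the architect or principal engineer, as described in the list above.

# A hedged sketch of one way a unit-level check on execution time could look.
# The function, sizes and threshold below are hypothetical examples.
import timeit

def find_duplicates(items):
    """Feature code under test (illustrative). Expected to be roughly O(n)."""
    seen, duplicates = set(), set()
    for item in items:
        if item in seen:
            duplicates.add(item)
        seen.add(item)
    return duplicates

def test_find_duplicates_scales_roughly_linearly():
    small = list(range(10_000)) * 2
    large = list(range(100_000)) * 2   # 10x the input size
    t_small = timeit.timeit(lambda: find_duplicates(small), number=5)
    t_large = timeit.timeit(lambda: find_duplicates(large), number=5)
    # For ~O(n) code, 10x input should cost far less than 100x the time.
    # The factor 30 is an assumed, deliberately loose bound to keep the test stable.
    assert t_large / t_small < 30, f"growth {t_large / t_small:.1f}x looks worse than linear"

if __name__ == "__main__":
    test_find_duplicates_scales_roughly_linearly()
    print("execution-time check passed")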

Do you have this in your project and Unit Tests written?

The time and space complexity questions should not be confined just to SDET [test engineer] interviews.  A test engineer has to ask for this and apply it in her or his day-to-day work.


Profiling Tests by Test Engineers


We testers do not get into the product's code analysis.  We have to build the skill to run profiling on the product's code and analyze the resource data.
  • Test Engineers can test the feature's code with the help of the IDE's profiling (runtime analysis) and collect the performance data, identifying the performance bottlenecks
    • This runtime analysis can profile for
      • Memory snapshots
      • Thread analysis
      • Monitoring resources
      • CPU and allocation profiling
      • And, more
      • The problems and risks can be reported upon analysis
    • Compare the performance data of two different solution approaches
This information will tell and indicate where the risks and problems are when we deploy the code.  In my opinion, this is useful information in modeling further performance tests.  It is first-hand information, which is very powerful, before we start using any other performance testing strategies and tools to aid the tests.
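As one concrete, hedged example of such runtime analysis, the sketch below uses Python's built-in cProfile (instead of an IDE profiler) to compare two hypothetical solution approaches to the same problem and print where the time goes.  The same habit applies with whatever profiler your IDE or stack offers.

# A minimal sketch of the kind of runtime analysis described above, using Python's
# built-in cProfile. The two functions are stand-ins for "two different solution
# approaches" to the same problem.
import cProfile
import pstats

def approach_string_concat(n: int) -> str:
    out = ""
    for i in range(n):
        out += str(i)          # repeated concatenation, repeatedly rebuilds the string
    return out

def approach_join(n: int) -> str:
    return "".join(str(i) for i in range(n))   # builds the string once

def profile(func, n=200_000):
    profiler = cProfile.Profile()
    profiler.enable()
    func(n)
    profiler.disable()
    stats = pstats.Stats(profiler)
    print(f"--- {func.__name__} ---")
    stats.sort_stats("cumulative").print_stats(5)   # top 5 entries by cumulative time

if __name__ == "__main__":
    profile(approach_string_concat)
    profile(approach_join)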



Get Started with Performance Engineering and Tests


These are available in the IDE.  We think of performance testing tools and ask how to test for performance.  To be precise, we test developers (test engineers) should change our mindset and make this shift first.  If not, as I say, we will first be a bottleneck to ourselves.  Did you know about this way of testing for performance?  Why not introduce it in your project and organization?

Seen this way, these test practices can be used right from the day we commit the feature's code.  This is a place to start for the performance tests.  Together with MVPT, this will be a differentiator and will guide the MVPT to design effective performance tests in the context.

I do not say these are best practices; there are no best practices.  But these are useful practices when the organization and stakeholders ask for them.  Let your organization and stakeholders know how well you can test for performance right from the first commit of the product's code.

To stop and end here,
  1. Do not just test for functionality from day one; also test for performance from day one.
  2. Influence your organization's engineering culture and developers to write not just functional code, but also performant code.




Is Performance a Perception to an Engineer and User?

 

When hearing about performance from customers and from engineers on the team, I see that each has a perception of it from using a product.  To one it was fast enough, to another it was as usual, and for one it was too slow.  Each is expressing their perception.  But what is the performance?


What is Performance?


I see that understanding and knowing "What is performance?" is important for everyone who is involved in building the software system and product.  In my opinion, this should be the starting point.  It is beneficial when everyone, as a team and a business, has a shared common understanding of it.

Performance is an emotion to a user!  Technically, the performance has multiple facets to it for understanding the capability of a system and its sub-systems.


Facets of the Performance


Which facet do you consider and call out as Performance?  This is another point where most of us do not align.  For example, below are the different facets that we usually hear and read a lot about:
  • Heap and Memory
    • Threads
      • Not supporting for concurrency
      • Not handled well in concurrency
      • Holding up other processes and causing bottleneck cases
    • Memory Leaks
    • Concurrency
    • Memory not reclaimed
    • and, more ....
  • CPU consumption
    • Open connections and its I/O
      • Held up in processing requests
    • Not enough resources to process
    • and, more ....
  • DB I/O
    • Open connections
    • Unindexed data
    • SPs and Queries holding the transactions
    • Incorrect configurations of DB Server and nodes
    • and, more ...
  • Disk I/O
    • Running out of space
    • RW I/O not responsive
  • Network consumption
    • Unmanageable transactions
    • Transaction's data, size and time
  • Latency
    • Latency in which interfaces?
    • Ineffective caching, queuing and messaging
  • GUI not rendered or painted
    • GUI exhibiting jank behavior
    • GUI partially rendered
    • GUI not in an interactive state
    • GUI not responding to an action
    • GUI and UI loading multiple times with no room for interaction 
  • Terminal yet to return and show the prompt
  • Older and deprecated libraries with latest dependencies
  • Server, orchestration, and sub-systems configured incorrectly
  • Hardware resources and its specifications used in a context
  • Display refreshing rate and frames lost with GUI rendered
  • Heat dissipation
    • From the hardware
    • Experienced in the environment
  • Time taken for a request to reach the actual end point
  • Execution time on receiving a request
  • And, more...

Further, we have classified it into frontend and backend; both are important and equally needed.  The webpage has different KPIs and metrics to determine where its performance stands.  Likewise for the backend.

Which facet of performance needs to be evaluated, and in which phase?  Why?  The perception will be established in testing, by how we test it, and by how one uses the product.

With all these facets of performance, where does one start and what should one look at?  This is one of the questions we are left with in both Waterfall and Agile.  That is where the eighth question of 100 Days of Skilled Testing comes in -- Can you share some best practices for conducting performance tests within an Agile development environment?

Have you asked these questions of yourself and your team?
  1. What is performance to you and to your stakeholders?
  2. What should I consider in evaluating for the performance of your software system?
  3. What is the practice I want to pick to evaluate the performance?
  4. Where do I start and how?  


Saturday, October 21, 2023

The Internal Metrics of High Performance Mobile App

 

Do you have the question -- what are KPIs and Metrics, and what are the differences between them?  Then read this blog post.  It will help in knowing the differences and how each helps the other mutually while remaining distinct.

The sixth question from season two of 100 Days of Skilled Testing is:

How do you determine the essential performance metrics and Key Performance Indicators (KPIs) when assessing the performance of a mobile native app?


Why Do I Focus on Metrics Here?

Here, in this blog post, I will focus on Metrics and not KPIs.  If you have read the above referenced blog post on KPIs and Metrics, you will learn,

  • The KPIs are defined objectives by the business.
    • Knowing this is important.
    • But, to accomplish the KPIs objective, one has to extract the contextually suitable metric from the system and have to evaluate it.

As a test engineer, I evaluate from technical perspective with an orientation of business thoughts.

  • So, identifying and defining the metrics makes more sense for my role.
    • Hence, I pick the Metrics.

A KPI example for a mobile app:

  • A set percentage [what percentage?] of five-star ratings in the store, with positive emotions in the reviews.

Whatever the KPIs may be, I will have to correlate them with metrics so that we work towards accomplishing the KPIs set.



What is Performance to Mobile Native App?


The Native App

A native mobile app is an app developed for a particular mobile device or platform, like Android or iOS.  A native app developed for one platform cannot run on another platform.  That is, an Android app cannot run on an iPhone, and vice versa.

Today, Android apps are typically written in Kotlin, and iOS apps in Swift.  Native apps can be developed so that they run in both offline and online mode, based on their business objectives.

An example of the native app which we can easily understand is,

  • WhatsApp for Android and iOS devices


Performance and Native App

First of all, my mobile device is not my customer's device.  Android device fragmentation makes it much more challenging, given the varying computing power OEMs offer across devices.

Though iOS has its own fragmentation in OS versions and device computing power, it is not huge when compared with the fragmentation of Android devices.  This is a challenge for an Android mobile app in all aspects, and especially in performance.

Performance will indicate different advantages, risks, and problems for a native app on a platform and its devices.  Performance can be classified into different areas for native mobile apps.

It can range from deep technical areas to the experience a user expresses while using the app in a given context.  Everything is performance here!

Hence it is not easy to talk about performance in the mobile app space.  While it is such an ambiguous and hard subject, how can one pull the metrics for performance here?

From a technical perspective, the word "performance" is not specific; it is vague.  If one says, the mobile app is performing well, what does it mean?

  • Is it fast?
  • Is it consuming low battery power?
  • Is it consuming less memory?
  • Is it not consuming much network?
  • Is it smooth to interact, responsive, and no jank experience?
  • Is it intuitive and secure?
  • Is it the number of crashes?
  • Is it having consistent updates and bug fixes?

What do you say?

All of this leads to a question: what is a high performance app?  Ask this question of yourself, your tech team, your business team, and your consumers.  Figure out what a high performance app is to them.  This helps!


The Common Metrics - Android and iOS


Below are some of the common metrics for which a Test Engineer extracts data and evaluates it via testing and automation.

  1. App Install Time
  2. App Launch Time
    1. Cold Start
    2. Warm Start
    3. Hot Start
  3. Private Memory Size of App on available Heap
  4. Number of Views
  5. Garbage Collection - Frequency
  6. Data Residual Size and Sharing
  7. Network Payload Size
  8. Energy Consumption in Workflows
  9. Wakelocks and its Impact
  10. Frames Skipped
  11. Open Threads and Processes
  12. Storage & Data Size
  13. App Size

And more.  Each of these is its own deep area for analysis and performance tuning; the sketch below shows one way to start extracting such data.
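As a hedged sketch of extracting two of these metrics for an Android app, the snippet below drives adb from Python to measure a cold-start launch time and read the app's total memory (PSS).  The package and activity names are placeholders, and the dumpsys output format varies across Android versions, so treat this as a starting point rather than a ready-made harness.

# A hedged sketch: pull app launch time and memory over adb from a test machine.
# Requires adb on PATH and a connected device; package/activity names are placeholders.
import re
import subprocess

PACKAGE = "com.example.app"                 # hypothetical package name
ACTIVITY = f"{PACKAGE}/.MainActivity"       # hypothetical launcher activity

def cold_start_time_ms() -> int:
    # Force-stop first so the next launch is a cold start.
    subprocess.run(["adb", "shell", "am", "force-stop", PACKAGE], check=True)
    result = subprocess.run(
        ["adb", "shell", "am", "start", "-W", "-n", ACTIVITY],
        capture_output=True, text=True, check=True,
    )
    match = re.search(r"TotalTime:\s+(\d+)", result.stdout)
    return int(match.group(1)) if match else -1

def total_pss_kb() -> int:
    result = subprocess.run(
        ["adb", "shell", "dumpsys", "meminfo", PACKAGE],
        capture_output=True, text=True, check=True,
    )
    # The TOTAL line's format differs across Android versions; this regex is lenient.
    match = re.search(r"TOTAL(?:\s+PSS)?:?\s+(\d+)", result.stdout)
    return int(match.group(1)) if match else -1

if __name__ == "__main__":
    print("cold start (ms):", cold_start_time_ms())
    print("total PSS (kB):", total_pss_kb())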

I have been practicing this for the last 11 years in my testing.  It is one of my research areas in Software Testing & Engineering.

These days, instrumentation also offers different metrics for mobile apps, which can be correlated with the KPIs set for the native mobile apps.



To conclude, start understanding the internals of Android and iOS.  It opens you up to the performance and practice of high performance apps.

I'm happy to share if you are interested in practicing these subjects and testing.  Let us connect and converse!