Hardware and Programming Language
If I have no understanding of a system's architecture, that is, how the system is designed and implemented technically, along with its business purpose, then I cannot test for performance rationally and technically.
In this context, awareness and understanding of the following is necessary and important.
- The infrastructure where the system and its services are deployed and consumed
- The hardware on which the system is deployed
- An understanding of that hardware, along with its limitations
- The hardware on which the service of the deployed system is consumed
- The limitations of the consumer's hardware
- Understanding the CPU and its Cores
- How are we programming the threads to execute on these Cores?
- The programming language used to implement the system; this plays a vital role
- Does it allow the threads to run on two and more different cores at a time?
- Or, does it confine the threads to run on one Core?
- The way in which we have programmed the system's instruction execution at the thread (subroutine) level
- How are the threads implemented, and how do they execute on a CPU and its Cores?
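To make the first of these questions concrete, here is a minimal Python sketch (one possible way; `os.sched_getaffinity` is not available on every platform, hence the fallback) that asks the operating system how many cores the machine reports and how many this particular process is actually allowed to use:

```python
import os

# Total logical cores the machine reports.
total_cores = os.cpu_count()

# Cores this particular process is allowed to run on.
# sched_getaffinity is not available on every platform (e.g. macOS),
# so fall back to the total count.
try:
    usable_cores = len(os.sched_getaffinity(0))
except AttributeError:
    usable_cores = total_cores

print(f"machine reports {total_cores} cores, process may use {usable_cores}")
```

The two numbers can differ: a container or a pinned process may see fewer usable cores than the machine has, which already affects how much parallelism our threads can get.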
CPU Cores and Software System's Performance
"We will add more CPU with more cores. This will improve the exeuction time and improves the performance. The business will not be impacted."
Does this make sense technically? What are your thoughts on the above statement as a Test Engineer testing the system for different quality criteria?
If I do not have awareness of the information above, I will not be in a position to test and advocate well for performance.
Does Adding Cores Reduce the Execution Time?
Just adding more cores to a CPU does not always speed up a program's execution time.
I have learned that if a program designed to run on multiple cores has threads that must run on one core, then this limits the maximum speedup [by reducing execution time] we can achieve by adding more cores.
Also, the programming language used has a role. If the language implementation uses a Global Interpreter Lock (GIL), then a process created from it executes only one thread's instructions at a time, regardless of the cores it currently has access to. That is, even though the process has access to multiple cores at a given time, its instructions will be running on just one core. Which means the threads cannot run on multiple cores in parallel; the instructions will be running on just one core and just one thread at any point in time. Eventually, this leads to a higher execution time for the program's process.
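A minimal sketch of this point, assuming CPython: the four threads below all finish and produce correct results, but because of the GIL only one of them executes bytecode at any instant, so for CPU-bound work like this they take roughly as long as running the four loops one after another.

```python
import threading

def busy_sum(n, out, i):
    # CPU-bound work: pure Python bytecode, which the GIL serializes.
    out[i] = sum(k * k for k in range(n))

# Four threads, each doing the same CPU-bound loop.
out = [0] * 4
threads = [threading.Thread(target=busy_sum, args=(50_000, out, i))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All four results are correct; correctness is not the problem here.
# Parallelism is: the four loops did not run on four cores at once.
print(out)
```

Note the distinction this makes visible: the threads are a correctness mechanism either way, but under a GIL they are not a speedup mechanism for CPU-bound code.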
Should I say a high execution time is high performance, and call it a highly performant system? That is contextual! What do you say? What should a consumer of the service (the business) say about this performance?
To summarize, adding more cores to a CPU does not necessarily speed up the execution time of a program. It depends on how we have designed and written the program, and on the programming language used.
To know more about the GIL:
- Global Interpreter Lock -- https://en.wikipedia.org/wiki/Global_interpreter_lock
- https://langdev.stackexchange.com/questions/1873/what-is-a-global-interpreter-lock-and-why-would-an-interpreter-have-it