Wednesday, July 5, 2017

PID Cat - Reading the logs and investigating Android apps under test



Reading logs and using them to design better tests is, in my opinion, one of the key skills for a tester. I insist on it and encourage testers to practice it, and I assist them in learning the logs and making use of them. If you are not sure whether the product you are testing has a log, it is time to find out.

It can be server logs, proxy logs, client logs, hardware logs, deployment logs, etc. The contents and types of logs vary from product to product. What goes into a log also varies, and it depends on the log level set for printing into the log file.
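As a small illustration using Python's standard logging module (the messages here are made up): with the level set to WARNING, the debug and info lines never reach the output, which is exactly why a minimally configured log can hide what you want to see while testing.

```python
import logging

# With the level set to WARNING, anything less severe is dropped.
logging.basicConfig(level=logging.WARNING, format="%(levelname)s: %(message)s")

logging.debug("cache miss for key=42")      # suppressed at this level
logging.info("user session started")        # suppressed at this level
logging.warning("disk usage at 85%")        # printed
logging.error("payment gateway timed out")  # printed
```

Lowering the level to DEBUG in a staging environment makes all four lines appear, which is usually what a tester wants.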

It happens that logging is turned off or set to a minimal level, on the grounds that it consumes disk space and IO on the box where it lives. On the other hand, DevOps engineers and programmers usually monitor the logs in production. Logs are one of the most useful utilities for knowing what is happening in production. If they use logs in the production environment, can't we testers use the same in the staging and pre-production environments? Won't that help us test better? Yes, it will!

Understanding how the logging feature is written for the product is one of the key tasks usually missed by a practicing tester. Learning how to use the logs influences how we interpret observations and design further tests.

In this post, I will share one tool which helps in the context of testing Android mobile apps. Logs and Android devices; if you say "logcat", yes, it helps to print the logs at different logging levels. But what if I want to pick out the log of one particular Android app from the debug stream of logcat? That is not an easy task with logcat's raw output stream.

I have been using PID Cat for years now and I find it very useful. It is written by @JakeWharton in Python. It prints the logs as colored text, and since the program is open we have the choice to change the colors and pick what we want on our box; we can also contribute further to the tool. On top of that, I can fetch the log for just one particular Android application. The readability of the log is much better with PID Cat. Also read about the logcat color script from Jeff Sharkey, on which PID Cat is based.
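In spirit, PID Cat resolves the app's process ID (via `adb shell ps` for the given package name) and keeps only the logcat lines carrying that PID. A minimal sketch of the filtering idea, with made-up sample lines and PIDs for illustration:

```python
import re

# logcat "brief" format looks like: LEVEL/TAG( PID): message
BRIEF_LINE = re.compile(r"^([VDIWEF])/(.+?)\(\s*(\d+)\): (.*)$")

def filter_by_pids(lines, pids):
    """Keep only logcat brief-format lines whose PID is in `pids`."""
    kept = []
    for line in lines:
        m = BRIEF_LINE.match(line)
        if m and int(m.group(3)) in pids:
            kept.append(line)
    return kept

# Made-up sample: only PID 1234 belongs to the app under test.
sample = [
    "D/MyApp  ( 1234): onCreate called",
    "I/System  (  567): background sync",
    "E/MyApp  ( 1234): NullPointerException in onClick",
]
print("\n".join(filter_by_pids(sample, {1234})))
```

The real tool does more than this (coloring, tag formatting, tracking process restarts), but this is the core trick that plain logcat output does not give you easily.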


How to get PID Cat?

On Mac OS X, PID Cat can be installed via brew. If brew is not configured on the Mac, configure it first. Once brew is set up, pidcat can be installed with the command below.
brew install pidcat

On Ubuntu, PID Cat can be installed with the below commands in a terminal.
sudo apt-get update
sudo apt-get install pidcat

On Windows, install Python and configure the path. I have tried it with Python 2.7 and 3.x during my tests, and it appears to work in my test environment.

Below are the steps to use PID Cat on a Windows machine.

  1. Install Python and configure the path
  2. Make sure the path of Python is set
  3. Make sure ADB is installed and its path is configured
  4. PID Cat is a single Python program, available in Jake Wharton's pidcat repository on GitHub
  5. Get this Python program file and save it locally on the drive
  6. Go to the drive where pidcat.py is saved and enter the command in the terminal -- python pidcat.py com.package.name
  7. This should start printing the logs; on Windows 10, I see colored log text

If using Cygwin on Windows, install the Python package and make sure it is installed. Now go to the root and access the directory 'cygdrive'. From there, choose the physical Windows partition where the pidcat.py file is stored. Then use the below command to fetch the log of a specific app.
python pidcat.py com.package.name

Note that, for all of the above, ADB has to be configured and set in the path; if not, it will create trouble. The general log of the Android device can also be collected via PID Cat, by running the command without a package name -- python pidcat.py

Android device logs in colored text in Windows OS via Cygwin


Android device logs in colored text in Windows OS terminal

Now that PID Cat is set up and running, let us test and debug the apps much better.

Saturday, April 1, 2017

Web Client Performance: Webpage's Response Time and HTTP Requests -- Part 1


I'm practicing Web Client Performance Testing (WCPT) along with my colleague Chidambara. As we practice together, we are learning how to assess the current capability of a web page and how to interpret it. Why do I say this? Because when I test, I make a report, and that test report has to help someone decide and assist in taking the necessary action, if action has to be taken. If not, the purpose of the test report is merely to record what I tested and what my interpretation is; for that alone, I need not write a test report. The same goes for automation, doesn't it? We get a report on running the test suite, and that report is used to decide what to do based on our interpretation of it. Coming back to WCPT, this is how I happened to learn it, and I'm sharing my learning with my fellow testers.

I picked the Google India search page and monitored its requests and responses. At the time of testing, I used Firefox in private mode. I noticed a total of 16 requests fired and close to 1.4 MB of data transferred. Overall it took 4.62 seconds on a 100 Mbps internet line. The picture below shows the requests fired and their timeline.
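Rather than reading these numbers off the screen, the same summary can be computed from the browser's exported capture (e.g. a HAR file). A rough sketch of that idea; the four entries below are made-up stand-ins for the real 16-request capture, not the actual data:

```python
# Summarize a captured request list: count, bytes transferred, timings.
# These entries are illustrative stand-ins, not the actual Google capture.
entries = [
    {"url": "http://google.co.in/",               "status": 302, "bytes": 500,    "ms": 147},
    {"url": "http://www.google.co.in/",           "status": 302, "bytes": 600,    "ms": 147},
    {"url": "https://www.google.co.in/",          "status": 200, "bytes": 45000,  "ms": 484},
    {"url": "https://www.google.co.in/logo.png",  "status": 200, "bytes": 310000, "ms": 900},
]

total_requests = len(entries)
total_kb = sum(e["bytes"] for e in entries) / 1024

# Redirect hops happen before the html document; the document is ready
# only after the redirects plus its own download time.
redirect_ms = sum(e["ms"] for e in entries if e["status"] in (301, 302))
document_ms = redirect_ms + next(e["ms"] for e in entries if e["status"] == 200)

print(f"{total_requests} requests, {total_kb:.1f} KB transferred")
print(f"redirects: {redirect_ms} ms, html document ready at: {document_ms} ms")
```

A real script would load the HAR JSON exported from the browser's network panel instead of a hard-coded list; the arithmetic stays the same.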

HTTP requests fired from Google Search India page
I notice the first two redirections taking a total of 294 ms, and then 484 ms to load the html document. In total, 778 ms is used to load the html document; that is close to a second, say 1 second if I round it up. Without the html document the web page cannot load, and for this reason the html document is the first content to get downloaded on the client, i.e. in the browser. Notice that there are two redirects with HTTP status code 302.
I will walk through, in a later blog post, how redirects can potentially affect web client performance; Chidambara and I practiced the same a few days back.

Moving ahead, I see the other 3 seconds are used to download the images, stylesheets (CSS) and scripts. This tells me that more time is taken to download these elements of the page than the html document. Looking into other websites' pages, I observed the same at the time of testing. From that I learn these elements are the time consumers if not handled well, thereby causing a slow response time; and the slower the response time, the longer it takes to get a functional webpage.
Now you see, while carrying out a functional test you also partially test for performance, and vice versa.

The 'capability', i.e. the performance attribute of the webpage, will be -- "how quickly it loads on the client, i.e. what is its response time".

Note that the expectation is that it should be functional and the experience of using it should be pleasant to the user; these are the implicit expectations behind the words 'capability' and 'response time'.

In the above picture, the total time for receiving the html document is 778 ms. But the DOMContentLoaded event is fired after 1 second, somewhere near the 1.2 second mark. Where did those extra ~440 ms go, then? On my 100 Mbps network bandwidth and a machine with 8 GB RAM, this is still a question to me. What will it be on other network bandwidths and client machine configurations? Why did DOM parsing take time?
I see there is JS being received in the 8th request. I'm not sure whether it is loaded synchronously or asynchronously. Technically, where there are no dependencies between the scripts or other page elements, loading them asynchronously helps.
Then there is CSS being received in the second-to-last request. If the CSS isn't received early enough, page rendering has to wait for it, as styles cannot be applied to the parsed DOM of the loaded html document. It is a rule of thumb to keep CSS among the first requests received from the server, though the context of the website's design can change this rule of thumb.
If this gets tested and tuned, can the response time of the Google India search page come close to 3.3 to 3.5 seconds?

If this is just one webpage of a website, how many webpages does the product I'm testing have? Now I understand better than ever that, just as functionality is coded into the product, performance should be designed and coded with equal importance. I derive my tests with this thought process, as it serves as a base for building the tests and for sampling the variation and interpreting the test observations.


What did I learn here?

These are my key learnings from interpreting the web client's performance:
  • The more HTTP requests there are, the higher the webpage's response time
    • Fewer HTTP requests is good from the point of view of the webpage's response time and performance
  • The html document takes a minimal share of the total load time of the web page; the rest of the time is taken by the webpage's elements
    • Handling and optimizing this is important for better web client performance
  • The more the page weighs, the higher the response time of the webpage
    • Need to test and figure out whether the page weight can be optimized, so the response time of the webpage improves
  • If, say, 15% of the time is for the html document of the webpage and the other 85% is for the webpage's elements, then the problem area with respect to response time is evident
    • If the client is fine-tuned for better performance, the experience for the targeted user should be pleasing, with better response time
    • On top of this, if server performance is optimized, will the targeted user say, "it is just like that, so fast"? I do not know; but as a practicing tester I see that fine-tuning web client performance is possible for a tester while she or he tests the website for functionality or any other quality criteria. In parallel, taking care of server performance based on contextual learning from the test observations will be very useful.
  • Know this when looking at a webpage:
    • the number of HTTP requests fired;
    • the time taken by each request;
    • what each request carries from the client, and back from the server to the client;
    • the size of each HTTP request transaction;
    • the page weight and the contribution of each element in the webpage;
    • when the DOMContentLoaded event got fired in the timeline;
      • in the above picture it is marked with a blue line in the timeline
      • it is fired when the html document is completely parsed and loaded, without waiting for other elements of the webpage, i.e. css, js, images etc.
    • when the Load event got fired in the timeline;
      • in the above picture it is marked with a red line in the timeline
      • it is fired when the page elements have finished loading
    • the overall response time of the webpage;
    • what each page element is and why it is needed;
      • it is very important, from a testing aspect, to know whether it is actually needed for the webpage or website
      • if it is not needed, then as a tester I should be in a position to advocate my observations out of the tests
      • if it is needed, then as a tester I should be in a position to advocate my observations out of the tests, and how it can be tuned and optimized
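The split between the html document and the rest of the page can be checked with simple arithmetic. Reusing the timings observed in this test session (778 ms for the document out of a 4.62 s total), the document's share comes out around 17%, in the same ballpark as the 15% figure above:

```python
# Share of total load time spent on the html document vs. other page
# elements, using the timings observed in this session.
document_ms = 778    # redirects + html document download
total_ms = 4620      # overall page load time

doc_share = document_ms / total_ms * 100
elements_share = 100 - doc_share

print(f"html document: {doc_share:.0f}% of load time")
print(f"images/css/js: {elements_share:.0f}% of load time")
```

The exact percentages will differ per page and per test run; what matters is that the bulk of the time sits with the page elements, so that is where optimization effort pays off first.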

It is important to know what HTTP is and how the web communicates, so that we understand the influence of bandwidth and latency on response time. In the next post, I will share what I have understood about the same. Later, in subsequent posts, I will share how Chidambara and I progressed in learning and practicing this.