Showing posts with label Tools. Show all posts

Sunday, February 25, 2024

Backtracking of Testing, Security and Tools

 

When I started my software testing career in 2006, my thought was -- what tools should I use, so that,

  • I can do the testing that is sought after
  • I can test for performance
  • I can test for security

Moving from a search for tools to building the mindset and attitude -- it is a journey!  It took me time to see this journey; I hopped onto it in 2011.  I see that this journey has no end, though I know where I should go and what I should reach.  I'm on this journey.

I had no mentors.  I had no seniors in software testing to guide me or to discuss my thought process with.  I had developers (programmers) who had little or no interest in testing, so it did not matter to them.  But they have helped me become a better tester, and I'm grateful to them.  Back then, the community was not as connected and organized, and did not share knowledge the way it does in 2024.  Software testing was not considered or seen as a technical activity then.  I have stood up, fought, demonstrated, and delivered my testing as a technical activity.  I'm continuing to do so.


Today, on 24th Feb 2024, I read the below question in a community's social space and decided to write this blog post.
Hey, everyone .... Can anyone please suggest a good tool for API security testing?

This question resonates with test engineers.  Most of us test engineers still look and ask for tools when it comes to security testing.  To test engineers, performance and security testing are still conceived of as activities done with tools alone.  In reality, they are not!  If you hold such a thought, or you come across such a question to answer, this blog post is for you.


Backtracking the Problem Identification

In programming, we have an approach named Backtracking.  It is about exploring possible ways to find possible solutions to a problem, and then picking the solution that works best in context.
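To make that concrete, here is a minimal sketch of backtracking (my illustration, in JavaScript; not tied to any tool): find the combinations of numbers that add up to a target, abandoning any partial path that overshoots.

  // Backtracking sketch: choose a number, explore further, and un-choose
  // (backtrack) when the current path cannot lead to a solution.
  function findCombinations(numbers, target, start = 0, path = [], results = []) {
    const sum = path.reduce((a, b) => a + b, 0);
    if (sum === target) {
      results.push([...path]);                   // one possible solution
      return results;
    }
    for (let i = start; i < numbers.length; i++) {
      if (sum + numbers[i] > target) continue;   // prune: this path cannot work
      path.push(numbers[i]);                     // choose
      findCombinations(numbers, target, i + 1, path, results); // explore
      path.pop();                                // un-choose (backtrack)
    }
    return results;
  }

  findCombinations([2, 3, 5, 7], 10);            // → [[2, 3, 5], [3, 7]]

Out of the possible solutions collected, the one that works best in context is picked.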

What's the problem here?  Testing, Security and Tools. Are you with me so far? Let us backtrack this problem.

Note: I see a difference between the words 'possible' and 'all'.  Hence, I use the words "possible ways" and "possible solutions" and not "all ways" and "all solutions".


Bounties and Entry

There are reputed bug bounty programs for security testing.  To get into these bounties, one has to showcase her/his discoveries and skills through a recognized portfolio.

The tools are accessible to all.  Community edition and licensed edition tools are available, and we use both editions.

  • But then, why can't all of us with tools get into such invited security bug bounties?  
    • You will answer this question if you ask it of yourself.  I hope this backtracking has helped by now!

Security Engineering is a vast practice area in Software Engineering, and there are dedicated security engineers in role.  But we test engineers can take up testing the security of the software systems which the team is programming and building.

I advise a practicing test engineer:
  • To start by building an interest in security engineering.
  • To consistently hone and build the mindset, attitude, and skills needed for testing the security aspects.
  • To pick simple problems and solve them.  Do this consistently, while you explore the layers (see the sketch after this list).
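As one example of a simple problem to start with, here is a minimal sketch of mine -- not a prescribed method, and the URL is a placeholder -- run from the browser console on the system under test's own origin, or in Node 18+: check which security-related response headers come back.

  // Sketch: inspect security-related response headers of a page.
  // The URL is a placeholder; point it at your own system under test.
  const response = await fetch("https://example.com/");
  for (const name of [
    "content-security-policy",
    "strict-transport-security",
    "x-content-type-options",
    "x-frame-options",
  ]) {
    console.log(name, "->", response.headers.get(name) ?? "(missing)");
  }

A missing header here is a conversation to have with the programmers, not an automatic bug; that judgment is where the mindset matters more than the tool.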

While this is done consistently, it is time to find mentors in Security Testing.  The mentors will assist you in practicing how to test effectively for security, making use of simple, contextually necessary tools.  Also, a mentor will show you how to test for security without tools, to an extent.  A tool is effective only when you know how to use it.  The tools help immensely only if I can already test for security.

To backtrack from a different perspective: did any tool that you use find a P1 security problem [or risk] by itself in its scan?  Did your programmers acknowledge that risk or problem?  I will pause with these two questions to you.



Today, my testing for security is confined to the systems that I test.  I test web applications, mobile apps, web APIs, and databases.  I can assist here, if you do the homework and ping me.



Saturday, February 3, 2024

Performance Testing - The Unusual Ignorance in Practice & Culture

 

I'm continuing to share my experiences and learning for the 100 Days of Skilled Testing series.  I want to keep these short, as mini blog posts.  If you see that detailed insights and conversations are needed, let us get in touch.


The ninth question from season two of  100 Days of Skilled Testing is

What are some common mistakes you see people making while doing performance testing?  How can they avoid them?


Mistakes or Ignorance?

It is a mistake when I do an action even though I'm aware that it is not right in the context.

I do not want to label what I share in this blog post as mistakes.  Rather, I call it ignorance, whether or not the awareness and the experience are there.

The ignorance spoken of here is not tied just to the SDLC.  It is also tied to the organization's practice and culture, which can create problems.

For this blog post's context, I group the ignorance into two categories -- Practitioner and Organization.

  1. Practitioner's ignorance
    • Not understanding performance, performance engineering, and performance testing
      • When performance testing is mentioned, taking it as "it is load testing"
      • No awareness of what performance and performance engineering are
        • Going to the tools immediately to solve the problem, without knowing what the performance problem statement is
      • Be it web, API, mobile, or anything else,
        • Going to one tool or a set of tools and running tests
      • Not much thinking about how to design the tests in the performance testing being done
      • Ignoring math and statistics, and their importance in performance analysis (see the sketch after this list)
      • No idea of the system's architecture and how it works
        • Why is it the way it is?
      • The idea of end-to-end is extended and used in testing for performance, and then there is a hard time understanding and interpreting the performance data
        • How many end-to-end flows have your tests identified?
        • Can we test for performance across all these identified and unidentified end-to-end flows?
      • Relying on resources/content on the internet and applying them in one's context without understanding them
      • No idea of the tech stack and how to utilize the testability it offers in evaluating performance
      • Not using or asking for testability
      • Getting hung up on the two or three tools most spoken about and discussed on the internet
      • Applying tools and calling that performance testing
      • Not attempting to understand the infrastructure and resources
        • How they impact and influence the performance evaluation and its data
      • The idea of saturation of resources
        • Thinking of it as a problem
        • Thinking of it as not a problem
      • Not working to identify where the next bottleneck will be while solving a current bottleneck
      • What to measure?
      • How to measure?
      • When to measure?
      • What to look for when measuring?
      • Not understanding the OS, hardware resources, tech stacks, libraries, frameworks, programming language, CPU & cores, network, orchestration, and more
      • Not knowing the tool and what it offers
        • I learn the tool every day; today, it is not the same to me as it was yesterday
          • I discover something new that I was not aware existed or was offered
          • I learn new ways of using the tool with different approaches
      • No story in the report, with information/images that are self-describing to most who read it
      • And more; but the points above resonate with most of us
  2. Organization's ignorance
    • At the org level, first and foremost, it is ignorance of Performance Engineering
      • Ignoring the practice of performance engineering in what is built and deployed
      • Thinking and advocating that increasing the hardware resources will increase and better the performance
        • In fact, performance will deteriorate over a period of time no matter how much the resources are scaled up and added
      • Ignoring performance evaluation and its presence in the CI-CD pipeline
      • Insisting that the performance tests in the CI-CD pipeline should not take beyond a few minutes
        • What is that "few minutes"?
      • Not prioritizing the importance of having requirements for Performance Engineering
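To illustrate the math-and-statistics point above, here is a minimal sketch of mine (in JavaScript, with made-up latency numbers): a mean response time hides outliers that percentiles expose.

  // Sketch: percentiles versus the mean for a set of latency samples.
  function percentile(samples, p) {
    const sorted = [...samples].sort((a, b) => a - b);
    const index = Math.max(0, Math.ceil((p / 100) * sorted.length) - 1);
    return sorted[index];
  }

  const latenciesMs = [119, 120, 122, 125, 127, 128, 130, 2400]; // made-up samples
  const mean = latenciesMs.reduce((a, b) => a + b, 0) / latenciesMs.length;
  console.log("mean:", mean.toFixed(1), "ms");               // 408.9 ms, skewed by one outlier
  console.log("p50 :", percentile(latenciesMs, 50), "ms");   // 125 ms
  console.log("p95 :", percentile(latenciesMs, 95), "ms");   // 2400 ms

Reporting only the 408.9 ms mean would tell a very different story from the 125 ms median and the one request that took 2.4 seconds.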

Recently, I was asked a question -- how to evaluate the login performance of a mobile app using a tool "x"?

In another case, I saw a controller holding all the HTTP requests made while using the web browser, with these requests being run in a tool to learn the numbers.
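For a concrete picture, such a start could look like this minimal sketch of mine (the endpoint and payload are placeholders; runs on Node 18+ or in a browser console on the same origin):

  // Sketch: fire the same request repeatedly and record wall-clock timings.
  // This produces raw numbers only; it is not performance engineering.
  const timingsMs = [];
  for (let i = 0; i < 20; i++) {
    const start = performance.now();
    await fetch("https://example.com/api/login", {   // placeholder endpoint
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ user: "demo", password: "demo" }), // dummy payload
    });
    timingsMs.push(performance.now() - start);
  }
  console.log(timingsMs.map((t) => `${t.toFixed(0)} ms`).join(", "));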


I do not say this is the wrong way of doing it.  It is a start.

But we should NOT stay here, thinking that this is performance engineering and that this is how tests are run for learning performance aspect[s].


To end: performance is not just -- how [why, when, what, where] fast or slow?  If that is your definition, you are not wrong!  That is a start, and good for a start; but do not stick to it alone and call it performance.  Performance is capability.  It is about getting what I want in the way I have been promised and expect; this is contextual, subjective, and relative.  The capability leads to an experience.  What is the experience being experienced?

Sometimes, serving the requests at what you call a slow pace is performance.  What is slow, here?

The words fast and slow are subjective, contextual, and relative.  They are one small part of performance engineering.

That said, let me know: what have you been ignoring or unaware of in the practice of Performance Engineering & Testing?


Sunday, April 26, 2020

Web: Debugging the Errors using Browser's Utilities



My curiosity rises when I come across problems that have no first-hand clues.  I get in and start debugging, looking around first at what is available.  Here is one such case, which was mentioned in the Facebook group of The Test Tribe community.  It had a screenshot of a browser (not sure which browser) with a URL and an error message for an untitled tab -- "This webpage was reloaded because a problem occurred."


I can't guess what went on there!  But I can see that there was a reload of the page.  The reason for it could be anything, but I will take at least two into consideration to start:

  1. Browser crash, and 
  2. DOM failed and paint did not happen (for 'n' reasons, not known at the time of witnessing it).

My questions to start looking for insights:

  1. How will I know what actions did I make with browser?
  2. How will I know what was loaded and not loaded?
  3. How will I know if there were any errors in loading the web page?
  4. How will I know if there was a crash in the browser? (Assuming this is a rare case, but it cannot be denied; I have witnessed browser crashes when I tested for cross-browser compatibility of web applications.)


Typically, I prefer browsers in this order when I want to assess the event actions and performance -- Chrome, Firefox, and then any other browser.  Note that how the event actions and performance are handled differs from browser to browser.

Opening the below Chrome URLs in one tab of Chrome, while using the web application in another tab, records and collects information that can be useful for debugging.  That way, I don't have to take notes of my actions in parallel while I test in this context:

  • chrome://user-actions  (This records what actions I'm doing with my open browser window instance)
  • chrome://history   (This shows the places I have visited)
  • chrome://net-export  (I use this only when I actually need to debug much deeper; otherwise, I do not enable it.  It can influence the performance of the browser when enabled.)
  • chrome://crashes   (This lists the crashes of the browser)


Make a note of what you are collecting in the file when using the net-export utility in Chrome.  Credentials and private data can get recorded in that file as well.  If it is to be shared with someone, know the risk of doing so.  Likewise, in Firefox, referring to the available "about" pages is useful; for example, about:crashes.

To know if a resource was requested, and whether it loaded or not, the below commands in the console of Chrome and Firefox help:

  • performance.getEntriesByType("resource");
  • performance.getEntriesByType("navigation");
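As an illustration of reading what those commands return (my sketch, not from the original case), each resource entry carries timing and size fields that can be logged:

  // Sketch: list each loaded resource with its duration.  A transferSize
  // of 0 can hint at a cached response, or a cross-origin resource that
  // does not expose sizes (no Timing-Allow-Origin header).
  for (const entry of performance.getEntriesByType("resource")) {
    console.log(
      entry.name,
      entry.initiatorType,
      `${entry.duration.toFixed(0)} ms`,
      entry.transferSize === 0 ? "(cached or size not exposed)" : ""
    );
  }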


I have not found IE giving this detailed information; or, I'm not aware of it.  If you are aware of it, please share.  I have not debugged much in Safari in recent times.  Usually, my technical heuristic is: if it "looks to work" in Chrome, it "more likely looks to work" in Safari.  The CSS particulars need a closer look, specifically with Safari.

These are a few things which I keep pre-set up before I test web applications in a browser.  The possibility of getting the required information -- to debug and report a bug with technical details -- is high when this setup is in place.

In IE, I debug from the point where the error occurs by following the stack trace details.  When done together with a programmer, it helps both of us very much.  Apart from the above, on searching the web, I found several reasons stated for this error, ranging from cache to an invalid time set on the system where the browser is used.  I will cross-check quickly for those as well, while I have this pre-setup done and am collecting the required information.