Sunday, April 26, 2020

2020: It Is Costliest If Not Shipped Quickly - Fixing a Few Bugs Later Is Okay!




When you read this post, I assume you are working in a team or organization which has imbibed Agile and its various approaches into practice.  I see it as more of a mindset and culture than just a practice.  I have worked in teams and projects which followed the Waterfall approach.  I'm now working with teams and projects which claim to be Agile in approach.

Here is my tweet and the question from @J_teapot.  His tweet was interesting and striking to me; hence, I write this post.  Thanks, @J_teapot, for asking.  I enjoyed that question of yours.



Days of Waterfall


It was in the early 2000s (2005-2007, to be precise) that I worked on a project which had a customized Waterfall approach.  Builds were delivered to testing in iterations, not only at the end.  Over the course of the project, it adapted to Agile practices, around 2007-2008 as I remember.  Yet the shipment of the product being developed happened once every 3 or 4 months.  This shipment cadence remained the same in Waterfall and in Agile.  When asked why shipment was that slow despite being Agile, the answer was that it was a business decision.

In the initial days of my career, I read this in textbooks on Software Engineering and Software Testing -- "Bugs are more costly to fix when found later."  The same was said by managers to us programmers and testers.  Those were the days when there were not many blogs and posts on the web about Software Testing, Software Development, and practices.  Google had come in just a few years earlier.  It was the period when software was getting into every space of business and daily life.  The pace of delivering software to various business spaces picked up.  Technology platforms emerged, and so did the practices.


Days of Startup Era


In the era of technology start-ups, that is, around 2012 and onwards, the shipment pace for a "working" piece of software was cut from months to a month, and then to a week.  Then the era of programmer-led technology start-ups came in.  The idea of the MVP (minimum viable product) and pitching to investors for series funding came into the news.  If you notice, software was developed earlier and is developed today as well; the difference is the time and how it ships, which is a business call more than a tech call.  I see this as appropriate to remain in business, to stay in competition, and to be relevant to the needs of customers and the market.  That change has to come at a quick pace.

In 2020, this is no different.  In fact, the shipment pace has become much quicker, and it goes in a sprint of 14 days (including non-working days).  You can refer to the train model of backlogs in a sprint for more details.  If the build does not get deployed and rolled out for install, the business will be impacted as before.  Who answers to the investors?

The approach to software development has evolved, and so testing also has to adapt.  If there is a "working" deployment serving customers with the value claimed, the team will be relieved; else, the trouble starts for the team.  I have been here and I know it.  The MVP which claims to deliver the mentioned value has to deliver it.  Whatever one builds or adds to the MVP should continue to deliver the value claimed.  If there are bearable bugs in the shipment, that is still fine, unless they block the value claimed.

I have been in a state where the value claimed by the business and the MVP was not delivered after shipment and deployment.  There are several possible reasons, as multiple pieces of work go into shipping one working software system.  Fixing that in a few minutes and deploying it is expected, besides the cost incurred from it.  It so happens that deployment and rollout go ahead while testing is in progress, or has not yet started, for that version of the software.  In a way, it is a strategy and also a business call.


Ship it first - Mindset


In 2020, it is costliest if not shipped quickly; fixing a few bugs later is okay.  That's a business call and a strategy.  We have to be there on time with the service catered.  In my opinion, as long as the value the MVP and the product as a whole claim to offer is being delivered, all is in control despite the bugs in production.  That said, I'm not ignorant of bugs in production.  I will continue my work of finding them.  I have evolved to adapt my testing and skills to the needs of the business, the market, and the software system we build.

If asked which is right, shipment once in months or once in a sprint (2 weeks, usually), both are right.  I hear a few teams still ship once in 6 months today.  I work with teams which ship in a week and in two weeks.  Not to mention, I have worked where the requirement comes from the business and shipment happens within a few hours.  It is all about where I am.  That also defines the time available for testing.  I'm not sure what today's Software Engineering and Software Testing books say about fixing bugs later.  Fixing later is not a habit but a situational need today when it comes to business.  I wish testers would be technical and understand the business side; that adds much more value to the testing, the team, the business, the organization, the customers, and the product.

To summarize, it is business and market demand that has kept shipment within a sprint.  Prioritizing the blockers (and bugs, known and unknown) for shipment is a skill which a tech team (programmers and testers) has to groom, in my opinion.  I see this adopted in the service industry as well as in tech product organizations.


Web: Debugging the Errors using Browser's Utilities



My curiosity rises when I come across problems that have no first-hand clues.  I get in and start debugging, first looking around at what is available.  Here is one such case, which was mentioned in the Facebook group of The Test Tribe community.  It had a screenshot of a browser (not sure which browser) with a URL and an error message for an untitled tab -- "This webpage was reloaded because a problem occurred."


I can't guess what went on there!  But I can see there must have been a reload of the page.  The reason for it can be anything, but I will take at least two into consideration to start:

  1. Browser crash, and 
  2. DOM failed and paint did not happen (for 'n' reasons, which are not known at the time of witnessing it).

My questions to start looking for insights:

  1. How will I know what actions I made with the browser?
  2. How will I know what was loaded and what was not?
  3. How will I know if there were any errors in loading the web page?
  4. How will I know if there was a crash in the browser? (Assuming this is a rare case, but it cannot be denied; I have witnessed browser crashes when I tested cross-browser compatibility of web applications.)


Typically I prefer browsers in this order when I want to assess event actions and performance -- Chrome, Firefox, and then any other browser.  Note that how event actions and performance are handled differs from browser to browser.

Opening the Chrome URLs below in one tab, while using the web application in another tab, records and collects information that can be useful for debugging.  That way I don't have to make notes of my actions in parallel while I test in this context:

  • chrome://user-actions  (records what actions I'm doing with my open browser window instance)
  • chrome://history   (shows the places I have visited)
  • chrome://net-export  (I use this only when I actually need to debug much deeper; otherwise I do not enable it.  It can influence the browser's performance when enabled.)
  • chrome://crashes   (lists the crashes of the browser)


Make note of what you are collecting in the file when using the net-export utility in Chrome.  Credentials and private data can get recorded in that file as well.  If it is to be shared with someone, know the risk of doing so.  Likewise, in Firefox, referring to the available "about protocol" pages is useful; for example, about:crashes.

To know whether a resource was requested and whether it was loaded or not, the commands below in the console of Chrome and Firefox help:

  • performance.getEntriesByType("resource");
  • performance.getEntriesByType("navigation");
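To make quick sense of what those console calls return, a small helper can flag resources that may not have loaded.  This is a sketch, not a definitive check: the `name`, `duration`, and `transferSize` fields follow the browser's Resource Timing entries, and a `transferSize` of 0 is only a hint (it also occurs for cached resources), so treat the flagged URLs as candidates to investigate.

```javascript
// Sketch: summarize entries from performance.getEntriesByType("resource").
// Run it in the page's console, or feed it a saved copy of the entries.
function summarizeResources(entries) {
  return entries.map(function (e) {
    return {
      url: e.name,
      durationMs: Math.round(e.duration),
      // transferSize 0 usually means "served from cache" -- or that the
      // resource never arrived. Either way, worth a closer look.
      suspect: e.transferSize === 0,
    };
  });
}

// Example with made-up entries, shaped like real timing entries:
var report = summarizeResources([
  { name: "https://example.com/app.js", duration: 120.4, transferSize: 5120 },
  { name: "https://example.com/logo.png", duration: 0, transferSize: 0 },
]);
console.log(report);
```

In a live session, the same call would be `summarizeResources(performance.getEntriesByType("resource"))`.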


I have not found IE giving this detailed information, or I'm not aware of it; if you are aware of it, please share.  I have not debugged much in Safari in recent times.  Usually, my technical heuristic is: if it "looks to work" in Chrome, it "more likely looks to work" in Safari.  One needs to look at the CSS particulars specific to Safari, though.

These are a few things which I keep pre-set up before I test web applications in a browser.  The possibility of getting the required information is high with this setup -- to debug and report bugs with technical details.

In IE, I will debug from the point where the error occurs by following the stack trace details.  When done together with a programmer, it helps both sides very much.  Apart from the above, searching the web, I found several reasons stated for this error, ranging from the cache to an invalid time on the system where the browser is being used.  I will quickly cross-check for those as well, while I have this pre-setup done and collecting the required information.



Wednesday, April 22, 2020

What Structure Does a Test Have?



I was in the audience of a webinar from Manju Maheswar on heuristics.  This webinar led me to a discussion with Klára Jánová on how heuristics help in having a structure for problem solving.  Here are my tweets from that discussion.

There was a question from Klára Jánová -- "Thanks! What structure does a test have?"  In my opinion, this question goes down to the fundamental and philosophical level of testing.  Spoken of casually, the outcome of a test is seen as binary, that is, pass or fail.  Whether that is right or not so right is altogether a different topic of discussion, and I will not get into it.  But that binary is associated as the result of a test, and it has an experience attached to it.  That "experience" attached to the test is the "essence of a test" which I would like to bring out in this post.  Simply put, if a bug is an experience that I encountered in using the product, then a test is an experiment to know what that experience is.  The test exposes a tester, or anyone, to an experience with information as an outcome.  How we act upon this experience and information on witnessing it is what tells the further story.

What we feel out of an experience is the shape or structure we give to it.  That said, a test has a structure to it as an experience.  This experience lets us respond further, rationally, in a structured and organized way, and learn more by debugging.

Apart from the above, there are many more elements that add structure to the thought of an identified test.  The picture shared below gives the gist of the elements which fine-tune a test to be precise, deterministic, practical, and an experiment with a question.



Above all this, a test is a heuristic.  That means an experience is a heuristic as well.  So software is a heuristic offering sets of experiences to its users.  Software has structure in multiple forms.  The functions (methods), classes, packages (modules), and data structures used give the internal structure of software.  How the product is built and interfaced gives the external structure.

Now, if I happen to talk in this philosophical tone, maybe not everyone will take it seriously, is what I believe.  I can understand that, and there is nothing wrong in it.  That's an experience too!  I will have to communicate the way the team I work with communicates; you see, that's a context.  Usually the interest will be in the binary, that is, pass or fail; the artifacts which are by-products of the testing activity (test cases, bug reports, etc.); the tools; and more.  I will have to work in that mode in those contexts.  A function (method) or class written will yield an experience from what it does.  Yet I hope there will be someone among programmers, managers, and software engineering who can relate to the experience part of a test.  When I talk to testing practitioners who speak this language, I talk to them like this.  That's the context of people and practices, again.

Okay, how are the experiences structuring for you, from your encounters with the tests you executed and the code you wrote?



Wednesday, April 15, 2020

HTTP Status Codes and Error Codes: They are not the same!



To put it briefly: I understand HTTP status codes and error codes as two different topics when it comes to APIs.  Most often, the HTTP status code is confused with the error code.  No, it is not the error code!  Then why is it taken that way?  It could be because of 4xx and 5xx, which inform about client errors and server errors; maybe this has led people to assume or take the HTTP status code as the error code.  If you have been taking it that way, fine!  Just don't do it from now on, because it is not right.

The 4xx series of HTTP status codes tell the user (that is, the client) about an error that occurred from the client's input or interaction.  Likewise, the 5xx series of HTTP status codes tell the user about an error which occurred at the server end while processing the client's input or interaction.  For example, HTTP status code 404 in the server's response says the resource requested by the client was not found.  HTTP status code 500 in the server's response says there was an error at the server end in processing the client's input or interaction.  For more details about HTTP status codes, refer here -- https://www.restapitutorial.com/httpstatuscodes.html



Then what is an error code?

Say you were trying to authenticate yourself to the server in a request.  The authentication fails and the server returns HTTP status code 401, which means unauthorized.  When the client receives this HTTP status code, it tells the user about the failure of authentication.

This is not over yet.  Today's microservices are so agile, scalable, and adaptive that, if well implemented, they can tell clearly what went wrong.  Then can't a microservice tell why the authentication failed?  It can, if we implement that.  What was incorrect during the authentication activity?  If this is identifiable and can be stated precisely, it helps the user correct it and attempt to authenticate again, right?  This is where the "error code" comes in handy!

For example, for HTTP status code 401, that is, unauthorized, there can be multiple reasons.  A few reasons, to mention here: an incorrect user account, an incorrect password, an incorrect auth token, etc.  Now, when the server responds with just status code 401, will it help?  Yes it will; but can we derive much more precise help?  Of course we can, and it is by defining an error code and its message for such 401 failures.  Refer to the example below.

Incorrect User Account
    HTTP Status Code: 401
    Error Code: 1001
    Error Message: Invalid user account

Incorrect Password
    HTTP Status Code: 401
    Error Code: 1002
    Error Message: Incorrect password used to authenticate

Incorrect Auth Token
    HTTP Status Code: 401
    Error Code: 1003
    Error Message: Incorrect auth token used in authentication

If you observed above, all the actions yield back a 401 response from the server.  But to tell precisely what happened, the services make use of the defined error codes and error messages.  When the client receives this agreed error code and message in the response, as a contract, it displays the appropriate message to the user.

Further, the client will be programmed with these error codes when processing the response from the microservices.  Based on the HTTP status code and the error code payload received, it acts accordingly.
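As a sketch of that client-side handling, assuming the hypothetical error codes 1001-1003 from the table above (a real client would use whatever codes its contract defines):

```javascript
// Sketch: map the agreed error codes (hypothetical values from the
// example above) to user-facing messages, falling back to the HTTP
// status when the error code is unknown or absent.
var ERROR_MESSAGES = {
  "1001": "Invalid user account",
  "1002": "Incorrect password used to authenticate",
  "1003": "Incorrect auth token used in authentication",
};

function messageForResponse(statusCode, body) {
  if (body && ERROR_MESSAGES[body.code]) {
    return ERROR_MESSAGES[body.code];
  }
  // No recognizable error code: fall back to the status code alone.
  return statusCode === 401 ? "Authentication failed" : "Request failed";
}

// Example payload, shaped like the 401 response shown in this post:
var body = { code: "1003", status_code: 401,
             header: "Unauthorised", message: "Invalid access token used" };
console.log(messageForResponse(401, body));
// "Incorrect auth token used in authentication"
```

Notice that the fallback branch is what a client is reduced to when only the status code is available; the error code is what makes the message precise.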

Here is an example HTTP response with status code 401 and an error code payload:

HTTP/1.1 401 Unauthorized
Content-Type: application/json
Content-Length: 123
Connection: close
Date: Sat, 11 Apr 2012 15:04:31 GMT
Access-Control-Allow-Credentials: true
Access-Control-Allow-Methods: GET, POST, PUT, DELETE, HEAD, OPTIONS, PATCH
Access-Control-Allow-Origin: *
X-Request-Id: Req-87c-96fa-e65e6efcbcde
X-Trans-Id: abcdefghijklmnopqrstuvwxyzn0=
X-Transaction-Id: Txn-41cb7c71-b123-504f-c206-a52d651c

{"code":"1003","status_code":401,"header":"Unauthorised","message":"Invalid access token used"}

The client will look at the HTTP status code in the header and in the payload, along with the error code and message.



What tests can be done here?

Tests for all quality criteria can be done here; they need to be well thought out, modeled, and designed.  Also, an important test that is often missed here is the contract test between the client and the server for the defined error codes.
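A minimal sketch of such a contract check, assuming the JSON shape shown earlier (the field names `code`, `status_code`, `header`, and `message` come from the example payload, and the codes 1001-1003 are the hypothetical ones from the table; a real contract test would run against the live service or a recorded response):

```javascript
// Sketch: verify that an error payload honours the agreed contract --
// every expected field present, the payload's status_code consistent
// with the HTTP status, and the error code one of the defined values.
var KNOWN_ERROR_CODES = ["1001", "1002", "1003"];

function checkErrorContract(httpStatus, body) {
  var problems = [];
  ["code", "status_code", "header", "message"].forEach(function (field) {
    if (!(field in body)) problems.push("missing field: " + field);
  });
  if (body.status_code !== httpStatus) {
    problems.push("status_code in payload disagrees with HTTP status");
  }
  if (KNOWN_ERROR_CODES.indexOf(body.code) === -1) {
    problems.push("unknown error code: " + body.code);
  }
  return problems; // empty array means the contract holds
}

// The example 401 payload from this post passes the check:
var body = { code: "1003", status_code: 401,
             header: "Unauthorised", message: "Invalid access token used" };
console.log(checkErrorContract(401, body)); // []
```

Run against every defined error case, a check like this catches the client and server drifting apart on the contract long before a user sees a wrong message.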