Today's software is not like the software and hardware I saw between 2006 and 2010. The software we build today integrates SDKs from third-party vendors. It is also no longer the standalone desktop application of the 2000s; it runs on desktop and mobile, connects to services everywhere, and serves the user from anywhere.
Recently, I was testing a product in its initial MVP state. It integrates four SDKs from different software vendors. One of these SDKs showed an unusual behaviour, and the engineering team I was part of raised a support request with that SDK's vendor. The behaviour I observed in the product was erratic at first, and over a period of a week it settled into a stable pattern. When I asked about it, I was told it was due to the SDK.
Here is my first miss. I took those words at face value, having got used to this behaviour over two weeks with no technical solution in sight. I did not question it enough or debug around it, as my testing time was also crunched. Another tester and I owned the testing for the release. Stretching through late nights and weekends drove me to accept the word that it was a technical limitation and nothing could be done. Here is my second miss -- I accepted that the behaviour was due to the SDK because it was consistent with my previous experience of that SDK and its syncing.
The story turned out differently in production. A user reported the same behaviour, and I replied confidently that it was a known behaviour. My programmer friends held the same opinion. These programmer friends are highly skilled and know their job very well. But we were all fooled by the software, under the pressure that kept building upon us, and by our experience of talking with that SDK's team.
I took up that behaviour for investigation again today, after hearing from production and from a techie friend who said, "it should not happen for so long". Yes, it should not happen for so long; I too expected it to be gone once the sync was done.
I started questioning myself and doubting what I had learned. I isolated the environments for my observations and started watching my actions, the requests, the responses, the payloads, the race conditions, the served and unserved data streams, the logs, and eventually the breakpoints and values in the code.
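For the requests and responses part of this, one concrete move is to hang a logging interceptor on the HTTP client, so every call, its payload size, and its timing land in the logs for comparison across the isolated environments. A minimal sketch, assuming a Kotlin client using OkHttp (the interceptor and its names are mine, not the product's):

```kotlin
import okhttp3.Interceptor
import okhttp3.OkHttpClient
import okhttp3.Response

// Hypothetical interceptor: logs every request and response passing
// through the client, so traffic can be compared across isolated runs.
class TrafficWatchInterceptor : Interceptor {
    override fun intercept(chain: Interceptor.Chain): Response {
        val request = chain.request()
        println("--> ${request.method} ${request.url}")
        request.body?.let { println("    payload: ${it.contentLength()} bytes") }

        val response = chain.proceed(request)
        val elapsed = response.receivedResponseAtMillis - response.sentRequestAtMillis
        println("<-- ${response.code} ${request.url} ($elapsed ms)")
        return response
    }
}

// Attached to the client used in the isolated observation build.
val client = OkHttpClient.Builder()
    .addInterceptor(TrafficWatchInterceptor())
    .build()
```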
I was wrong! Yes, I was wrong in what I had said confidently to every stakeholder, right from the CEO and CTO to all the other business stakeholders. The engineering team, too, had held the opinion I had. It was time to go back and tell them, "I was wrong in what I communicated about the behaviour of the client software and the rationale behind it".
I gave the reason why it happens, when it will happen, when it will not happen, the impact of the behaviour, the workaround for it in production, whether it depends on anything else still pending, and why it was a technical miss from our end. It was not a problem in the third party's SDK. It was a UI element that got triggered each time the activity got triggered.
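To make the shape of that miss concrete: a view wired to the activity lifecycle reappears on every trigger of that activity, whatever the SDK has or has not finished syncing. A minimal sketch of the pattern, with hypothetical names and ids (not the product's actual code):

```kotlin
import android.app.Activity
import android.os.Bundle
import android.view.View

// Hypothetical activity: the "syncing" banner is shown on every resume,
// so it keeps reappearing even after the SDK has long finished its sync.
class HomeActivity : Activity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_home) // hypothetical layout
    }

    override fun onResume() {
        super.onResume()
        // The miss: this runs on every activity trigger,
        // not on the actual sync state.
        findViewById<View>(R.id.sync_banner).visibility = View.VISIBLE
    }
}
```

Nothing in the SDK drives such a banner; the client shows it unconditionally, which is why it can look like an SDK sync that "should not happen for so long".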
Though I still have a lot to investigate for other problems in using that SDK, this behaviour is not due to that SDK. The SDK did its job well in this case.
What do I want to say here?
- No matter how confident we are in the code and in the tests, we can be fooled, and we are fooled, every time
- There is no harm in doubting ourselves a second time over -- what we have heard; what we have ignored; what we have learned; what we have not carried through technically while we observe the behaviour consistently in our tests
- Being non-technical in our observations and work helps a lot
- Knowing the benefits of not thinking technically every time
- Giving time to ourselves so we can sit back and investigate the behaviour which everyone claims is due to "that"
  In this case, I see no impact to the user in terms of data or product performance, but the experience is definitely annoying.
- If it is due to that, you get to learn about it, so you can test it better
- If it is not due to that, then you get to learn it is not so, figure out the cause, and help the team in fixing it
So I learn again that I should build the tests technically -- tests which put the product under test for the experience of each feature; each UI; each action of the user on a UI; each interaction with the backend and its outcome on the client; and which evaluate the outcome of the same.
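As one way of putting the experience itself under test, a UI-level check can assert on what the user sees rather than only on the data underneath. A minimal Espresso sketch, reusing the hypothetical HomeActivity and view ids from above (a real test would also need an idling resource to wait out the asynchronous sync):

```kotlin
import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.action.ViewActions.click
import androidx.test.espresso.assertion.ViewAssertions.matches
import androidx.test.espresso.matcher.ViewMatchers.isDisplayed
import androidx.test.espresso.matcher.ViewMatchers.withId
import androidx.test.ext.junit.rules.ActivityScenarioRule
import org.hamcrest.CoreMatchers.not
import org.junit.Rule
import org.junit.Test

// Hypothetical UI test: drive one user action on the activity, then
// evaluate the outcome as the user experiences it.
class SyncBannerTest {

    @get:Rule
    val scenario = ActivityScenarioRule(HomeActivity::class.java)

    @Test
    fun bannerGoesAwayAfterSyncCompletes() {
        // A user action on the UI (ids are hypothetical).
        onView(withId(R.id.refresh_button)).perform(click())

        // Once the backend interaction is done, the banner should be gone.
        onView(withId(R.id.sync_banner)).check(matches(not(isDisplayed())))
    }
}
```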
How will I do this? I can do it sitting in person, and also with the help of automation. But the point is: at what scale do my test samplings have to be for me to experience the behaviour and investigate further? This takes time to learn, and it won't happen in the crunch time of a release. Having said this, it is not bad to go and talk with the stakeholders and buy that time, quoting the priority and impact of the behaviour. We testers will have to take the initiative and assist the stakeholders in learning about it when there is a need.