
Friday, February 2, 2024

Deep Link and its Testing via Automation

 

I get these questions consistently from fellow testers and the community.

  1. How to automate mobile apps and web applications using Deep Links?
  2. How to automate business flows using Deep Links?
  3. How to achieve end-to-end testing of business flows using Deep Links?
  4. How to automate scenarios in mobile apps using Deep Links?
  5. What is the best approach to automating mobile apps using Deep Links?
  6. What is the best practice for automating using Deep Links?
And more questions along the same lines.


No Deep Dive Into: What is a Deep Link?


A hyperlink in HTML is a kind of deep link within a website or to another website.

A Deep Link goes by different names on the web, in Android apps, and in iOS apps (for example, app links on Android and universal links on iOS).  All these names carry the same understanding and intent at some point.

Deep Links are URIs that take me directly to a specific part (an activity or a fragment) of the app I'm using or testing.  The Deep Link carries an intent that tells me where I will be taken when I use it.
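Since a Deep Link is just a URI, its parts can be read like any other URI.  A minimal sketch in Python; the myshop:// URI is a hypothetical custom scheme, not from any real app:

    from urllib.parse import urlparse

    # Hypothetical custom-scheme deep link: the scheme tells the OS which app
    # to open; the rest tells the app which screen and which data to show.
    uri = urlparse("myshop://product/42")
    print(uri.scheme)  # myshop
    print(uri.netloc)  # product
    print(uri.path)    # /42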

When we get to diving deep technically into the testing and automation of Deep Links, I will share more insights into their internals.



Deep Link and Challenges


This question comes to me often:
How to do end-to-end testing using Deep Links?
Automating a mobile app using Deep Links poses a challenge that is not experienced in web applications.

One such challenge: say you have not installed the mobile app.  [This is solvable; a sketch follows the list!]
  • On using a Deep Link, I should be taken to the App Store or Play Store, depending on the app.
  • I have to install the app.
    • Post this, in traditional automation, I should start traversing the business workflows via the GUI.
    • Is this adding to the flakiness of automation via the GUI?
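In automation, the not-installed case can be sidestepped before the Deep Link is used.  A minimal sketch, assuming an active Appium Python-client session in `driver`; the package name and APK path are hypothetical:

    # If the app is absent, install it instead of landing on the store page.
    if not driver.is_app_installed("com.example.shop"):
        driver.install_app("/path/to/app.apk")

    # Now the Deep Link can be exercised against the installed app
    # (mobile: deepLink is the UiAutomator2 driver's extension for Android).
    driver.execute_script("mobile: deepLink",
                          {"url": "myshop://product/42",
                           "package": "com.example.shop"})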

When we talk so much about flakiness and how to avoid it (not prevent it), should we exercise business workflows when automating using Deep Links?  What do you think?  Let me know!



Scoping of Automation Using Deep Link


Back to the fundamentals.
  • We have to automate; there is no escape from it.  Let us automate what must be automated!
  • Let us not fall into the trap of "Automate everything!"
    • For today, this is my mindset and attitude.
  • What we automate depends on the objective or goal we want to accomplish.
    • Each test should have a precise and deterministic goal.
      • A test via automation is not exempt from this.
      • A test defined in automation should be precise, deterministic, and have a single objective - the Single Responsibility Principle.

What is the objective of my testing via automation for the Deep Link?  This defines the scope and extent of my automation.  It will minimize the number of checks I do using the Deep Link.

The purpose of a Deep Link is to take me to a specific part of the mobile app.
  • Should I include end-to-end runs, or exercising the whole workflow, in the Deep Link tests?
    • If included, am I not complicating the testing via automation?



Automation using Deep Link

I ask this question of myself and of my team.
What is the goal of testing via automation using Deep Links?

This question helps me pick the minimal and necessary flow actions.  It has led me, and still leads me, to define minimal tests for Deep Links based on what we want to learn from automating them.

To me, the purpose of a Deep Link is not end-to-end testing.  Its purpose is:

Am I taken to the intended state and data when I use the Deep Link?

I have kept the test intent to this.

With this, I have come up with tests that carry the minimal must-have evaluations and assertions to learn whether the app responds to the Deep Link.  This is what the business wants when Deep Links are created.

The app's usage and workflow functioning is not the problem statement of a Deep Link in the general context.

A Deep Link is not for end-to-end.  It is to take you from one point to another point; that's it.
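A minimal sketch of such a test, assuming Appium's UiAutomator2 driver and pytest; the URI, package, and activity names are all hypothetical:

    import pytest
    from appium import webdriver
    from appium.options.android import UiAutomator2Options

    DEEP_LINK = "myshop://product/42"   # hypothetical deep link
    APP_PACKAGE = "com.example.shop"    # hypothetical package

    @pytest.fixture
    def driver():
        options = UiAutomator2Options()
        options.app_package = APP_PACKAGE
        options.app_activity = ".MainActivity"  # hypothetical launch activity
        drv = webdriver.Remote("http://127.0.0.1:4723", options=options)
        yield drv
        drv.quit()

    def test_deep_link_lands_on_intended_screen(driver):
        # Single objective: am I taken to the intended state and data?
        driver.execute_script("mobile: deepLink",
                              {"url": DEEP_LINK, "package": APP_PACKAGE})
        assert driver.current_activity == ".ProductDetailActivity"  # hypothetical

No workflow traversal before or after: the test answers only the question the Deep Link exists to answer.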


Are you automating using Deep Link?



Monday, January 22, 2024

RAAMA: My Test Discovery Model

 

RAAMA -- I Look at You Every Day!


I have tried to put up one of my Test Discovery models here in a conceptual way, under the name RAAMA - Refer to, Arrange, Action, Monitor, and Assert.

Maybe this model helps you and your test engineering team as it is helping me.  Adapt it to your context, adding or subtracting for what you are seeking.

I refer to my RAAMA every day, and whenever I'm testing.  Every day I find new learnings and realizations that I was unaware of earlier.  My understanding of RAAMA is not the same as it was the previous day.

My understanding of this RAAMA is incomplete, and I have made peace with that by accepting it.  My understanding is growing and getting better every day.  I will share a better version of it as I experience it.

Each time I look at RAAMA and refer to it, I see a new dimension to it.  The awareness, the exposure, and the questions keep getting better, giving a better realization of what I was ignorant of and unaware of.  RAAMA is making me a better test engineer today than I was earlier.



[Figure: RAAMA - one of my evolving models for Test Discovery]


Note: I have not explained in detail what I mean by each node and its sub-nodes.  I can talk it through and discuss it with you if you are looking for that; I'm just one email away to get started.
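Still, as a taste of the expansion, here is one possible way to read the five phases as a test skeleton.  This is only an illustration in Python, not a canonical definition of the model; every helper name in it is hypothetical:

    def test_profile_rename_raama():
        # Refer to: the specification, oracle, or data I test against.
        expected = load_expected_profile("fixtures/profile.json")  # hypothetical helper

        # Arrange: bring the system and its data to the starting state.
        session = login_as("test-user")  # hypothetical helper

        # Action: the one behavior this test exercises.
        session.rename_display_name("New Name")

        # Monitor: observe signals around the action (logs, events, metrics).
        events = session.captured_events()  # inspect or attach to the test report

        # Assert: one precise, deterministic objective.
        assert session.profile().display_name == expected["display_name"]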



Friday, November 24, 2023

Test Data is Not [Equal To] a Test

 

Not every release is a critical release!

There are releases that are critical to technical and business interests.  In such critical releases, most of my time is spent on understanding the technicalities of the system and on test data identification while identifying the priority tests.  Identifying and building the test data takes a major chunk of the testing time.  This effort has helped me, my testing, stakeholders, projects, and products immensely.


Test Data != Test

It looks like "Test Data Management" has become a buzzword and a marketing term.  In how "Test Data Management" is pushed, the intent seems to be missing for the real question: how do we identify the tests and their test data?

  • If I cannot think of tests, how can I think of test data?
    • How can I think of tests, the right tests in the context, and the priority tests among them?
      • If I cannot get this, how can I get the test data for all these tests?

Before talking of Test Data Management, we need to talk about the intent and goal of a test.  If I have clarity here, and am aware of the product's technical internals, architecture, and ecosystem, I can think of test data in a better sense.  First of all, any platform that provides a Test Data Management solution has to push for and enforce learning and stating: what is the intent of the test?

When I know what to evaluate and how I should evaluate it, that test intent leads me to the test data and its dimensions, which should vector the tests.

Test Data is not [equal to] a Test.  Test data is one of the variances; in fact, it is a multi-dimensional variance that a test will [need to] have.  While a test has one deterministic objective, the test data has multi-dimensional vectors that instrument the test.
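A parametrized test shows this separation concretely: one deterministic objective, many data vectors instrumenting it.  A minimal pytest sketch; apply_discount and the values are hypothetical:

    import pytest

    # One test intent: a discounted price must never go below zero.
    # The (price, discount) pairs are the multi-dimensional test data vectors.
    @pytest.mark.parametrize("price, discount", [
        (100.00, 0.10),  # typical case
        (100.00, 1.00),  # full-discount boundary
        (0.01,   0.99),  # smallest price, near-full discount
    ])
    def test_discounted_price_is_never_negative(price, discount):
        assert apply_discount(price, discount) >= 0  # hypothetical function under test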

Test data has its role in a test, and it is a critical role.  Hence, we practicing test engineers talk about and spend time on test data with the same importance we give to identifying, designing, and executing tests.

Test data is not a test, but it is an expression of the test's intent.  Test data is one of the byproducts of a test.  Having this clarity is important.

Note this: test data is not the only expression of the test's intent.  A test has multiple touch points; these touch points express, advocate, and can show the intent of a test.  Test data is a heuristic, just as a test is.

 

Test Data Management

The word "management" is underrated and not attempted to understand from context where it is being spoken.  How do this word sound -- "Test Data Leadership"?

In engineering, especially in software engineering, the word "management" inherently speaks primarily of design.  In day-to-day operation, management designs its strategy and approach to solve problems and challenges.

When we say "Test Data Management" in software engineering, it is about strategizing and approaching the problem of identifying and categorizing (into subsets) the data to test with.  It is a kind of leadership at one layer.  It has its role and is critical to the deterministic outcome of a test.

In machine learning, we see this categorization: training data and test data.  Then, is training data not test data?  It is!  The training data is designed [managed] to train, and this data also surfaces problems as the training progresses.
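A familiar instance of this designed categorization, sketched with scikit-learn; the split ratio here is arbitrary:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split

    # The split itself is a data-design [management] decision: which samples
    # teach the model, and which are held back to question it.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42, stratify=y)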

The data that is identified, designed, categorized, and collected as test data is well-sampled data in the context, for the intent of a test.

To conclude, Test Data Management is one of the correlates in software engineering for solving a problem; it is contextual in how and what tests and testing are executed.  Different teams and organizations have their own ways of managing test data.  How they do it is their way of doing it.  We have to look at Test Data Management from the point of view of test engineering, and not as another buzzword sold merely as a product or a business's service.



Sunday, November 19, 2023

MVQT: Testing and Tests with an MVP's Perspective


I was leading multiple teams and their deliveries in a testing services company.  Then I came up with this thought: like the MVP, I also have MVT (Minimum Viable Tests) for an MVP.

Further, I expanded this thought in my day-to-day practice, tailoring it to different contexts.  I observe that it applies well across different contexts when I tailor it to them.  After experimenting with it for 10 years, I'm sharing it as a blog post.


What is an MVP?

I take this from Eric Ries.  It looks simple and precise to me.

The Minimum Viable Product is that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort.

I see that this technique [and concept] can be applied to anything I'm developing.  As a test engineer, I develop tests and test code as a major part of my testing.  On applying the idea of the MVP to my testing and deliveries, I see the value and the results.

Read this blog post of mine to know who a developer is.


Testing, Tests, MVP and MVQT

In software test engineering, I translate the MVP into the Minimum Viable Questioning Tests.


The Minimum Viable 'Q' Tests (MVQT) for a focused area of a feature [or for a feature]:

  • Help me identify the priority tests that should be executed first
  • Allow me to learn, on priority, the information that matters critically to the product and stakeholders
    • So that an informed decision can be made.


The Q in MVQT stands for "questioning".  I read it as Minimum Viable Questioning Tests.  I also see the "Q" as a placeholder for a quality criterion.  That is, MVFT means Minimum Viable Questioning Functional Tests for a feature or a workflow.




The MVQT are key to knowing:

  • Have I identified and designed the priority tests?  How do I know that I have got them?
  • Did stakeholders get the information they wanted to know on priority?
  • Did the MVQT help me
    • Explore and know what I wanted to know about a feature or a workflow?
      • How fast was I in knowing and learning this?
      • Did I develop my tests incrementally?  If not, is it an MVQT?
  • Did the MVQT help me know
    • Whether I am aligned and in sync with the expectations of my stakeholders and of the customers using the software product I'm testing and automating?
  • Did the MVQT help me
    • Collect the critical information, in the given context, for the scope of testing and automation?
    • Do the learning and outcomes from this MVQT help reinforce the validated learning about customers?
  • Do the MVQT results support the outcome of unit testing?

The tests in an MVQT have to be consistently revised and evaluated to keep them an MVQT.  Note this: not all tests are MVQT.  If the number of MVQT for a part of a feature, or for a feature, keeps growing, it is time to rethink what MVQT means for you.

The "minimum" tests are highly effective and it helps me learn and test better technically and socially.



MVQT and Testing

  • Sanity or Smoke Tests
    • The set of MVQT that helps me learn whether the build can be taken into further testing
  • MVFT - Minimum Viable Questioning Functional Tests
    • Apply this to a feature, a workflow, or any part that can be evaluated with minimum tests for its functionality
      • To learn whether it aligns with the validated learning of customers [stakeholders]
  • MVPT - Minimum Viable Questioning Performance Tests
  • MVUT - Minimum Viable Questioning Usability Tests
  • MVAT - Minimum Viable Questioning Accessibility Tests
  • MVTxT - Minimum Viable Questioning Tester's Experience Tests
  • MVST - Minimum Viable Questioning Security Tests
  • MVAF - Minimum Viable Questioning Automation to a Feature
  • MVLT - Minimum Viable Questioning Localization Tests
  • MVUIT - Minimum Viable Questioning UI Tests

Add more of these for your own list and context.

In a way, MVQT should ask for and look at testability, automatability, and observability.  If this is not happening, there is no way of saying I have got my MVQT.

More importantly, in the CI/CD ecosystem, MVQT plays a major role.  If I want my tests in the CI/CD pipeline, MVQT is the way, since it focuses on a targeted area to evaluate.  Otherwise it is hard, impractical, and at times impossible to test in a CI/CD ecosystem while delivering continuously.
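One way to keep an MVQT subset runnable on every build is to tag it and run only that tag in the pipeline.  A minimal pytest sketch; the marker name "mvqt" is my own convention, and the test body is hypothetical:

    import pytest

    # "mvqt" would be registered in pytest.ini:
    #   [pytest]
    #   markers =
    #       mvqt: minimum viable questioning tests, run on every build
    @pytest.mark.mvqt
    def test_checkout_total_matches_cart():
        cart = make_cart(items=[("book", 12.50), ("pen", 1.25)])  # hypothetical helper
        assert checkout_total(cart) == 13.75                      # hypothetical function

The pipeline then runs only this subset: pytest -m mvqt.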


Ask and Review for MVQT

Ask for the MVQT when you review these:

  • test strategy, test framing, test design, test ideas, test cases, test plan, test architecture, test engineering, testing center of excellence, and test code

For example:

  • What are the minimum viable questioning performance tests that you have for testing this feature?
  • What are the minimum viable questioning performance tests that you have for testing this workflow?
  • What are the minimum viable questioning security tests that you have for testing this feature?
  • What are the minimum viable questioning GUI tests that you have for testing this feature?
  • What are the minimum viable questioning contract tests that you have for testing this endpoint?
  • Likewise: what are the minimum viable questioning automation tests that you have for testing this feature?

Ask how these tests qualify as MVQT in the given context of testing and automation.

This should help you see how effective the test strategy is in a given context.

Importantly, the MVQT and its effectiveness are themselves a means of testability for testing your tests.



The Credit is to Me

I'm not sure whether the idea I'm sharing in this blog post is practiced by other test engineers.  I have not seen it discussed in public forums.  I have not come across it in my awareness or in the exposure I have put myself through.

Hence, I will take the credit for it.  Giving credit honestly is not a common sight or practice.  I have not got my due credit when others use the ideas, thoughts, and work I have come up with.

So, I make this an open letter and call out that the credit for this idea, thought, concept, and practice goes to me when you hear it, use it, and practice it.