
Tuesday, May 20, 2025

Are We Not Innovating, Then?

 

This last Sunday, I met a friend.  In our conversation, he brought up a topic he had been discussing with a colleague.

He said there is no innovation happening in Software Engineering.  Instead, he sees app development catering business services to people.  That is, in other engineering industries, innovation happens along with research and development.  But he did not see anything like that happening in our Software Engineering community.

I asked him, "What makes you not to see the Hackathons [being projected] as an innovation initiative followed up with further research and development?"

He did not see the hackathons as a space for innovation.  His experience is that the organization hosts a Hackathon for two or three days in a year.

Further, he said, "We are asked to participate in the Hackathon and also to work on stories in parallel.  If the stories are not worked in these two or three days, it will be escalated.  And, the ideas and outcome of the hackathon are left as is.  Hardly one or two ideas are picked for a proof of concept work in the project."   

I feel that maybe what he is saying makes sense, as I have witnessed it myself.


Are We Not Innovating, Then?

I see that there are innovations happening in Software Engineering.  But these innovations are not something that common people can consume directly.  Instead, these innovations are consumed by them indirectly.

The innovations that I see and test are not B2C.  Sometimes, they are indirectly B2B via D2D.

Yes, D2D -- Developer to Developer.  I'm not sure if the term Developer to Developer exists.  This is what I said to him -- D2D.  Maybe for this reason, the innovation and problem solving happening in software engineering do not come into discussion and the spotlight.

As an innovation byproduct in software engineering, there are frameworks, libraries and artefacts developed by the developers of an organization [and communities].  These are consumed by other organizations' developers in their projects.  As an outcome, a solution gets built [using an innovation] and delivered by a business, and that is what common people consume.

Maintaining these frameworks, libraries and artefacts outside payroll job time is a challenge for any developer.  But some do it, beating all the hurdles they experience.  There are further challenges when it comes to maintaining such projects by collaborating with software engineers from the communities.

Most of my research and development outcomes in the Test Engineering area are consumables for us developers and not for common people.  And they are not known to all developers.  When I say developers, it includes programmers, test engineers, DevOps and product teams.


To conclude here for now: the software project you are working on might also be consuming an innovation that you are not aware of.  Talk to your programmers, test engineers and teams.



Tuesday, November 28, 2023

Behind Every Test Data, There is a ?!

 

Read this blog post to get a perspective on Test Data and Test Data Management.  The point is, if I'm not aware of a test and what it tells me to explore, I cannot think of its Test Data.

That said, if I know what I should be evaluating as part of performance -- why, when and how -- this helps me come up with thoughts for identifying the tests and their test data.

The ninth question from season two of 100 Days of Skilled Testing is:

What role does data management play in performance testing, and how do you ensure the availability of suitable test data sets?


Testing and "Ensure"

We test and have tests in testing because there is no "sure" or "ensure" in software.  Instead, we presume, on a rational basis, that "if these are so, then this is so" in a given context when the software processes.

Now, ask yourself, how can we ensure the availability of suitable test data sets?

In my opinion, Test Data is often misunderstood.  This is the primary problem, and it should be the first problem raised when asked, "What are the challenges in creating the test data?"

When you read the concluding lines of this blog post, you will learn why I say this.


Test Data and Immunity

In my opinion and experience practicing Test Engineering, I see that Test Data should be like a viral strain, and it should have its variants.  When this test data is used to test [experiment, investigate, and debug], how do the software and its ecosystem respond?

  • Is the software and its ecosystem immune to this test data?
    • Does it exhibit any risks and problems?
      • If yes, is the purpose of my testing and automation accomplished with this test data?
This puts me back to the question: what is the purpose [intent] of my test?  It drives me to derive the test data that helps me know -- what am I supposed to learn, and on what priority?  With this, I get an idea of what kind of test data I should be creating, knowing its pattern.

If the system is immune to the Test Data and it is not revealing anything new in the context, I classify this pattern of test data as "Immune" to the context.

In my practice and research work in Test Engineering and Software Testing, to start, I categorize Test Data into two areas.
  1. Immune
  2. Not Immune
Further, I have categories under these two, where I classify the Test Data deterministically for the context.   Get in touch if you want to learn more about this.  I'm just one ping away!

The tests should help me evaluate for immunity and also for non-immunity; both are essential and necessary.

The credit for such a classification of Test Data is to me.  It is my research work coming out of my practice.

Note that Test Data is not just the input [characters or files] entered or given to a system.  Test Data has its association with tech stacks, infrastructure, ecosystems, business workflows and people.  To craft such Test Data, one has to have an understanding of the system and its internals, and of the problem it solves by knowing how it solves it.
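
To give one small, hedged illustration of the Immune / Not Immune idea above, here is a minimal Python sketch of how such a classification could be recorded per context.  The field names and the example values are my own assumptions for the sketch, not the actual sub-categories of this classification.

from dataclasses import dataclass
from enum import Enum

class Immunity(Enum):
    IMMUNE = "immune"          # the system reveals nothing new for this data in this context
    NOT_IMMUNE = "not_immune"  # the system reveals a risk, problem or new behaviour

@dataclass
class TestDataRecord:
    """One piece of test data and what it revealed in a given context (illustrative)."""
    payload: str              # the input, file reference, or data pattern used
    context: str              # workflow / environment in which it was exercised
    observation: str          # what the system and its ecosystem did in response
    classification: Immunity

# Example: tagging a data pattern after a test session (values are hypothetical)
record = TestDataRecord(
    payload="10MB CSV with mixed encodings",
    context="bulk-import workflow on staging",
    observation="import stalled at 8MB; retry loop observed",
    classification=Immunity.NOT_IMMUNE,
)
print(record.classification.value)

Keeping such records per context is one possible way to see which test data patterns have gone "Immune" and need to be varied again.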



Performance Testing and Test Data

  1. What is that I'm testing as part of performance?
  2. What do I want to evaluate in the name of performance?
  3. What part of the system is evaluated for its performance?
    • Should I evaluate this in isolation or as a wholeness of the system?
  4. What domain knowledge and information should I have when testing for performance?
  5. What details of the system's architecture and internals should I understand and be aware of to test for performance?
  6. Is this the first delivery?  Or, do we have this system running in the production?
    • If it is first delivery,
      • How will I create the test data to suit the consumers of this application?
      • What are the key workflows of business that we should be evaluating for its performance?
      • Do all workflows and sub-systems need the evaluation for performance, and on priority?
      • How do I map the fragmentation of users and their data [with its patterns]?
      • What are the infrastructure and ecosystem characteristics that should be part of the test data identified?
      • Does caching have any effect if the same pattern of data is used?
    • If it is a running version in production
      • Can I refer to the DB to figure out the pattern for the particular workflow that I'm evaluating?
      • How can I match the test data to have the production data's characteristics and attributes?
  7. What is the backup strategy for the Test Data?
    • How do I version control the Test Data?
    • Which version of the Test Data should I be using?
  8. What is the threshold I'm targeting with Test Data?
    • What should be the size of the data in the DB when I perform the I/O and read/write operations?
    • What should be the network capability when I perform the I/O and read/write operations?
    • What should be the hardware capability when I perform the I/O and read/write operations?
    • What should be the geographical traffic and its pattern when I perform the I/O and read/write operations?
    • More such factors will be considered when identifying and deriving the test data.
  9. What is the client error yielding Test Data that I should have for the workflow?
  10. What is the server error yielding Test Data that I should have for the workflow?
  11. What is the redirection yielding Test Data that I should have for the workflow?
  12. What is the no-response and no-change Test Data that I should have for the workflow?
And more.  It is simple; get in touch to discuss and learn beyond what is listed.
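
As a loose illustration of how some of these answers can be captured, here is a minimal Python sketch of a test data profile for one workflow.  Every field and value here is a hypothetical assumption; the point is only that the factors above become explicit, versionable data rather than tribal knowledge.

from dataclasses import dataclass, field

@dataclass
class PerformanceTestDataProfile:
    """Hypothetical profile pinning down the test data factors for one workflow."""
    workflow: str                      # business workflow under evaluation
    data_version: str                  # which version of the test data set is used
    db_rows: int                       # target size of data in the DB during I/O and read/write operations
    network_profile: str               # latency/bandwidth shape assumed for the run
    geo_traffic_mix: dict = field(default_factory=dict)        # region -> share of traffic
    error_yielding_inputs: list = field(default_factory=list)  # inputs expected to yield 4xx/5xx/redirects

checkout_profile = PerformanceTestDataProfile(
    workflow="checkout",
    data_version="v2.3",
    db_rows=5_000_000,
    network_profile="100ms RTT, 20Mbps",
    geo_traffic_mix={"IN": 0.6, "EU": 0.3, "US": 0.1},
    error_yielding_inputs=["expired-card-token", "oversized-cart-payload"],
)
print(checkout_profile.workflow, checkout_profile.data_version)

Such a profile can then be version controlled and backed up along with the Test Data it describes.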



To conclude and stop here: all these questions do not ensure or assure or make sure that I will have test data for evaluating a characteristic of performance.
  • They help me know:
    • What tests should I be doing?
    • What kind of preparation should I have in my practice to create the Test Data for these tests?

The Test Data should challenge the available Testability and its limits.  If it is not doing that, then we do have test data, no doubt about it; but it is shallow. Shallow!?

One has to ask oneself, "Is this sufficient and effective Test Data for the system [and workflow] I'm testing?"

The Test Data should drive the engineering team to add more layers of Testability into the system.




Sunday, November 19, 2023

Waterfall or Agile: Testing for Performance - Where to Start?

 

Do you understand Agile?  I have shared my understanding here; give it a read.

The eighth question from season two of 100 Days of Skilled Testing is:

Can you share some best practices for conducting performance tests within an Agile development environment?


Best Practices and the Agile


The irony is, Agile says there is no best practice.  It asks you to tailor and fit the practice to the context so that value is delivered continuously and consistently while upholding Agile's principles.

Yet, we talk about best practices in Agile's context, like the eighth question asked here.

What is the effective way to test in the continuous delivery?

As a test engineer, how can I start thinking about and testing for performance from the inception of a feature's thought?  I see that it is not hard to do so.  As you read further in this post, you will have a perspective and the awareness to do it.


Performance in Waterfall and Agile

I have learned that performance is an experience.  It does not differ because of Waterfall or Agile.  If the performance is not a pleasing experience, it will impact stakeholders whether it is Waterfall or Agile.

But the questions when evaluating for performance are -- where to start, when to start, how to start, and with what to start?

As of today, I do not see differences in the mindset and skills one has to have for performance testing in Waterfall and Agile.  The approach could differ in certain phases; otherwise, I see the same in both practices.

I will rephrase the eighth question to this:
What is your practice to evaluate performance right from the start of product development in your project?
I do not want to wait to hear -- the development is completed and deployed; now we can start running the performance tests.

What can I do as part of performance tests from the first day of development and the first commit?  This is my intent and the area I look at while strategizing the testing and tests.



The Culture of Engineering

At the start and end of the day, when we developers start and finish the work,

  • How the work is done and why is defined by the engineering culture practiced by that organization.
    • The Performance Engineering of the software products and solutions being built will be driven by the culture practiced.

The Test Engineering and how we test and automate will be driven by the culture of engineering practiced in the organization.

Writing code not just for building the functionality, but also for performance, is a culture-driven factor.  The organization's engineering practice culture drives it!



Testing for Performance - Where to Start?


I'm sharing the research work that I'm doing and experimenting with on performance engineering and performance tests.  I'm seeing the results and value out of it, and so are the stakeholders.

Today, we are getting skilled at exploring and testing without a requirement document and SLAs in hand.  Aren't we?

I use my MVPT to figure out the minimum performance tests for the feature.  As part of this, I explore with the help of available aids to evaluate the performance.

To start, I use these questions to figure out the performance tests:
  • What are the minimum viable questioning performance tests that you have got to test this feature?
  • What are the minimum viable questioning performance tests that you have got to test this workflow?


Unit Tests for Time and Space Complexity


I work closely with programmers to gather the information below when the code for the feature is committed, as part of Unit Tests.
  • The execution time taken by the code of that feature - the Big O notation for space and time complexity
    • Usually the Unit Tests focus on functional tests and clean code practice
    • But when we, the test team, ask and push for performance data, this can come as part of Unit Tests
      • An architect or a principal engineer can set an expectation on
        • What should be the time and space complexity of the code for a feature?
          • Each function and block needs to be evaluated on this
          • As said earlier, this depends on the engineering practice culture of an organization
            • If the culture wants it, it will be there; else just the functional code will be delivered and not the performant code
      • If the time and space complexity analysis outcome is not as expected, the code written has to be rethought and refactored
        • The review process needs to push it back
        • The comment, with data, has to be published
          • This will be useful for modeling the performance tests by the test engineers who will work on performance tests
      • Doesn't it look like an effective, useful practice as part of Performance Engineering right at the early stage?
        • This is very well applicable to projects running on Agile or Waterfall

Do you have this in your project and in the Unit Tests being written?

The time and space complexity questions should not be confined just to SDET [test engineer] interviews.  A test engineer has to ask for this and apply it in her or his day-to-day work.
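
To make this concrete, here is a minimal, hedged sketch of what such a unit-level performance check could look like in Python.  The function under test, the input sizes and the growth threshold are all illustrative assumptions, not a prescription; the idea is only that complexity expectations become an executable check alongside functional unit tests.

import timeit
import unittest

def find_duplicates(items):
    """Hypothetical feature code under test; expected to be roughly O(n)."""
    seen, dupes = set(), set()
    for item in items:
        if item in seen:
            dupes.add(item)
        seen.add(item)
    return dupes

class TestFindDuplicatesPerformance(unittest.TestCase):
    def test_growth_is_roughly_linear(self):
        # Time the function at two input sizes and compare the growth ratio.
        small = list(range(10_000)) * 2
        large = list(range(100_000)) * 2
        t_small = timeit.timeit(lambda: find_duplicates(small), number=5)
        t_large = timeit.timeit(lambda: find_duplicates(large), number=5)
        # For a 10x larger input, an O(n) implementation should not blow up
        # quadratically; the factor of 20 leaves generous headroom for noise.
        self.assertLess(t_large / t_small, 20)

if __name__ == "__main__":
    unittest.main()

Timing checks like this are noisy on shared CI hardware, so the threshold is deliberately loose; the published numbers are often more valuable as data for modeling later performance tests than as a hard gate.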


Profiling Tests by Test Engineers


We testers often do not get into the product's code analysis.  We have to build the skill to run profiling on the product's code and analyze the resource data.
  • Test Engineers can test the feature's code with the help of the IDE's profiling (runtime analysis) and collect performance data, identifying the performance bottlenecks
    • This runtime analysis can profile for
      • Memory snapshots
      • Thread analysis
      • Monitoring resources
      • CPU and allocation profiling
      • And, more
      • The problems and risks can be reported upon analysis
    • Compare the performance data of two different solution approaches
This information tells and indicates where the risk and problem will be when we deploy the code.  In my opinion, this is useful information for modeling further performance tests.  It is first-hand information, which is very powerful, before we start using any other performance testing strategies and tools to aid the tests.
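
As a hedged sketch of the same idea outside the IDE, this kind of first-hand data can also be collected with Python's built-in profilers; the function profiled here is a hypothetical stand-in for the real feature code.

import cProfile
import io
import pstats
import tracemalloc

def feature_under_test():
    """Hypothetical feature code standing in for the real product function."""
    data = [str(i) * 10 for i in range(50_000)]
    return sorted(data, key=len)

# CPU profile: where the time goes, call by call.
profiler = cProfile.Profile()
profiler.enable()
feature_under_test()
profiler.disable()
report = io.StringIO()
pstats.Stats(profiler, stream=report).sort_stats("cumulative").print_stats(10)
print(report.getvalue())

# Memory snapshot: which lines allocate the most.
tracemalloc.start()
feature_under_test()
snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:5]:
    print(stat)
tracemalloc.stop()

Comparing such reports for two candidate implementations of the same feature gives the "two different solution approaches" comparison mentioned above.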



Get Started with Performance Engineering and Tests


These capabilities are available in the IDE.  We think of performance testing tools and ask how to test for performance.  To be precise, we test developers (test engineers) should change our mindset and make this shift first.  If not, as I say, we will first be a bottleneck to ourselves.  Did you know about this way of testing for performance?  Why not introduce it in your project and organization?

Seen this way, these test practices can be used right from the day we commit the feature's code.  This is a place to start for the performance tests.  This will be a differentiator together with MVPT, and it guides the MVPT to design effective performance tests in the context.

I do not say these are best practices; there are no best practices.  But this is a useful practice when the organization and stakeholders ask for it.  Let your organization and stakeholders know how well you can test for performance right from the first commit of the product's code.

To stop and end here,
  1. Do not just test for functionality from day one; also test for performance from day one.
  2. Influence your organization's engineering culture and developers to build not just functional code, but also performant code.




MVQT: The Testing and Tests with an MVP's Perspective


I was leading multiple teams and their deliveries in a testing services company.  Then I came up with this thought -- like MVP, I also have MVT (Minimum Viable Tests) for an MVP.

Further, I expanded this thought in my day-to-day practice, tailoring it to different contexts.  I'm observing that it applies well to different contexts when I tailor it to them.  After experimenting with it for 10 years, I'm sharing it as a blog post.


What is an MVP?

I take this from Eric Ries.  It looks simple and precise to me.

The Minimum Viable Product is that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort.

I see that this technique [and concept] can be applied to anything I'm developing.  As a test engineer, I mainly develop tests and test code as part of my testing.  On applying the idea of MVP to my testing and deliveries, I see the value and results.

Read this blog post of mine to know who a developer is.


Testing, Tests, MVP and MVQT

In software test engineering, I see the MVP as Minimum Viable Questioning Tests.


The Minimum Viable 'Q' Tests (MVQT) for a focused area of a feature [or for a feature]

  • Helps me identify the priority tests that should be executed first
  • Allows me to learn, on priority, the information which matters critically to the product and stakeholders
    • So that an informed decision can be made.


The Q in MVQT stands for "questioning".  I read it as Minimum Viable Questioning Tests.  I also see the "Q" as a placeholder for a Quality Criterion.  That is, MVFT means Minimum Viable Questioning Functional Tests for a feature or a workflow.




The MVQT are key to knowing:

  • Have I identified and designed the priority tests?  How do I know that I have got them?
  • Did stakeholders get the information which they wanted to know on priority?
  • Did MVQT help me to
    • Explore and know what I wanted to know about a feature or a workflow?
      • How fast was I in knowing and learning this?
      • How did I develop my tests incrementally?  Did I?  If not, is it an MVQT?
  • Did MVQT help me to know
    • Am I aligned and in sync with the expectations of my stakeholders and the customers who are using the software product I'm testing and automating?
  • Did the MVQT help me
    • In collecting the critical information in a given context for the scope of testing and automation?
    • Do the learning and outcome from this MVQT help to reinforce the validated learning about the customer?
  • Do the MVQT results support the outcome of the Unit Testing results?

The tests in the MVQT have to be consistently revised and evaluated to keep them an MVQT.  Note this: not all tests are MVQT.  If the number of MVQT for a part of a feature, or for a feature, keeps growing, it is time to rethink what MVQT means for you.

The "minimum" tests are highly effective and it helps me learn and test better technically and socially.



MVQT and Testing

  • Sanity or Smoke Tests
    • The set of MVQT which helps me learn whether the build can be taken up for further testing
  • MVFT - Minimum Viable Questioning Functional Tests
    • Apply this to a feature or a workflow or to that part which can be evaluated with minimum tests for its functionality
      • To check whether this is aligning with the validated learning of the customer [stakeholders]
  • MVPT - Minimum Viable Questioning Performance Tests
  • MVUT - Minimum Viable Questioning Usability Tests
  • MVAT - Minimum Viable Questioning Accessibility Tests
  • MVTxT - Minimum Viable Questioning Tester's Experience Tests
  • MVST - Minimum Viable Questioning Security Tests
  • MVAF - Minimum Viable Questioning Automation to a Feature
  • MVLT - Minimum Viable Questioning Localization Tests
  • MVUIT - Minimum Viable Questioning UI Tests

You can add more of these to your list and context.

In a way, MVQT should ask and look for testability, automatability and observability.  If this is not happening, then there is no possibility of saying "I have got my MVQT".

More importantly, in the CI-CD ecosystem, MVQT plays a major role.  If I should have my tests in the CI-CD pipeline, then MVQT is the way, as it focuses on a targeted area to evaluate.  Else, it is hard, impractical and not possible to test in a CI-CD ecosystem while delivering continuously.
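
As one hedged illustration of how this could look in a pipeline, a team could tag its MVQT so that only that minimal, prioritized set runs on every commit, with the wider suite running less frequently.  The marker names, the stand-in function and the test values below are assumptions for the sketch, not an established convention.

# test_checkout.py -- a sketch of tagging MVQT with pytest markers
# (register 'mvqt' and 'extended' under 'markers' in pytest.ini to avoid warnings)
import pytest

def price_cart(items):
    """Stand-in for a product API; prices every item at 6.25 for the sketch."""
    return 6.25 * len(items)

@pytest.mark.mvqt          # minimum viable questioning test: run on every commit
def test_checkout_total_is_priced():
    assert price_cart(["book", "pen"]) == 12.50

@pytest.mark.extended      # wider suite: run nightly or pre-release
def test_checkout_prices_large_cart():
    assert price_cart(["pen"] * 10_000) == 62_500

In the per-commit CI stage, only the tagged subset would be selected, for example with "pytest -m mvqt", keeping the pipeline focused on the targeted area.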


Ask and Review for MVQT

Ask for MVQT, when you review these:

  • test strategy, test framing, test design, test ideas, test cases, test plan, test architecture, test engineering, testing center of excellence, and test code

For example,

  • What are the minimum viable questioning performance tests that you have got to test this feature?
  • What are the minimum viable questioning performance tests that you have got to test this workflow?
  • What are the minimum viable questioning security tests that you have got to test this feature?
  • What are the minimum viable questioning GUI tests that you have got to test this feature?
  • What are the minimum viable questioning contract tests that you have got to test this end point?
Likewise, what are the minimum viable questioning automation tests that you have got to test this feature?

Ask how these tests qualify as MVQT in this context of testing and automation.

This should help you see how effective the test strategy is in a given context.

Importantly, the MVQT and its effectiveness give you the testability to test your tests.



The Credit is to Me

I'm not sure if the idea I'm sharing here in this blog post is practiced by other test engineers.  I have not seen it being discussed in public forums.  I have not come across it within my awareness and the exposure I have put myself to.

Hence, I will take the credit for this.  Giving credit honestly is not a common sight or practice.  I have not got my due credit for the ideas, thoughts and work that I have come up with.

So, I make this an open letter and call out that the credit for this idea, thought, concept and practice will be mine when you listen to, use and practice it.



Thursday, July 21, 2022

Dealing with the Fallacies of a Fallacy

 

One of my mentees asked me to help in identifying and understanding the fallacies in Software Testing.  I did not know the context in which the help was sought.  All I got was that, on reading the book by Gerald M. Weinberg, the mentee wanted to understand and know the testing fallacies better and in simple terms.  By "fallacy", I understand -- a misconception resulting from incorrect reasoning and a false belief.

Further, I learn that reasoning and belief are also heuristics.  Can a heuristic be a fallacy?  I see that a heuristic can be a fallacy.


The Reality of the Fallacy is a Fallacy

I will keep this blog post layered and oriented along technical lines so that it becomes easy for anyone in tech to understand my thoughts.  As I write this, I get hit by this question -- "Is a fallacy a fallacy?".  With that, I'm left with a successive question -- is a fallacy a fallacy?  Is that not a logical question?

When I say logical, I understand logic as one of the aspects of rational, scientific, and systematic analysis.  The analysis has limitations, knowns, and unknowns.  Further, it is covered by a meta context which includes the uncertainty -- what we are aware of and not aware of in our analysis.

When I write this, I see the word "meta context" in my mind.  I don't know if someone has used it earlier.  I presume, someone should have definitely used it when talking about engineering and systematic rational analysis. 

When we work on an engineering problem, we work with a context.  In that context, we learn 

  • the problem, 
  • need (requirement), 
  • assumptions we make, 
  • what we know, 
  • what we do not know, 
  • potential solutions, 
  • approaches, 
  • execution, and more

The engineer in me says there is a meta context for every context.  Engineering for the meta context is over-engineering, is what I understand.

Engineering for a context, by solving the risks and problems identified in that context, is what we are all doing today.  This is my observation!  An example of this is the software system that we are building and continuing to consistently develop and update for the need.  The software system we are building, testing, and deploying is bound to a context and not to the meta context.

In Software Engineering, we work on a context, and that itself is huge engineering.  Eventually, we start seeing the context in which we work as the meta context, while it actually is not.  This is one of the fallacies which we encounter and, most times, do not identify.  You see?  Then how do we think about the meta context which comprises the infinite contexts from which we have picked one context to engineer and solve?

Once we try and continue to be aware of the meta context and what it holds, we start to learn that everything is a fallacy, including the fallacy itself.  That's enough philosophy from me.  But that's the reality, and a fallacy as well.

That said, thinking is a fallacy.  We know that exhaustive testing is not possible.  Likewise, exhaustive thinking is not possible.  When one's thinking is not exhaustive and is bounded, doesn't a fallacy exist there?

One's scientific and logical thinking is modeled and sampled over a few models, spaces and dimensions.  The decisions from this thinking, practice, and testing will have limitations and fallacies, noticed and unnoticed.

If an organism can think, then that organism will come under the influence of a fallacy.  And the organism can learn to identify a fallacy, if at all it understands -- I can be fooled no matter what.  That is one of the byproducts of testing -- knowing a few of the possible ways one can get fooled.  And we have no leisure and luxury to find "all the ways"; this bound brings fallacies into one's beliefs, thinking, work and decisions.  So I say we work in a context which is pulled out of a meta context.

I see this as the stem of fallacy; the fallacies get wired into our thought process and into the engineering we do.  Our systematic and scientific interpretation accepts the fallacy as logical and systematic, and claims the problem we're solving is solvable.  Note that when I say solvable here, I mean we can deal with it for the costs and the value we get out of it.  By doing so, we handle and manage the fallacy to yield the value.


What Did I Read Just Now?

Well, what you read above are engineering-philosophical thoughts of mine.  Now, let me pull them into Software Engineering and Software Test Engineering.

The software system or a hardware system or any system that we have built is an assumption.  We assume it works because we work to make it work.  And, we sense that it works because we adhere to the protocols which define these assumptions.

So that tells me that anything and everything built, and being built, is an assumption and has protocols.  And if anything is working, it is on assumptions.  If anything has failed to work, it is on our assumptions.  That infers for me that rational and systematic analysis is an investigated and experimented assumption.

These protocols and assumptions can blind us to fallacies and limit us from identifying the fallacy.  On witnessing an incident, the fallacy or the outcomes of a fallacy may get uncovered a bit.  That is what we do in RCA -- Root Cause Analysis.  We do the RCA so that we learn the fallacy in which we got trapped.

Even after an RCA for an incident, we will experience a similar or the same problem again.  Why?  We think that once we do the RCA and put it into practice, we will not repeat the mistake -- this is a fallacy too.  We make a new mistake, which leads to another RCA.  Does that mean the RCA of an incident says not to fall for the same fallacy again, but it is okay to fall for another fallacy?


Managing Self with Fallacy

I too fail in identifying fallacies.  I continue to prompt my thinking and analysis to see the obvious traps while I test and deliver the testing.  I do not identify all the fallacies in a context.  I work to find the fallacies that bring the most cost to testing delivery and system development, technically.

Here are a few questions that I ask myself each time in my test session and analysis:

  1. What are the five contexts where this is a problem or risk?
  2. What are the five contexts where this is not a problem or risk?
  3. What are the five ways where this looks to work as expected?
  4. What are the five ways where this does not work as expected?
  5. What are the five contexts that matter most about this system and that I have missed knowing?
  6. In what contexts is this bug not a bug anymore? Why?
  7. In what contexts will this be a bug/problem/risk/cost? Why?
  8. What are the influencing factors and practices considered in making this decision? In what contexts do these factors and practices displace the value with the cost?
  9. What are the assumptions and beliefs that are driving my testing?  Whose assumptions and beliefs are they?
  10. Do I know that I can be fooled?
  11. Do I see any problem here?
  12. Do I see any value here?
  13. Do I see any cost here?
  14. What more can I see here?

Understanding and learning how my team and stakeholders attach importance to the same information helps me.  This potentially hints to me whether they are under the influence of any fallacies.  I learn that the context the team members and stakeholders are in also influences the importance attached to the same information.  Sometimes, the team and stakeholders use the same word, but I notice they mean different things by it.

This has led me to learn that it is not about being precise or not at first; it is about having the ability to communicate and to help each other have clarity on what is expected.  And understanding how to achieve this clarity, considering the thought processes and beliefs that each stakeholder holds, is a must.

To sum up, we cannot keep ourselves away from fallacy.  What is not a fallacy at present can and will be a fallacy in time to come.  The goal is how we manage to identify and deal with the fallacy which is influencing us and our work.

There is no escape from the fallacies!


Note: The count of words "fallacies" and "fallacy" in this blog post is 47.



Wednesday, August 4, 2021

The Special Characters and Context

 

On creating a new password, be it on a web page or a mobile app or a desktop application or any interface, we encounter the phrase "special characters".  And we might see a few characters listed as special characters.  Why are these characters named "special characters" here?


The Context

When one mentions "special characters", I learn and associate a context with it.  The context defines whether a character is special or not.  If so, why are certain characters marked as special characters for the password being created?

The context of the web and HTML is a journey and an evolution.  The web and HTML that existed 20 years back are not the same today.  They have evolved, and so have browsers.  So have the other technologies, i.e., desktop applications and mobile apps.


Special Characters and Context

I learn that the context makes a character into a special character.  Then what's a special character?  It is a casually used phrase for the non-alphanumeric characters on the keyboard.

A few of us might debate and say -- comma, colon, plus, hyphen or minus, hash, dollar, angle brackets, etc., are all normal characters even though they are non-alphanumeric.  Did people (users of the software) have special meanings for these non-alphanumeric characters in their domain of work?

But the comma, period, semicolon, hyphen, space, dollar, hash, angle brackets, etc., all have specific contextual meanings in HTML and the web, and in other technologies.  Do you think so?!

The initial web technologies were not as robust as today's at sanitizing and processing characters the way we do now.  Could be, for this reason, certain characters were termed special characters, with documentation on what to use and what not to use.  I'm not sure if this is the reason, but it could be one of the strongest reasons.

Today, the phrase "special characters" continues to be used in major technology organizations' documentation and interfaces.  Is this incorrect?  I don't know.  It helps someone quickly relate and lets her/him decide, is what I see.


Parsing and Context

Entering a password today, we assess its strength.  There are readily available scripts and libraries that do this job.  I am not sure if they were available two decades back.  Other than the security aspect of having better entropy, what else is the benefit of having special characters?

Say the special characters are those which I don't see on the keyboard layout.  Then what should I think of the angle brackets (< and >) that I use in an XSS payload on the web page and behind the web page?  Note that the same angle brackets can be used in a password too.

Personally, I feel this is one of the good topics to discuss.  It can lead to learning how we term and use a word or phrase for non-alphanumeric characters.

I don't know if this discussion is needed or not, and how much it helps people who are accustomed to the phrase "Special Characters" for certain characters.  But having it does no harm, and it can light up the dark areas which are unseen.

The web and desktop projects in which I worked a decade back had RegEx written in different languages, and scripts written in Shell, Perl, and VBScript.  These scripts and RegEx were used behind the interface to parse and validate certain characters' presence and absence.  These characters were termed special characters in the product, and this was kept on par with the operating system documents for consistency.  Also, there was a unique meaning and purpose for such characters here in this context.

Since these scripts and Regular Expressions were used, the characters that take on a special meaning in this context were termed Special Characters.  To keep everyone who uses the product (engineers, support, and customers) aware of certain characters, they were termed Special Characters in the context of the product and technology.
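
As a small, hedged illustration of the kind of validation described above, here is a Python sketch that checks for the presence and absence of characters a product might classify as "special".  The character set and the rules are invented for the example; every product defines its own in its own context.

import re

# Hypothetical set of characters this product's context classifies as "special".
SPECIAL_CHARS = r"!@#$%^&*<>"

def validate_password(password: str) -> list:
    """Return a list of findings about special characters' presence and absence."""
    findings = []
    if not re.search(f"[{re.escape(SPECIAL_CHARS)}]", password):
        findings.append("no special character present")
    if re.search(r"[<>]", password):
        # Not invalid by itself, but worth flagging: the same angle brackets
        # appear in markup and in XSS payloads, so downstream handling matters.
        findings.append("contains angle brackets")
    return findings

print(validate_password("summer2021"))    # ['no special character present']
print(validate_password("p@ss<word>"))    # ['contains angle brackets']

The interesting testing questions sit around such rules: what happens upstream and downstream when the flagged characters are present, absent, or handled differently by another layer.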


Should We Change the Phrase "Special Characters"?

I don't know!

Look at the context where it is used and what characters are classified as special characters.  Does changing the phrase to another phrase or word solve and ease the communication with the product's users and business?  Unfortunately, not all software products might bring in this change.  Does having different words/phrases in the system add additional costs?  What are those costs?

All I understand is that when certain characters are classified as special characters, I look for

  1. The context in which they are classified, and why
  2. How are they special?
  3. What difference do they make by their presence and absence?
  4. The terminologies of the software platforms on which the product runs that have such classified special characters

Not fixing, nor refining, nor refactoring certain existing things looks better in a few cases!  As a technical person, knowing what it is and what it is not is a need, and it helps.



Sunday, August 11, 2013

Test Coverage Ideas and Test Ideas – Part 1


In one of my practice sessions, I was brainstorming for test ideas.  I got a question: what am I covering with these ideas?  The product being tested in the session was the Triangle application for testers, programmed by James Bach.  With this question, I toured the application for 20 minutes to learn what it is.

While doing this, I understood that to have better and more effective coverage, there need to be multiple models of the Triangle application and not just one model.  From here, I started to build the models.  It took time, and I experienced frequent pauses.  Learning what I was undergoing, the strategy for approaching the mission changed.

I love experimenting. This work is an experiment being carried out to make my testing useful, informative and valuable. The strategy now is to write the Coverage Ideas first by touring the product.  Then use the Coverage Ideas to build tests based on what I need to cover for the testing context.  Doing this, I noticed advantages and disadvantages. The advantage is that there is no need to sketch out the model in detail, as I have the Coverage Ideas (a detailed, structurally guiding model) which show me test ideas. Doing this, I get a mental model for each type of coverage.  The disadvantage is that this strategy might consume time in the initial stage of Test Engineering. With practice, the time might reduce depending on the testing context, testability, tester skills and other factors of the system being tested.

With this, I have got more than one Coverage Model to test with, and I made note of it consciously. And I brainstormed for test ideas on each Coverage Model knowing the testing context. This is very interesting for me.  I learn the Test Coverage Idea as -- an experimental question and the actions which help in identifying and building the various coverage models (an idea) of a system for the testing context, thereby aiding the evaluation of the coverage dimensions of the Test Ideas coming out of each Test Coverage Idea against a reference.

To summarize simply: Test Coverage Ideas --> Build Coverage Models (test ideas) with a reference to the testing context --> Brainstorm and identify (design) the tests; execute the tests; evaluate, observe and make notes.  All these activities can be executed in parallel as well.
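
To give a hedged, concrete flavour of what one such coverage model and its test ideas could look like for a triangle-classification product, here is a small Python sketch.  The classification function is my own stand-in for the application, and the input-domain partitions are illustrative, not the actual coverage models built in the session.

def classify_triangle(a: float, b: float, c: float) -> str:
    """Stand-in for the Triangle application's core behaviour."""
    if min(a, b, c) <= 0:
        return "invalid"
    if a + b <= c or b + c <= a or a + c <= b:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# One coverage model: partitions of the input domain, each carrying test ideas.
input_domain_coverage = {
    "valid shapes": [(3, 3, 3, "equilateral"), (3, 3, 5, "isosceles"), (3, 4, 5, "scalene")],
    "boundary of the triangle inequality": [(1, 2, 3, "not a triangle"), (1, 2, 2.9999, "scalene")],
    "invalid values": [(0, 1, 1, "invalid"), (-3, 4, 5, "invalid")],
}

for partition, cases in input_domain_coverage.items():
    for a, b, c, expected in cases:
        actual = classify_triangle(a, b, c)
        print(f"{partition}: ({a}, {b}, {c}) -> {actual} (expected {expected})")

Other coverage models of the same product (input format, platform, state, data flow) would carry their own sets of test ideas in the same way.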

To make a start, I decided to brainstorm and write test coverage ideas for one or two sessions. Then I would pick these test coverage ideas, execute them, and make notes of the test ideas coming out of the respective coverage models. This gives me a structure for the test ideas and a reference to evaluate them against, for knowing their extent of coverage.  Meanwhile, I make notes of the new test ideas as testing continues.

Drawing a Test Coverage diagram helped me know -- what I'm covering and to what extent, what I have covered and to what extent, and what is not yet covered and should be covered. The testing context and the different models of the system helped in seeing the dimensions that a test should take and can take.


Figure: Context Free Test Coverage Model


The above diagram shows how little and/or how effectively we cover the varied dimensions of a test. The bigger the three circles' common intersection space, the more likely it is that tests carry the desired dimensions and variance when evaluated against a reference in the testing context. The space where just two circles intersect can also be useful in terms of coverage, but it might not be effective without the dimension attributes of the missing third circle.

I will be sharing the test reports coming out of this model in the successive parts of this post.