
Saturday, February 15, 2014

Plan to Digg and Explore the Digg Reader



I picked up this exercise, posted by Weekend Testing Australia New Zealand (@WTANZ_) under Weekend Testing (@weekendtesting), on 14th September 2013.

The mission given to the tester in this practice session was as follows:

Create a test plan for digg.com and assume you were asked to test the new Digg Reader product. However the catch is: you only have 1 week to test the whole thing. If you only had one week to test the entire product, what would you test?

Given the one-week constraint, I first had to understand the state of the Digg Reader product before proceeding. After studying the changes and risky areas in that day's version of Digg Reader, I came up with a master strategy and an initial plan to test the product. The strategy and plan, however, keep evolving as I test.

Looking at the context of the product and the testing need, I felt the tests should start with functionality and then span other quality criteria. This report contains my session notes, including the test plan for the given mission.



Thursday, October 6, 2011

Cows growled 'Are you seeing us?'



While practicing testing today, I looked into the Weekend Testing website to catch up on the learning I get from there. Among other words, 'Chasing' and 'Spotting' caught my attention. These two words were from the Weekend Testing Americas session held on 3rd September 2011; more details about it are here.

I had once played a dice game with Pradeep Soundararajan and was not able to crack it. Reading the experience report of this session, I was tempted to play this puzzle now. The facilitator described the mission as being 'about pattern analysis and note taking using a bulls and cows game.' Reading further, I found the mission stated as below:

  • Try, practice and learn about the "Bulls and Cows" game.
  • Compare the differences in a gameplay with and without notes taken.
  • Report (in a form of discussion during debriefing) your findings and ideas on using note-taking in exploratory testing.

Reading the experience report, I tried to understand the game with the help of an example given by the session facilitator. I did not have a partner to play with, so I opened the online version. The game's description gave me a better idea of it, and I selected the 7-digit game.

A number, 1208175, was populated, and I did not know where or how to start the game. Looking back at the mission and then at the game, questions came to mind, which I noted down on a notepad:

  • I have played games such as Hangman and Dots with friends. What did I do there?
  • I have to find a 7-digit sequence formulated by an algorithm. How shall I start?
  • What details about those 7 digits do I have with me now?
  • If I do not have them, or have partial and/or ambiguous details, how will I know?
  • What should I do now with the data I have?

These helped me plan and strategize my moves for finding the number by putting digits in appropriate places. The plan was to find those digits in steps I was comfortable with and to note down why I chose each one. In parallel, I would document what I was doing and use it to find better next steps toward the number chosen by the algorithm.

Moving ahead, my mind went blank and I was yet to get started. What I wanted were those 7 digits, and how I would identify them was the question running through my head before starting to play. With a hard push on myself, I said, "My first move will be to identify those 7 digits from among 0 to 9." In between, a spam SMS dropped into my cell phone inbox. The sender's number had a repeating digit. This triggered a question: I had never thought of a digit repeating more than once. Though it briefly took my mind off the mission, it helped me recognize the possibility of repeating digits among the 7.

I kick-started designing tests to identify those 7 digits. I entered each number from 0 to 9 repeated 7 times, for example 1111111. This revealed which digits were present in the 7-digit number; the 'Bulls' count also showed one digit repeating twice. Finally, I had all 7 digits and made a note of them. (A sketch of this phase follows below.)
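Below is a minimal sketch of this digit-identification phase, assuming the standard Bulls and Cows scoring (a bull is a correct digit in the correct place; a cow is a correct digit in the wrong place). The scorer and helper names are mine for illustration, not part of the game:

    from collections import Counter

    def score(secret, guess):
        """Return (bulls, cows) for a guess against the secret."""
        bulls = sum(s == g for s, g in zip(secret, guess))
        # Cows: digits common to both (with multiplicity), minus the bulls.
        common = sum((Counter(secret) & Counter(guess)).values())
        return bulls, common - bulls

    def digits_present(secret, length=7):
        """Guess ddddddd for each digit d; the bulls count of that
        guess is exactly how many times d occurs in the secret."""
        counts = {}
        for d in "0123456789":
            bulls, _ = score(secret, d * length)
            if bulls:
                counts[d] = bulls
        return counts

    print(digits_present("1208175"))
    # {'0': 1, '1': 2, '2': 1, '5': 1, '7': 1, '8': 1}

Guessing one digit repeated seven times makes cows impossible, so the bulls count alone reveals how many times that digit occurs.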

The next task was to place them in order, so I started playing with those seven digits. It was not so easy. After waiting a moment, I got the idea of using a digit that was not among the 7 identified to probe the appropriate position of each digit in the number picked by the algorithm. Meanwhile, I continued taking notes while playing the game. This idea of using an absent digit, together with the notes made in the course of the game, helped me place the digits correctly. (A sketch of this probing idea follows below.)
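A sketch of that position-probing idea, reusing the same assumed scorer; the filler digit 9 works for this secret because 9 was not among the seven digits identified (an assumption for illustration):

    from collections import Counter

    def score(secret, guess):
        bulls = sum(s == g for s, g in zip(secret, guess))
        common = sum((Counter(secret) & Counter(guess)).values())
        return bulls, common - bulls

    def locate(secret, digit, filler, length=7):
        """Probe each position with `digit`, padding the rest with a
        digit known to be absent; a bull confirms that position."""
        hits = []
        for i in range(length):
            guess = filler * i + digit + filler * (length - 1 - i)
            bulls, _ = score(secret, guess)
            if bulls:
                hits.append(i)
        return hits

    print(locate("1208175", "1", "9"))
    # [0, 4] -- '1' sits at positions 0 and 4

Since the filler digit cannot produce a bull, any bull must come from the probed digit sitting in the probed position.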

Strategies kept evolving as the game progressed. I feel the strategies used to support the plan plotted to finish the game were reasonable at that point in time. Had I used my observations of the 'Cows' better, I would probably have got all seven 'Bulls' in 3 to 5 fewer steps. Tempted to see how I played? You can see it here.



Monday, June 21, 2010

Reassemble The Bridge: Weekend Testing 38


Mission: To solve the puzzle with the help of a partner.


Mental Modeling and Approach:

Observing the image for a fraction of a second, together with the puzzle URL read in the chat transcript of WT 38, gave me an instinct: what I saw might be a bridge with blue sky and greenery.

The scattered image pieces appeared to have different colors and boundary shapes. A few pieces appeared to have the same color but varied in contrast to my eyes. The pieces had pictures that looked like a hanging bridge, a moving car, green grass, and water. This, along with the picture I saw when I browsed the puzzle URL, helped me form an image in my mind.

The approach I used to play the puzzle was varying one factor at a time and identifying its patterns.


Playing this puzzle helped me to practice and learn:
  • Observing.
  • Identifying patterns.
  • Factoring.
  • Reasoning.
  • Testing.

Weekend Testing report is here. My report is here.

Note: I did not play as per the mission statement; I played the puzzle individually.


Wednesday, March 17, 2010

European Weekend Testing 09 -- Test and Experience Report


Mission:

You are moving from lovely Europe, with measurements based on the metric system, to the US with imperial units. Test Converber v2.2.1 (http://www.xyntec.com/converber.htm) for usability in all the situations you may face. Report back test scenarios for usability testing by 4.30pm GMT on the bug repository.

Application Under Test: Converber 2.2.1
Actual Event Date: 13th March 2010, 09:00 PM IST
Start Time: 14th March 2010, 09:03 AM IST
End Time: 14th March 2010, 10:15 AM IST
Test Machine: Windows XP SP2
Tester: Ravisuriya


Modeling of context:

I am a common man with an elementary school education and a very occasional computer user. I work for a mining firm in Europe and am now transferred to the US for a few years on a contract job. A colleague told me that measurement units differ between Europe and the US. With the help of a person at an Internet center, I browsed for measurement information and found a few details of the Imperial and US customary measurement systems. Later, I used the tool Converber v2.2.1 to note the measurement units I needed.


Understanding of the term 'Usability':

Usability: How comfortable is it for a user to use the AUT? How quickly can the user learn to use the application for the desired purpose? What effort do physically challenged people need to put in to use the application? Does the application support the operating system's accessibility features? How much effort does the user need to put in while using it?


Session Report:

I did not participate in EWT 09, but I practiced it offline the next day. The report from the EWT team is here. The chat transcript of the discussion session is here. The PDF document of my report is here. The usability scenarios I identified during the practice session:
  1. The user launched the AUT. Was the user able to use the AUT with the GUI objects available on launch? Did the GUI objects help or slow down use of the AUT? Were the names of the GUI objects self-explanatory to the user?
  2. The user entered a value for a selected unit. How can the user identify whether the value seen is an 'Imperial' or a 'US customary' unit?
  3. Users want the AUT to have a default option of either 'Imperial' or 'US' units.
  4. The units selected by the user are the same; what message is displayed now?
  5. The user is not aware whether the calculated value for the desired unit is correct or not. How can the user test this? (See the conversion-oracle sketch after this list.)
  6. The user entered a variant value that is not acceptable for the selected unit. How are such instances handled?
  7. The user needs to convert area, distance, volume, speed, power, pressure, luminance, temperature, and other units commonly used in industry and daily life. Is there an option to see the units most commonly converted or used across the globe?
  8. Are all Imperial units available in the AUT for conversion to US customary values?
  9. The user finds it difficult to learn the AUT and searches for a help manual. Is a help manual available covering features, limitations, bugs, and contact details?
  10. The user wants to enter new units that do not exist in the AUT. If units can be added, how can the conversion procedure for the added units be defined in the AUT?
  11. How simple are the words and contents in the AUT, so that a user with little school education can use the application and easily learn the units and conversions?
  12. The user does not understand English well and would use the AUT in Farsi. Does the AUT support Farsi or other languages? Are all displayed and available contents in the AUT shown in Farsi?
  13. The user changed the language in the AUT to 'English'. Does the AUT still display any Farsi words?
  14. The user does not know much about Imperial and US customary values. Does the help manual have the information the user needs to understand them?
  15. The user wants printouts of the converted unit values. Does the AUT support printing? In what ways can the user save converted unit values for later use? Does the AUT support all of these or only a few? Which option do users most commonly use to store converted unit values?
  16. The user entered a value for a selected unit, and the converted value is shown in scientific notation. How can the user turn that scientific or mathematical notation into values the user understands?
  17. The user entered a value that was not valid for the chosen unit. Can the user identify that the displayed pop-up dialog is about the invalid value and that it came from the AUT? Will closing the dialog allow the user to continue using the AUT? Does the dialog appear in the foreground or the background of the AUT? If it appears in the background where the user cannot see it, will the user be able to continue using the AUT? What options are available in the dialog to assist the user?
  18. The user is visually challenged. How does the AUT help this user know the converted value or convert values from Imperial to US units?
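For scenario 5, one way a tester could support the user's question is a conversion oracle built from independently known reference factors. This is a hypothetical sketch; the factor table, names, and tolerance are mine, not Converber's:

    # Reference factors from standard definitions (illustrative subset).
    KNOWN_FACTORS = {
        ("kilometre", "mile"): 0.621371,
        ("kilogram", "pound"): 2.204623,
        ("litre", "us gallon"): 0.264172,
    }

    def check_conversion(src, dst, value, tool_result, rel_tol=1e-4):
        """Compare the tool's reported value against a reference factor."""
        expected = value * KNOWN_FACTORS[(src, dst)]
        ok = abs(tool_result - expected) <= rel_tol * abs(expected)
        return ok, expected

    # Example: suppose Converber reported 6.21371 miles for 10 km.
    print(check_conversion("kilometre", "mile", 10, 6.21371))
    # (True, 6.21371)

Any mismatch beyond the tolerance flags a conversion worth investigating by hand.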

It was a good exercise for me to identify usability scenarios. At the end of the session, I felt I had not done well enough. Since I kept finding more such usability ideas after the session, I could have done better during it; I need more such practice.


Weekend Testing 28 -- Test and Experience Report


Mission: There are three tasks to be completed today. Time Duration: 1 hour.

Task 1:
Complete the game: http://www.gamesforthebrain.com/game/dragger/
Objective: Send the screenshot where the picture is built right.

Task 2:
Score 90 points in the game:
http://www.gamesforthebrain.com/game/memocoly/
Send the screenshot. Checkpoints: URL of the game, IQ Score, "Your solution is right, congratulations! (+10 points)"

Task 3:
Score 50 points in the game. http://www.gamesforthebrain.com/game/numberhunt/
Objective: Send the screenshot. Checkpoints: URL of the game, IQ Score, "Your answer xx is right, congratulations! (+10 points)"

Date: 13th March 2010, 3:20 PM IST
Machine: Windows XP SP3
Browser: Firefox 3.5.8
Tester: Ravisuriya

Context: The tester was given the mission of playing the games in 60 minutes. There was a power cut for about 15 minutes before starting the mission tasks. When power came back, 40 minutes remained to complete the session. I saw testers asking the facilitator questions about the mission and tasks, and started by taking up the first task.

To Deliver:
  1. Screenshot of the picture built right from Dragger.
  2. Screenshot of a score of 90, the URL, and the sentence that says you scored 10 points from the Memocoly game.
  3. Screenshot of a score of 50, the URL, and the sentence that says you scored 10 points from the NumberHunt game.

The mission was confusing to me, as it said "complete the game", "score 50 points", and "score 90 points". It did not say why I should score exactly 50 and 90 points rather than more. No procedure to adhere to while playing the games was mentioned. During the power cut, I tried to understand the mission.


Assumptions:
  • Looking at the game descriptions in the mission statement, I thought the games would be time-limited.
  • Tools could help in accomplishing the mission.

Tools that helped me accomplish the mission:
  • White paper and pencil.

Tasks:

Task-1) Dragger:

Browsing the URL given in the mission, I found a jumbled image. I tried clicking the 'Refresh' button to find a simple image that I could arrange quickly. I found one jumbled picture that was easy for me and completed task 1.

There was no restriction that one particular image's pieces had to be put in the right frame, so I saw an opportunity to choose a picture of my interest.


Task-2) Memocoly:

For the first couple of tries, I kept looking at the screen with my attention diverted nowhere else, but this strained my eyes, as I was without spectacles. I keep the brightness and contrast of my monitor below 50, and I felt this could hurt me if I failed to recognize which color blinked. Thinking of how to overcome this, I devised a numbering system for the regions.

Mental modeling of the 4 parts of a geometric shape that appeared as a square:
  • Labeled each section as 1, 2, 3 and 4.
  • Wrote the numbers on the sheet in the order the regions of the square blinked.
  • Then clicked on those regions based on the numbers I wrote.
  • This helped me to complete the mission in good time.

Task-3) NumberHunt:
  • Used the Microsoft calculator to compute the displayed numbers.
  • This helped me complete game 3 a bit quicker.

Finally, I was able to accomplish the mission, i.e., to complete the three tasks. Among the few strategies I wrote down for completing the tasks, I used the strategy of employing tools to accomplish the mission. The discussion session was interesting, and the chat transcript can be found here. The facilitator came up with an interesting thought for discussion on the Weekend Testing forum, titled "are we testers (who plan to meet the mission) or testers (who aim to improve our skills)". My report on the three tasks as a PDF document is here.


Sunday, March 14, 2010

Weekend Testing 27 -- Test and Experience Report


Mission: To generate test ideas for testing Google Buzz against the quality criterion of performance.

Date: 06th March 2010, 3:05 PM IST.
To Deliver: Report of performance testing ideas for Google Buzz.
Tester: Ravisuriya

Context: The tester was asked to generate test ideas for the performance quality criterion of Google Buzz. It was an opportunity, and the first time the tester was working with the performance quality criterion. Google Buzz was live and being used by users for buzzing.


I started off with the mission statement. I was not very sure what 'performance' meant in the mission. After a question to the WT session facilitator, I assumed 'performance' here to mean "how quick and responsive it is". I continued to collect details about the context in which 'performance' was being considered, but it remained an open question, left to the tester's choice in the session. Mentally modeling myself as a tester looking for performance test ideas for Google Buzz, I began to brainstorm.


Assumptions made while brainstorming:
  • I was not sure what 'performance' meant in the test mission.
  • A thought: does performance mean "how well Google Buzz is performing compared with other similar social networking services"?
  • Or how well the application satisfies its claims within the given constraints, in terms of processing user actions and its throughput interval.
  • For this session, I assumed 'performance' to mean how quick and responsive it is.
  • Various users are using Google Buzz at present, with differing internet connection speeds.

A few questions I got while brainstorming about users and Google Buzz performance:
  • Performance test ideas in what context?
  • Who are the users, and what kinds of users are using Google Buzz?
  • How often have they used, and do they use, Google Buzz? What are their observations and perceptions of its performance?
  • What devices did they use to browse Google Buzz? Did the device have any influence on their perception of Google Buzz's performance?
  • How long did they use Google Buzz at a stretch, i.e., without any break?
  • What environment did or do they have while using Google Buzz? Did it have any influence?
  • Are any tools used to know or understand the performance parameters of the application under test?
  • How is performance being measured? What are the units of measurement of performance in this context?

Brainstorming for Test Ideas:
  • Identify the kinds of users, their ages, businesses, purposes, environments, and consistency of Google Buzz usage, to mention a few. This list keeps growing as brainstorming and testing sessions go on.
  • Identify the possible and potential contexts for each of the environments above.
  • Use the analytics and log files of Google Buzz. The log files and analytical information on the performance of the application under test should cover request-processing time and throughput.
  • Test on various possible hardware and *software configurations and network topologies. Note: *software and hardware may be those installed on the device from which Google Buzz is browsed and on the Google Buzz server.
  • Can Google Buzz be accessed from mobile phones? If yes, how can the device's configuration and applications influence the time factor in Google Buzz's responsiveness?
  • The type of database Google Buzz makes use of.
  • Handling of incorrect or invalid entries. How quick is the application's response to the user here? (See the timing sketch after this list.)
  • How quickly does it cope if I keep buzzing with a very **small interval between two buzzes? Note: **small will be decided by the application's minimum and maximum tolerance values.
  • Will any other application using the internet or network influence Google Buzz's performance?
  • Study the features available or provided by Google Buzz. This might give more ideas on how performance tests can be devised.
  • Tests to learn how the application's performance behaves when the number of parallel users is increased and decreased, when the number of transactions in a given time period is varied between threshold and minimum, and how much time is consumed recovering from these two extremes of variation.
  • Performance of the application when its capacity is pushed to the extreme. How much time does it take to handle requests from clients?
  • The type of server being used. The types of connections. If data stored by the application on the client is missing or only partially stored, how does the application's performance behave?
  • How Google applications used simultaneously alongside Google Buzz eat into response time. Such Google application(s) need to be identified and tested along with Google Buzz.
  • Other large applications used over the internet alongside Google Buzz.
  • The maximum number of requests that can be handled at a given time (this includes various actions using various features of Google Buzz).
  • Usage of Google Buzz over a ***network that is not comfortable for the application. Note: ***network refers to the properties of the network.
  • If any tool or program is used to look into the application's performance rating, how much time does the tool itself take? Does the time taken by the tool or program instructions have any influence?
  • How is the client's request sent to the server?
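As a companion to the responsiveness and parallel-user ideas above, here is a minimal sketch that times a batch of HTTP requests issued by concurrent workers and summarizes latency percentiles. The endpoint is a placeholder (Google Buzz itself is long gone), and only the Python standard library is used:

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "https://example.com/buzz"  # placeholder endpoint, not a real Buzz URL

    def timed_request(url):
        """Return wall-clock seconds for one GET request."""
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()
        return time.perf_counter() - start

    def percentile(samples, p):
        """Nearest-rank percentile of a list of latencies."""
        ordered = sorted(samples)
        return ordered[min(len(ordered) - 1, int(len(ordered) * p / 100))]

    # 20 requests served by 10 parallel workers, mirroring the
    # "parallel users" idea in the list above.
    with ThreadPoolExecutor(max_workers=10) as pool:
        latencies = list(pool.map(timed_request, [URL] * 20))
    print("median:", percentile(latencies, 50))
    print("p90   :", percentile(latencies, 90))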

The test ideas I got during this session were generic to performance. I will use them to develop specific performance tests. Interaction with Dr. Meeta Prakash showed how these ideas can be turned into more specific performance test ideas. The discussion session transcript is here. The PDF document of my session report is here.