I've been getting a lot of responses to a post I wrote regarding BDD and automation for QA solutions.
Several QA people have written me, or posted, that they feel this is not the role of QA and that developers should in fact maintain this... one clever gent even tried to bolster his point by claiming, "Well, it has Development in the BDD title itself!"
BDD - What Is It?
Yes, it is Behavior Driven Development. But don't stop at the title. What is BDD? BDD is an evolution of TDD that pushes further into specification. It was created by Dan North.
Some dev teams do development (in the sense of customer-facing development) using a BDD model, but where QA is concerned, the "development" aspect would most likely be automated test cases.
Since BDD has an entire aspect of organizing the tests, a tester doesn't need to "know code" to write tests in a BDD fashion. The "development" QA applies here is the code required to automate the test; the test itself is written in human language.
BDD Examples
I like to break BDD down into two parts:
- Test Planning/Shaping
- Coding
In the Test Planning and Shaping phase, the person is writing the human language tests:
Feature: Login Screen
Scenario: A user logging in with valid data
Given a registered user at the login screen
When they pass valid credentials
Then they are loaded to their Dashboard Page
Scenario: A user logging in with invalid data
Given a user at the login screen
When they pass invalid credentials
Then they are given an error message
Scenario: Repeated attempts at logging in with invalid data lock the account
Given a user at the login screen
When they pass invalid credentials 3 times in a row
Then their account is locked
....
That's how you scope out the specifications of a feature in BDD. Anyone can do this. At eHarmony, this was done by QA members as well as Product Managers. That's right, Product Managers also wrote BDD tests; since no code is needed to write these feature specifications, anyone in the business or product units at a company can contribute.
Some fellow wrote a completely biased and unfair review of BDD and automation (you can check out his points here: http://www.satisfice.com/blog/archives/638).
The problem he has with BDD is that he frames it unfairly. He suggests that a single BDD test would have to account for every potential possibility of a large-scope feature... adding in dozens of "And" statements. But that just isn't true, and it completely misses the whole process.
In reality, a single feature could have dozens of scenarios.
In the example above, we take the feature concept from Product (the story), and break down how the behavior should work into scenarios described with "given / when / then." This will include the "happy path," "negative cases," "edge cases" and so forth.
It's the same thing a QA member would do in scoping out their test cases.
As QA, don't you write test cases? Don't you need to show people what you actually signed off on?
Of course QA does. Think of this as simply organizing QA test cases into these BDD Specifications.
Do BDD Specs Account For Every Edge Case and Potential Scenario?
No. Why? Because not everything can be thought of at the moment of writing a test, just as with manual and exploratory testing. You could say the same thing about manual QA testers: sometimes customer-facing edge cases are not considered by QA or Product, and we only find the error when a customer tries something previously not thought of.
Other times you, as a QA tester, may find an issue in a third phase: exploratory testing. Or you might think of a new idea to test after hours.
Some have written me saying this rigidity makes BDD fragile. But the tests shouldn't be rigid. They should be updated each time a new test is thought of or created: you simply add that new scenario to the BDD suite.
The BDD tests are a living document. As new tests are thought of, or as bugs are discovered, the tests are updated.
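For example, suppose a customer reports that a password with trailing whitespace fails to log in (a hypothetical bug, just for illustration). You simply append a new scenario to the login feature above:
Scenario: A user logging in with trailing whitespace in their password
Given a registered user at the login screen
When they pass valid credentials with trailing whitespace in the password
Then they are loaded to their Dashboard Page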
Whose Responsibility Is BDD?
This has come up in some personal correspondence. Some people feel this should be a dev task, but I disagree. Developers in an Agile work environment are kept constantly busy with shorter turnaround times to release. They won't have the time to plan development, code, write unit tests, and then automate the front end. Nor should they.
The automation should be maintained by the QA team, who use their same QA strategies of breaking the UI, data capture, and services.
Just because QA is writing the automation framework doesn't mean QA changes strategies.
This is another mistake that people who are resistant to automation make. I approach my automation specifications the same way I approach manual testing. Nothing changes. I'm writing the test cases, just in a BDD spec way.
Why BDD for Automation?
You can certainly do automation without BDD. The guys over at TrueCar and Beachbody are using WebDriver and Python... no BDD-driven testing at Beachbody (and I'm guessing TrueCar isn't using BDD either). Certainly BDD isn't required... so what's so great about BDD?
What's great about BDD is that we have a bridge between the business units and the automation code. When the business unit (Product Managers, Directors, CEO) sees the tests written out, they might say, "OK, this Scenario is good, but is there any Scenario covering a case where a user clicks submit twice? We've had issues with that in the past." Maybe they find a problem they have experience with from the business side. Or maybe they see the test is really testing a feature in a way they didn't intend it to be designed.
Those who are the stakeholders can quickly and easily adjust the tests... as the tests are human language and not code.
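To continue the double-submit example above, the PM's concern translates directly into a new scenario (a hypothetical one, written here just to show the shape), still with no code required:
Scenario: A user clicks submit twice while logging in
Given a registered user at the login screen
When they pass valid credentials
And they click the submit button twice in quick succession
Then they are loaded to their Dashboard Page exactly once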
Once you have the BDD specifications with all the Given/When/Thens, and the business has signed off on them, you know exactly what to write code for. You won't write code for an inappropriate test.
You also don't need to write an external test plan. All your test cases are captured right here and read just like a test plan document. Every test case is defined within the specifications themselves.
The BDD specifications become my test plan.
Last year, I met with a guy who worked for M-Go. He was kinda surprised I didn't use a big, lengthy Word doc with chapter headings specifying what will be tested, what won't be tested... etc.
My reaction to him was, "I thought you guys were agile..." In an Agile environment you are releasing code every week to two weeks. There's not enough time for QA to write out a formal 20-page test document. Nor will anyone have time to read it.
A better approach in an Agile environment is to put the BDD specifications (test cases) into the user stories themselves. So in Jira (or whatever is used for story/bug tracking), add the BDD scenarios to cover all the test cases.
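For example (a made-up story; the key and wording are just for illustration), a Jira story might carry its test cases directly:
PROJ-123: As a member, I want to log in so that I can reach my dashboard.
Acceptance criteria:
Scenario: A user logging in with valid data
Given a registered user at the login screen
When they pass valid credentials
Then they are loaded to their Dashboard Page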
These tests then become automation tests as well. They are simply copied out to a file that will be used in automation (i.e. copying the Given/When/Thens to a feature file in Cucumber).
Example of BDD Specifications with code
Feature: Smoke check each configured browser
Scenario: Go to google.com and search for the term "Cucumber"
Given a user is at google.com
When they search for the term "Cucumber"
Then google responds with results
In a separate file within Cucumber would be the code needed to automate these steps:
Given /^a user is at google\.com$/ do
  # Open a Firefox browser and load Google
  @browser = Watir::Browser.new(:firefox)
  @browser.goto "http://www.google.com"
end

When /^they search for the term "(.*)"$/ do |term|
  # The Google search box is the text field named "q"
  @browser.text_field(:name => "q").set term
  @browser.send_keys :enter
end

Then /^google responds with results$/ do
  # Actually assert that the results div mentions the term, rather than just locating it
  raise "No results for Cucumber" unless @browser.div(:id => "search").em(:text => "Cucumber").exists?
  @browser.close
end
Something like that. The exact Watir calls may need tweaking for your version, and that final assertion is the part to verify. But as long as you have a solid foundation of good and accurate tests, the code can just follow.
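If you haven't run Cucumber before, the conventional layout (a sketch; the file names are just my own choices) keeps the plain-language spec in a .feature file and the Ruby blocks under step_definitions, and the cucumber command matches each step line against the regexes:
features/google_smoke.feature - the Feature/Scenario text above
features/step_definitions/google_steps.rb - the Given/When/Then blocks
features/support/env.rb - shared setup (require "watir", hooks)
From the project root, you then just run: cucumber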
What people like James Bach are missing is that whether you are automating or manually testing, you need a solid testing foundation. He's just positing a scenario where the automation strategy has no solid foundation.
Isn't This a Time Sink?
Not at all. Once you get used to writing tests in BDD fashion, you are actually doing 50% of the automation work! As you automate, you're going through the UI and finding new ways and new ideas to break the application.
I've seen SDETs write better tests than QA Engineers! Mainly because, in following this system, they discover all kinds of little problems. You end up taking the UI a step at a time.
By the time you are done automating it, you should have also covered it in one browser (i.e. manually tested it). Once it's automated, you can swap out the browser type and re-run the tests in a multitude of browsers.
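One simple way to do that swap (a sketch, assuming a BROWSER environment variable you define yourself) is to create the browser in a Cucumber Before hook instead of inside a step, so one variable drives every scenario; the Given step above would then just call @browser.goto:
# features/support/env.rb
require "watir"

Before do
  # BROWSER is our own environment variable; default to Firefox
  browser_name = (ENV["BROWSER"] || "firefox").to_sym
  @browser = Watir::Browser.new(browser_name)
end

After do
  @browser.close
end
A Chrome run is then just: BROWSER=chrome cucumber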
Does This Replace Manual Testers or Exploratory Testers?
No, it does not. But it greatly adds to the total quality.
Manual testers will fall prey to test blindness. At eHarmony we had a registration system that we called the "RQ," or Relationship Questionnaire. At one time that questionnaire was over 300 questions long. We also had it in multiple territories, and it was different in each territory. Running it by hand would take most people about 10 minutes per territory... and with 6 territories, that's an hour of manual regression just on registration! What if Product wants us to cover more than one browser... say 5 browsers? That's 5 hours of testing simply to cover registration. So you break the testing out to multiple testers and get it back down to an hour... great, but now you're consuming 5 testers to do it.
Testers who have to run and re-run and re-re-run these territories become blind to small bugs, and they cut corners, missing bigger ones. It's just a human reality, especially when the pressure is on to hit a release date.
Automating something like that saves tons of time and ensures a basic smoke test. Granted, we didn't cover every nuance and variation across the 300 questions, but we could verify that registration was, in general, up and running: the same check people were doing manually.
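In Gherkin, this kind of per-territory repetition is exactly what a Scenario Outline is for (a sketch; the step wording and territory codes are illustrative): you write the flow once, and Cucumber runs it once per row of the Examples table:
Feature: RQ registration smoke check
Scenario Outline: Complete the Relationship Questionnaire in the <territory> territory
Given a new user registering in the <territory> territory
When they complete the Relationship Questionnaire
Then their registration is accepted

Examples:
| territory |
| US |
| CA |
| UK |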
Automation frees the tester to perform more deep dives and exploratory testing and not be consumed with basic regression.