31 December 2012

QA Using BDD

I've been getting a lot of responses to a post I wrote regarding BDD and automation for QA solutions.

Several QA people have written to me, or posted, saying they feel this is not the role of QA and that developers should in fact maintain this... one clever gent even tried to bolster his point by claiming, "Well, it has Development in the BDD title itself!"

BDD - What Is It?

Yes, it is Behavior Driven Development. But don't stop at the title. What is BDD? BDD is a further push of TDD, one that gives us more specification. It was created by Dan North.

Some dev teams do development (in the sense of customer-facing development) using a BDD model, but where QA is concerned, the "development" aspect would most likely be automated test cases.

Since BDD has an entire aspect of organizing the tests, a tester doesn't need to "know code" to write tests in a BDD fashion. The "development" QA would apply to this would be the code required to automate the test. But the test itself is written in human language.

BDD Examples

I like to break BDD down to two parts:
  1. Test Planning/Shaping
  2. Coding

In the Test Planning and Shaping phase, the person is writing the human language tests:
Feature: Login Screen
Scenario: A user logging in with valid data
Given a registered user at the login screen
When they pass valid credentials
Then they are taken to their Dashboard page

Scenario: A user logging in with invalid data
Given a user at the login screen
When they pass invalid credentials
Then they are given an error message

Scenario: Repeated attempts at logging in with invalid data result in a locked account
Given a user at the login screen
When they pass invalid credentials 3 times in a row
Then their account is locked

....

That's how you scope out the specifications of a feature in BDD. Anyone can do this. At eHarmony, this was done by the QA members as well as the Product Managers. That's right, Product Managers also wrote BDD tests; since no code is needed to write these feature specs, anyone in the business or product units at a company can contribute.

Some fellow wrote a completely biased and unfair review of BDD and automation (you can check out his points here: http://www.satisfice.com/blog/archives/638).

The problem is that he frames BDD unfairly. He suggests that a single BDD test would have to account for every potential possibility of a large-scope feature, adding in dozens of "And" statements. But that just isn't true, and it completely misses the whole process.

In reality, a single feature could have dozens of scenarios. 

In the example above, we take the feature concept from Product (the story), and break down how the behavior should work into scenarios described with "given / when / then."  This will include the "happy path," "negative cases," "edge cases" and so forth.

It's the same thing a QA member would do in scoping out their test cases.

As QA, don't you write test cases?  Don't you need to show people what you actually signed off on?

Of course QA does.  Think of this as simply organizing QA test cases into these BDD Specifications.

Do BDD Specs Account For Every Edge Case and Potential Scenario?

No. Why? Because not everything can be thought of at the moment a test is written. The same is true of manual testing and exploratory testing: sometimes customer-facing edge cases are not considered by QA or Product, and we only find the error when a customer tries something previously not thought of.

Other times you, as a QA tester, may find an issue on a third pass of exploratory testing. Or you might think of a new idea to test after hours.

Some have written me saying this rigidity makes BDD fragile. But the tests shouldn't be rigid. They should be updated each time a new test is thought of or created: you simply add the new scenario to the BDD framework.


The BDD tests are a living document. As new tests are thought of, or as bugs are discovered, the tests are updated.

Whose Responsibility Is BDD?

This has come up in some personal correspondence. Some people feel this should be a dev task. But I disagree. Developers in an Agile work environment are kept constantly busy with shorter turnaround times to release. They won't have the time to plan development, code, write unit tests and then automate the front end. Nor should they.

The automation should be maintained by the QA team, who apply the same QA strategies they already use for breaking the UI, data capture and services.

Just because QA is writing the automation framework doesn't mean QA changes strategies.

This is another mistake people who are resistant to automation are making. I approach my automation specifications the same way I approach manual testing. Nothing changes. I'm writing the test cases, just in a BDD spec way.

Why BDD for Automation?

You can certainly do automation without BDD. The guys over at TrueCar and Beachbody are using Webdriver and Python... no BDD-driven testing at Beachbody (and I'm guessing TrueCar isn't using BDD either). Certainly BDD isn't required... so what's so great about BDD?

What's great about BDD is that we have a bridge between the business units and the automation code. When the business unit (Product Managers, Directors, CEO) sees the tests written out, they might say, "OK, this Scenario is good, but is there any Scenario covering a case where a user clicks submit twice? We've had issues with that in the past." Maybe they find a problem they have experience with from the business side. Or maybe they see the test is really testing a feature in a way they didn't intend it to be designed.

Those who are the stakeholders can quickly and easily adjust the tests, as the tests are human language and not code.

Once you have the BDD specifications with all the Given / When / Thens, and business has signed off on them, you know exactly what to write code for. You won't write code for an inappropriate test.

You also don't need to write an external test plan. All your test cases live alongside the code and read just like a test plan document. Every test case is defined within the specifications themselves.

The BDD specifications become my test plan.

Last year, I met with a guy who worked for M-Go. He was kinda surprised I didn't use a big, lengthy Word doc with chapter headings specifying what will and won't be tested, etc.

My reaction to him was, "I thought you guys were agile..." In an Agile environment you are releasing code every week to two weeks. There's not enough time for QA to write out a formal 20-page test document. Nor will anyone have time to read it.

A better approach in an Agile environment is to put the BDD specifications (the test cases) into the user stories themselves. So in Jira (or whatever is used for story/bug tracking), add the BDD scenarios to cover all the test cases.

These tests then become automation tests as well. They are simply copied out to a file that will be used in automation (i.e. copying the Given/When/Thens to a feature file in Cucumber).

Example of BDD Specifications with code
Feature: Smoke check each configured browser
  Scenario: Go to google.com and search for the term "Cucumber"
    Given a user is at google.com
    When they search for the term "Cucumber"
    Then google responds with results

In a separate step definitions file within Cucumber would be the Ruby code needed to automate these steps:
require 'watir-webdriver'   # usually loaded once in features/support/env.rb

Given /^a user is at google\.com$/ do
  @browser = Watir::Browser.new :ff   # :ff is the Firefox alias
  @browser.goto "http://www.google.com"
end

When /^they search for the term "([^"]*)"$/ do |term|
  search_box = @browser.text_field(:name => 'q')
  search_box.set term
  search_box.send_keys :enter
end

Then /^google responds with results$/ do
  # Wait for a result that highlights the term; this raises a timeout error if none shows up.
  @browser.div(:id => "search").em(:text => "Cucumber").wait_until_present
  @browser.close
end

Something like that. As long as you have a solid foundation of good and accurate tests, the code can just follow.
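To actually run it, assuming the scenario is saved to a hypothetical features/google_search.feature with the step definitions under features/step_definitions/, it's a one-liner from the project root:

cucumber features/google_search.feature

Cucumber matches each Given/When/Then line against the step definition regexes and executes the Ruby underneath, so the human-language file and the automation code stay in lockstep.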

What people like James Bach are not following is that whether you are automating or manually testing, you need to have a solid testing foundation. He's just positing a scenario where the automation strategy has no solid foundation.

Isn't This a Time Sink?

Not at all. Once you get used to writing tests in BDD fashion, you are actually doing 50% of the automation work up front! As you automate, you're going through the UI and finding new ways and new ideas to break the application.

I've seen SDETs write better tests than QA Engineers! Mainly because, in following this system, they discover all kinds of little problems. You end up taking the UI a step at a time.

By the time you are done automating it, you should have also covered it in one browser (i.e. manually tested it). Once it's automated, you can swap out the browser type and have it re-run the tests in a multitude of browsers.
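Here's a minimal sketch of that swap, assuming a Cucumber support file and an environment variable (BROWSER is my own name for it, not anything standard):

# features/support/env.rb -- hypothetical sketch
require 'watir-webdriver'

Before do
  # Pick the browser from an environment variable; default to Firefox.
  @browser = Watir::Browser.new((ENV['BROWSER'] || 'ff').to_sym)
end

After do
  @browser.close
end

Then BROWSER=chrome cucumber features runs the exact same scenarios in Chrome instead of Firefox.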

Does This Replace Manual Testers or Exploratory Testers?

No, it does not. But it greatly adds to the total quality.

Manual testers will fall prey to test blindness. At eHarmony we had a registration system that we called the "RQ," or Relationship Questionnaire. At one time that questionnaire was over 300 questions long. We also had it in multiple territories, and it was different in each territory. Running it by hand would take most people about 10 minutes per territory... and with 6 territories, that's an hour of manual regression just on registration! What if Product wants us to cover more than one browser... say 5 browsers? That's 5 hours of testing simply to cover registration. So you break the testing out to multiple testers and get that down to an hour... great, but you're consuming 5 testers to do it.

Testers who have to run and re-run and re-re-run these territories become blind to small bugs, and cut corners, missing bigger ones. It's just a human reality, especially when the pressure is on to hit a release date.

Automating something like that saves tons of time and ensures a basic smoke test. Granted, we didn't cover every nuance and change in the 300 questions. But we could verify that in general it's up and running, which is the same test people were doing manually.

Automation frees the tester to perform more deep dives and exploratory testing and not be consumed with basic regression.

30 December 2012

Why QA Automation Is Needed

I have some detractors... it's my fault; I drew them with my own detraction of a blog post I saw elsewhere, and they came to repay the favor, I suppose. The detractors come in varying degrees of anti-automation philosophy.

Rather than talk theory or throw pseudo data around, I want to give a real-life case study: how automation and BDD done right can save the day for QA.

Some back story:

I started out as a front end dev at Warner Bros. many years ago, and switched to QA testing while I was there. After that, I went into QA engineering at Yahoo and elsewhere. Yahoo was a very technical company, as was eHarmony. At eHarmony I learned a lot about service architecture, code, deployments, automation, NoSQL solutions and a variety of other things.

I started non-technical, and ended up writing my own code, building deployment strategies, creating automation frameworks, etc.  It's been an interesting journey and I am not afraid to "roll up my sleeves and do the dirty work of manual testing."

I know there are these characters who run around saying they are automation QA and refuse to do any manual testing. That's not cool. But at the same time, QA needs to have the focus to write code, and to test code.

After I left eHarmony, I got a job at a company that had no QA team at all.  I took on the role of QA Lead.  During the interview, I was asked, "How would you approach a problem, where there is no QA?"

I answered that with, "I would treat it as an automation problem. First I would quickly build out an automation framework, and then get as much of the code base captured into it as I could, so that I can handle quick turnarounds on regression."

That's my honest answer, and it has greatly benefited the company as well as myself.

Automation Goals

I had the automation framework up and running by the end of Day 1. By the end of the first week, I had a local install of Jenkins running and working with the automation tests.  By the end of week 2, I had the entire sprint coverage automated.

Automation detractors tend to say that a focus on automation takes away from manual testing.  But it doesn't have to. If done right, it should only enhance the manual testing and exploratory testing.  In fact, manual and exploratory testing should be done within the automation process itself.

Example:

When I started my most recent job, I looked at their QA situation.  Knowing little of their application, I started with this process:
  1. I got Cucumber up and running
  2. I went through the previously written tests from the Business Unit and met with them to get an idea of the application workflow.
  3. I translated their current sprint's tests into Given / When / Thens that I would later put into Cucumber.  They had a classic step-by-step test plan (1. do this, 2. now do this, 3. do this... 4. you get this result), and I converted all of that into BDD (I've sketched an example of this conversion right after this list).
  4. Back in Cucumber, I pasted the Given / When / Then scenarios into the feature files.
  5. Then I looked at the UI I would be testing.  For each step of the G/W/Ts I would go through it in the UI.  I would manually test it (manually running the test plan itself), and then get ideas for new tests (exploratory testing).  As I got new ideas, I added more G/W/Ts.
  6. Finally, I would stitch the Gherkin language elements (given/when/then) to the actual element IDs in the UI.
  7. I wrote out sign off strategies and best practices
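To give a feel for step 3, here's a made-up example of the conversion (not one of their actual tests). A classic case like:
  1. Log in and go to the profile page
  2. Click "Edit Profile"
  3. Clear the name field and hit Save
  4. You get a "Name is required" error
becomes:
  Scenario: Saving a profile with an empty name is rejected
    Given a logged-in user on the Edit Profile page
    When they clear the name field and save
    Then they see a "Name is required" error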
By the second week, I had:
  1. Built out the Automation Framework
  2. Had all the previous sprint work in automated tests
  3. Configured the tests to run via Jenkins
  4. Triggered Jenkins to run parallel tests in multiple browsers and began looking into future Grid solutions.
  5. Filed bugs/defects into their process and gave input on developing the processes they had in place.
This gave me the flexibility to kick off an ad-hoc regression in all browsers.
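The Jenkins side of that can be as simple as one shell build step per browser. A hypothetical sketch, assuming a BROWSER environment variable my framework reads (my own convention, not a standard) and Cucumber's built-in junit formatter:

BROWSER=firefox cucumber --format junit --out results/firefox
BROWSER=chrome cucumber --format junit --out results/chrome

Pointing Jenkins' "Publish JUnit test result report" option at results/**/*.xml then gives pass/fail trends per build.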

Does that mean I'll only rely on this automation in the future? Certainly not! I continue to manually run through the site... a good automation engineer has to, in order to automate the stories. The team comes up with new stories every two weeks. That's more work: a) write given/when/then test plans, b) manually test, c) automate to cover future regression. It can seem daunting if you think of it as separate processes, but the way I do it, it's all one process. It's all being done at the same time!

We have a lot of future goals, like moving Jenkins to a server and integrating the test runs with each dev commit.  But for now, the QA side greatly helps me, being the only QA representative in the company. 

If I were just doing manual testing, sure, I could breeze through their sprints, doing the testing in multiple browsers and spending my remaining time on exploratory testing... But where would that leave us later on? What happens when it's crunch time and I really need help? When I need to regress all our past sprint work, and then cover a ton of new tests turned over to QA late in the life cycle?

Regression is the bane of manual QA. It becomes a chore, and it wears down the QA resources. I've seen it create what I call "test blindness" in manual testers. At my previous job, I saw testers hit the same test they've seen a dozen times, in 5 browsers or more... and they either just cut corners, or become blind to an obvious error.

By adding an Automated UI regression we greatly increase the quality of the deployments.  Just as adding Unit Tests greatly increases code quality.

Approaching the Automation

Approaching automation should be done with the same QA mindset as approaching manual testing. You have a new feature (say, a web form that captures data). You think, "OK, this should work by inputting data and hitting save..." Sure, but then you think, "What happens if I pass in French, special characters, symbols, Portuguese, or Korean? How does it handle white space?" These same exploratory questions are asked and tested during automation test creation.
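A Cucumber Scenario Outline is a natural way to capture that batch of questions. A sketch against a made-up form:

  Scenario Outline: The capture form saves tricky input
    Given a user on the data capture form
    When they enter "<value>" and hit save
    Then the form shows "<value>" as the saved value

    Examples:
      | value         |
      | Crème brûlée  |
      | 한국어         |
      | !@#$%^&*()    |
      | O'Brien-Smith |

One outline, and every new input idea is just another table row.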

As these are captured, the tests can be run and re-run, freeing the QA person up to look into more tests for the Sprint, other compatibility issues, other exploratory tests, etc.

What Works

To do this effectively, a company needs to hire a QA lead who knows how to set up automation as well as give QA the time and resources necessary to accomplish this.

At one previous job, I was called onto emergencies almost every day. It was so insane that I couldn't do my day job, let alone find any time for automation. The heads of the company would say "Automation is our priority" till it wasn't (which was every other week), and have me doing some manual testing of a P0 bug fix or an urgent requirement change.

QA needs to have focus.  If you have to have a separate team for automation (in a highly political organization) then that's a solution.  But where I'm at now, they give me respect and let me lead this process.  That's what really has worked for me.

To Sum It All Up

You can't rely on Automation to do everything.
You can't rely on Manual Testers to catch it all.

For me, I found a bridged solution where doing one process creates both automation and manual testing.

27 December 2012

Jenkins Fixed

I've had this on and off again problem with Jenkins running my tests.

Turns out that my local problem (a Windows install) was caused by the Jenkins Windows service not being allowed to run applications in the foreground!

That's why the browsers never came up when it was running, and why it would sometimes hit permission errors and time out!

So, to resolve it, I found this:
http://stackoverflow.com/questions/9618774/jenkins-selenium-gui-tests-are-not-visible

basically, you first:

1. Kill the Jenkins service running on Windows.
2. Run java -jar jenkins.war from the Jenkins install folder (i.e. C:\Program Files (x86)\Jenkins).

That will start the server, and it will kick off the browsers when running the tests!
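In concrete commands, assuming the default service name the Windows installer registers (just "Jenkins"), the whole fix looks like:

net stop Jenkins
cd "C:\Program Files (x86)\Jenkins"
java -jar jenkins.war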

16 December 2012

Hit a wall with Jenkins

I hit a wall.

I got the CI all set up with Jenkins, my GitHub repo and my local git repo.  However, when Jenkins tries to run my tests, it throws an error.  First it was on Ubuntu, which threw this error:
  unable to obtain stable firefox connection in 60 seconds
 
So I went to my local Windows box to try and debug.  I can run rake run locally and it works perfectly.  But when Jenkins runs it, I get this error on the first Cucumber test:
 Given a user clicks the features tab                     # features/step_definitions/features_tests.rb.rb:1
      Timeout::Error (Timeout::Error)
      C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/net/protocol.rb:146:in `rescue in rbuf_fill'
      C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/net/protocol.rb:140:in `rbuf_fill'
      C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/net/protocol.rb:122:in `readuntil'
      C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/net/protocol.rb:132:in `readline'
      C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/net/http.rb:2562:in `read_status_line'
      C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/net/http.rb:2551:in `read_new'
      C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/net/http.rb:1319:in `block in transport_request'
      C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/net/http.rb:1316:in `catch'
      C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/net/http.rb:1316:in `transport_request'
      C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/net/http.rb:1293:in `request'
      C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/net/http.rb:1286:in `block in request'
      C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/net/http.rb:745:in `start'
      C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/net/http.rb:1284:in `request'
      ./features/step_definitions/features_tests.rb.rb:3:in `/^a user clicks the features tab$/'

I looked in the output and saw that Jenkins added --profile default.
Thinking this was an issue with Rake, I removed the rake call and just used the command cucumber features... but got the same result.

So it's not the OS, it's not the Rakefile... not the webdriver, not the browser... I've seen others in the same spot, but with no resolution...

14 December 2012

Getting Cucumber working in the Amazon Cloud with Jenkins

This took a bit of work, and a variety of online resources.

First, I used this tutorial to set up Amazon AWS cloud services for free, hooking it up to GitHub, getting rvm/ruby installed, etc.:
http://watirmelon.com/2011/08/29/running-your-watir-webdriver-tests-in-the-cloud-for-free/

Second, I had to do a git clone of my repo to the AWS box and a bundle install, so that all dependencies were loaded.
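Concretely, that step was just the usual pair of commands (the repo URL here is a placeholder):

git clone https://github.com/youruser/your-test-repo.git
cd your-test-repo
bundle install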

Third, I hit an issue with cucumber-rails... I used this resource to resolve it:
http://datacodescotch.blogspot.com/2011/11/warning-cucumber-rails-required-outside.html

Fourth, I had an issue with there being no JS executor on the Ubuntu cloud box, so I installed node.js to get past that:
https://github.com/joyent/node/wiki/Installing-Node.js-via-package-manager

Finally, the tests ran but threw webdriver errors... I still needed to install webdriver... :)