31 December 2012

QA Using BDD

I've been getting a lot of responses to a post I wrote regarding BDD and automation for QA solutions.

Several QA people have written me, or posted, that they feel this is not the role of QA and that developers should in fact maintain this... one clever gent even tried to bolster his point by claiming, "Well, it has Development in the BDD title itself!"

BDD - What Is It?

Yes, it is Behavior Driven Development.  But don't stop at the title.  What is BDD?  BDD is an extension of TDD that pushes for more specification.  It was created by Dan North.

Some dev teams do development (in the sense of customer facing development) using a BDD model, but where QA is concerned, the "development" aspect would most likely be automated test cases.

Since BDD has an entire aspect of organizing the tests, a tester doesn't need to "know code" to write the tests in a BDD fashion.  The "development" that QA would apply to this would be the code required to automate the test.  But the test itself is written in human language.

BDD Examples

I like to break BDD down into two parts:
  1. Test Planning/Shaping
  2. Coding

In the Test Planning and Shaping phase, the person is writing the human language tests:
Feature: Login Screen
Scenario: A user logging in with valid data
Given a registered user at the login screen
When they pass valid credentials
Then they are taken to their Dashboard page

Scenario: A user logging in with invalid data
Given a user at the login screen
When they pass invalid credentials
Then they are given an error message

Scenario: Repeated attempts at logging in with invalid data result in a lock of the account
Given a user at the login screen
When they pass invalid credentials
And repeatedly pass invalid credentials up to 3 times
Then their account is locked

....

That's how you scope out the specifications of a feature in BDD.  Anyone can do this.  At eHarmony, this was done by both the QA members and the Product Managers.  That's right, Product Managers also wrote BDD tests; since no code is needed to write these test features, anyone in the business or product units at a company can contribute.

Some fellow wrote a completely biased and unfair review of BDD and automation (you can check out his points here: http://www.satisfice.com/blog/archives/638).

The problem he has with BDD is that he scopes it unfairly.  He suggests that a single BDD test would have to account for every potential possibility of a large scope feature... adding in dozens of "And" statements.  But that just isn't true, and it completely misses the whole process.

In reality, a single feature could have dozens of scenarios. 

In the example above, we take the feature concept from Product (the story), and break down how the behavior should work into scenarios described with "given / when / then."  This will include the "happy path," "negative cases," "edge cases" and so forth.

It's the same thing a QA member would do in scoping out their test cases.

As QA, don't you write test cases?  Don't you need to show people what you actually signed off on?

Of course QA does.  Think of this as simply organizing QA test cases into these BDD Specifications.

Do BDD Specs Account For Every Edge Case and Potential Scenario?

No.  Why?  Because not everything can be thought of at the moment of writing a test.  It's just like with manual testing and exploratory testing - you could say the same thing about manual QA testers.  Sometimes customer facing edge cases are not considered by QA or Product, but we find the error when a customer tries something previously not thought of.

Other times you, as a QA tester, may find an issue in the third phase of exploratory testing.  Or you might think of a new idea to test after hours.

Some have written me saying this rigidity makes BDD fragile.  But the tests shouldn't be rigid.  They should be updated each time a new test is thought of or created - you simply add that new scenario to the BDD framework.


The BDD tests are a living document. As new tests are thought of, or as bugs are discovered, the tests are updated.

Whose Responsibility Is BDD?

This has come up in some personal correspondence.  Some people feel this should be a dev task.  But I disagree.  Developers in an Agile work environment are kept constantly busy with shorter turnaround times to release.  They won't have the time to plan development, code, write unit tests and then automate the front end.  Nor should they.

The automation should be maintained by the QA team, who apply their same QA strategies of breaking the UI, data capture and services.

Just because QA is writing the automation framework, doesn't mean QA changes strategies.

This is another mistake people who are resistant to automation are making.  I approach my automation specifications the same way I approach manual testing.  Nothing changes.  I'm writing the test cases, just in a BDD spec way.

Why BDD for Automation?

You can certainly do automation without BDD.  The guys over at TrueCar and Beachbody are using Webdriver and Python... no BDD driven testing at Beachbody (and I'm guessing TrueCar isn't using BDD either).  Certainly BDD isn't required... So what's so great about BDD?

What's great about BDD is that we have a bridge between the business units and the automation code.  When the business unit (Product Managers, Directors, the CEO) sees the tests written out, they might say, "OK, this Scenario is good, but is there any Scenario covering a case where a user clicks submit twice? We've had issues with that in the past."  Maybe they find a problem that they have experience with from the business side.  Or maybe they see the test is really testing a feature in a way they didn't intend it to be designed.

Those who are the stakeholders can quickly and easily adjust the tests... as the tests are human language and not code.

Once you have the BDD specifications with all the Given / When / Thens, and business has signed off on them, you know exactly what to write code for.  You won't write code for an inappropriate test.

You also don't need to write an external test plan.  All your test cases are covered in the specs and are written just like a test plan document.  Every test case is defined within the specifications themselves.

The BDD specifications become my test plan.

Last year, I met with a guy who worked for M-Go.  He was kinda surprised I didn't use a big, lengthy Word doc with chapter headings specifying what will be tested, what won't be tested... etc.

My reaction to him was, "I thought you guys were agile..."  In an Agile environment you are releasing code every week to two weeks.   There's not enough time for QA to write out a formal 20 page test document.  Nor will anyone have time to read it.

A better approach in an Agile environment is to put the BDD specifications (test cases) into the user stories themselves.  So in Jira (or whatever is used for story/bug tracking), add in the BDDs to cover all the test cases.

These tests then become automation tests as well.  They are simply copied out to a file that will be used in automation (i.e. copying the Given/When/Thens to a feature file in Cucumber).

Example of BDD Specifications with code
Feature: Smoke check each configured browser
  Scenario: Go to google.com and search for the term "Cucumber"
    Given a user is at google.com
    When they search for the term "Cucumber"
    Then google responds with results

In a separate file within Cucumber would be the code needed to automate these steps:
Given /^a user is at google.com$/ do
  @browser = Watir::Browser.new(:ff)
  @browser.goto "http://www.google.com"
end
When /^they search for the term "([^"]*)"$/ do |term|
  @browser.text_field(:name => 'q').set term
  @browser.send_keys :enter
end
Then /^google responds with results$/ do
  # wait for the results container, then check it mentions the search term
  @browser.div(:id => "search").wait_until_present
  @browser.div(:id => "search").em(:text => "Cucumber").should exist
  @browser.close
end

Something like that.  I'm not sure the final assertion would hold up exactly as written - it's close to pseudo code... but as long as you have a solid foundation of good and accurate tests, the code can just follow.

What people like James Bach are not following is that whether you are automating or manually testing, you need to have a solid testing foundation.  He's just positing a scenario where the automation strategy has no solid foundation.

Isn't This a Time Sink?

Not at all.  Once you get used to writing tests in BDD fashion, you are actually doing 50% of the automation work up front!  As you automate, you're going through the UI and finding new ways and new ideas to break the application.

I've seen SDETs write better tests than QA Engineers!  Mainly because, in following this system, they discover all kinds of little problems.  You end up taking the UI a step at a time.

By the time you are done automating it, you should have also covered it in one browser (i.e. manually tested it).  Once it's automated, you can swap out the browser type and have it re-run the tests in a multitude of browsers.
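For example, here's a minimal sketch of that swap using an environment variable (the BROWSER variable name is my own convention here, not anything standard):

require 'watir-webdriver'

# read the target browser from an environment variable, defaulting to Firefox;
# run with e.g. BROWSER=chrome to re-run the same tests in Chrome
browser_type = (ENV['BROWSER'] || 'firefox').to_sym
@browser = Watir::Browser.new(browser_type)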

Does This Replace Manual Testers or Exploratory Testers?

No, it does not.  But it greatly adds to the total quality.

Manual testers will fall prey to test blindness.  At eHarmony we had a registration system that we called the "RQ" or Relationship Questionnaire.  At one time that questionnaire was over 300 questions long.  We also had this questionnaire in multiple territories, and it was different in each territory.  Running it by hand would take most people about 10 minutes per territory... and with 6 territories, that's an hour of manual regression just on registration!  What if Product wants us to cover more than one browser... say 5 browsers?  That's 5 hours of testing simply to cover registration.  So you break the testing out to multiple testers and get that down to an hour... great, but you're consuming 5 testers to do this.

Testers who have to run and re-run and re-re-run these territories become blind to small bugs, and cut corners - missing bigger ones.  It's just a human reality, especially when the pressure is on to hit a release date.

Automating something like that saves tons of time and ensures a basic smoke test.  Granted, we didn't cover every nuance and change in the 300 questions.  But we could verify that in general it's up and running - the same test people were doing manually.

Automation frees the tester to perform more deep dives and exploratory testing and not be consumed with basic regression.

30 December 2012

Why QA Automation Is Needed

I have some detractors... It's my fault; I drew them with my own detraction of a blog post I saw elsewhere... and they came to repay the favor, I suppose.  The detractors come in varying degrees of an anti-automation philosophy.

Rather than talk theory or throw pseudo data around, I wanted to give a real life case study: how automation and BDD done right can save the day for QA.

Some back story:

I started out as a front end dev many years ago at Warner Bros., and switched to become a QA tester while I was there.  After that, I went into QA engineering at Yahoo and elsewhere.  Yahoo was a very technical company, as was eHarmony.  At eHarmony I learned a lot about service architecture, code, deployments, automation, NoSQL solutions and a variety of other things.

I started non-technical, and ended up writing my own code, building deployment strategies, creating automation frameworks, etc.  It's been an interesting journey and I am not afraid to "roll up my sleeves and do the dirty work of manual testing."

I know there are these characters who run around saying they are automation QA and refuse to do any manual testing.  That's not cool.  But at the same time, QA needs to have the focus to write code, and test code.

After I left eHarmony, I got a job at a company that had no QA team at all.  I took on the role of QA Lead.  During the interview, I was asked, "How would you approach a problem, where there is no QA?"

I answered that with, "I would treat it as an automation problem.  First I would quickly build out an automation framework, and then get as much of the code base captured into it as I could, so I could handle quick turnarounds on regression."

That's my honest answer, and it has greatly benefited the company as well as myself.

Automation Goals

I had the automation framework up and running by the end of Day 1. By the end of the first week, I had a local install of Jenkins running and working with the automation tests.  By the end of week 2, I had the entire sprint coverage automated.

Automation detractors tend to say that a focus on automation takes away from manual testing.  But it doesn't have to. If done right, it should only enhance the manual testing and exploratory testing.  In fact, manual and exploratory testing should be done within the automation process itself.

Example:

When I started my most recent job, I looked at their QA situation.  Knowing little of their application, I started with this process:
  1. I got Cucumber up and running
  2. I went through the previous written tests from the Business Unit and met with them to get an idea of the application workflow.
  3. I translated their current sprint's tests into Given / When / Thens that I would later put into Cucumber.  They had a classic step by step test plan (1. do this, 2. now do this, 3. do this... 4. you get this result).  I converted all that into BDD.
  4. Back in Cucumber, I pasted the Given / When / Then scenarios into the feature files.
  5. Then I looked at the UI I would be testing.  For each step of the G/W/T I would go through it in the UI.  I would manually test it (manually running the test plan itself), and then get ideas for new tests (exploratory testing).  As I got new ideas, I added more G/W/Ts.
  6. Finally, I would stitch the Gherkin language elements (given/when/then) to the actual element IDs in the UI (see the sketch after this list).
  7. I wrote out sign off strategies and best practices
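Here's the sketch mentioned in step 6 - a minimal example of stitching a Gherkin step to element IDs (the IDs and values below are hypothetical, not from the actual application):

When /^they fill in the signup form$/ do
  # "email" and "submit-btn" are assumed IDs agreed on with the developer
  @browser.text_field(:id => "email").set "tester@example.com"
  @browser.button(:id => "submit-btn").click
end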
By the second week, I had:
  1. Built out the Automation Framework
  2. Had all the previous sprint work in automated tests
  3. Configured the tests to run via Jenkins
  4. Triggered Jenkins to run parallel tests in multiple browsers and began looking into future Grid solutions.
  5. Provided bug/defects into their process and gave input into developing out the processes they had in place.
This gave me the flexibility to kick off an ad-hoc regression in all browsers.

Does that mean I'll only rely on this automation in the future? Certainly not!  I continue to manually run through the site... a good automation engineer has to, in order to automate the stories.  The team comes up with new stories every two weeks.  That's more work to a) write given/when/then test plans, b) manually test, and c) automate to cover future regression.  It can seem daunting if you think of it as separate processes, but the way I do it, it's all one process.  It's all being done at the same time!

We have a lot of future goals, like moving Jenkins to a server and integrating the test runs with each dev commit.  But for now, the QA side greatly helps me, being the only QA representative in the company. 

If I were just doing manual testing, sure, I could breeze through their sprints, doing the testing in multiple browsers and spending my remaining time exploratory testing... But where would that leave us later on?  What happens when it's crunch time and I really need help - when I need to regress all our past sprint work and cover a ton of new tests turned over to QA late in the life cycle?

Regression is the bane of manual QA.  It becomes a chore, and it wears down the QA resources.  I've seen it create what I call "test blindness" in manual testers.  At my previous job, I saw testers hit the same test they'd seen a dozen times, having to run each test in 5 browsers or more... and they either just cut corners, or become blind to an obvious error.

By adding an Automated UI regression we greatly increase the quality of the deployments.  Just as adding Unit Tests greatly increases code quality.

Approaching the Automation

Approaching automation should be done with the same QA mindset as approaching manual testing.  You have a new feature (say a web form that captures data).  You think, "OK, this should work by inputting data and hitting save..."  Sure, but you also think, "What happens if I pass in French, special characters, symbols, Portuguese, or Korean? How does it handle white space?"  These same exploratory questions are also asked and tested during automation test creation.
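For example, here's a sketch of how those questions could land in a scenario outline (the form and step wording is hypothetical):

Scenario Outline: The data capture form handles international and edge-case input
  Given a user at the data capture form
  When they save the value "<input>"
  Then the value is stored and displayed correctly

  Examples:
  |input|
  |Français àéî|
  |한국어|
  |Criação|
  |!@#$%^&*()|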

As these are captured, the tests can be run and re-run, freeing the QA person up to look into more tests for the Sprint, other compatibility issues, other exploratory tests, etc.

What Works

To do this effectively, a company needs to hire a QA lead who knows how to set up automation as well as give QA the time and resources necessary to accomplish this.

At one previous job, I was called onto emergencies almost every day.  It was so insane that I couldn't do my day job - let alone find any time for automation.  The heads of the company would say "Automation is our priority" till it wasn't (which was every other week) and have me doing some manual testing of a P0 bug fix, or an urgent requirement change.

QA needs to have focus.  If you have to have a separate team for automation (in a highly political organization) then that's a solution.  But where I'm at now, they give me respect and let me lead this process.  That's what really has worked for me.

To Sum It All Up

You can't rely on Automation to do everything.
You can't rely on Manual Testers to catch it all.

For me, I found a bridged solution where doing one process, creates both Automation and Manual testing. 

27 December 2012

Jenkins Fixed

I've had this on and off again problem with Jenkins running my tests.

Turns out that my local problem (a Windows install) was related to the Jenkins Windows install not allowing the Jenkins service to run applications in the foreground!

That's why the browsers never came up when it was running.  Sometimes it would hit permission errors and time out!

So, to resolve it, I found this:
http://stackoverflow.com/questions/9618774/jenkins-selenium-gui-tests-are-not-visible

Basically, you first:

KILL the Jenkins service running on Windows.

Then run java -jar jenkins.war from the Jenkins install folder (i.e. C:\Program Files (x86)\Jenkins).

That will start the server, and it will kick off the browsers when running the tests!

16 December 2012

Hit a wall with Jenkins

I hit a wall.

I got the CI all set up with Jenkins, my GitHub and my local git repo.  However, when Jenkins tries to run my tests, it throws an error.  First it was on Ubuntu, which threw the error: "unable to obtain stable firefox connection in 60 seconds"
 
So I went to my local Windows box to try and debug.  I can run rake run locally and it works perfectly.  But when Jenkins does it, I get this error on the first Cucumber test:
 Given a user clicks the features tab                     # features/step_definitions/features_tests.rb.rb:1
      Timeout::Error (Timeout::Error)
      C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/net/protocol.rb:146:in `rescue in rbuf_fill'
      C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/net/protocol.rb:140:in `rbuf_fill'
      C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/net/protocol.rb:122:in `readuntil'
      C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/net/protocol.rb:132:in `readline'
      C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/net/http.rb:2562:in `read_status_line'
      C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/net/http.rb:2551:in `read_new'
      C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/net/http.rb:1319:in `block in transport_request'
      C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/net/http.rb:1316:in `catch'
      C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/net/http.rb:1316:in `transport_request'
      C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/net/http.rb:1293:in `request'
      C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/net/http.rb:1286:in `block in request'
      C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/net/http.rb:745:in `start'
      C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/net/http.rb:1284:in `request'
      ./features/step_definitions/features_tests.rb.rb:3:in `/^a user clicks the features tab$/'

I looked in the output and saw that Jenkins added --profile default.
Thinking this was an issue with Rake, I removed the call to rake and just used the command cucumber features... but got the same result.

So it's not the OS, it's not the Rakefile... not the webdriver, not the browser... I've seen others in the same spot, but with no resolution...

14 December 2012

Getting Cucumber working in the Amazon Cloud with Jenkins

This took a bit of work, and a variety of online resources.

First I used this tutorial to set up AWS Amazon cloud services for free, hooking it up to github, getting rvm/ruby installed, etc.:
http://watirmelon.com/2011/08/29/running-your-watir-webdriver-tests-in-the-cloud-for-free/

Second, I had to do a git clone of my repo to the AWS box and do a bundle install so that all dependencies were loaded.

Third, I hit an issue with cucumber-rails... I used this resource to resolve it:
http://datacodescotch.blogspot.com/2011/11/warning-cucumber-rails-required-outside.html

Fourth, I had an issue with no JS executor on the Ubuntu cloud box... so I installed node.js to get past that:
https://github.com/joyent/node/wiki/Installing-Node.js-via-package-manager

Finally, the tests ran, but got webdriver errors... needed to install webdriver... :)

29 November 2012

Python and SST Frameworks


I updated some tests here to cover a website navigation.  Using SST was a bit more difficult than using Watir, mainly because SST doesn't have a lot of coverage for page elements that have no IDs... so you have to get more creative.

For example, it doesn't have API methods to handle clicking on a link that has only a class attribute.  To get around that, you have to look for the text itself (which is usually fragile) or use some other method to find the class and then click on it.

So here's some examples of me using SST with Python:
https://github.com/wbwarnerb/Python_Beachbody/blob/master/Pages/Home.py

25 November 2012

Getting a Python Web Automation Framework going

So far I've been able to get GEB (Groovy based web automation) up and running at home (although it was very difficult), I've been able to get Cucumber up and running on a web automation stack (very easy), and tonight I think I got a Python stack up and running.

Python was a bit tricky.  Although not as easy to set up as Cucumber, it certainly isn't as difficult as GEB.

Some pitfalls with Python:
  1. You can't easily use Python 3.*.  Due to compatibility issues with selenium webdriver, Python 3 isn't really supported.  I did see some 3rd party scripts that try to bridge that compatibility, but it looked like a dangerous path to go down.
  2. You need to install Python 2.7.3.
  3. After Python, you'll need to install setuptools.  If on Windows, setuptools has a nice Windows installer.  This is the prerequisite to get PIP (a Python package installer) up and running.
  4. You will need a package installer (i.e. PIP) - if you're on Windows, PIP's website doesn't really offer much help.  Windows users have to download https://raw.github.com/pypa/pip/master/contrib/get-pip.py directly, since Windows doesn't have curl natively.
  5. Once you have get-pip.py, you just run the thing... then pip.exe will be in your python*/Scripts folder.
  6. Run pip install sst, OR use an IDE (I'm using JetBrains' PyCharm) to point to sst and install it.
Once SST is installed and part of your project, you can write a simple test like this:
from sst.actions import *

go_to('http://www.ubuntu.com/')
assert_title_contains('Home | Ubuntu')
close_window()


Be sure to check the SST docs for referencing their actions, but they are pretty straightforward and similar to native webdriver and selenium actions: http://testutils.org/sst/actions.html

Now you can start building out your framework... I've heard of some BDD Python based layers (Nose and Lettuce) out there, but they aren't supported by my IDE (JetBrains sees little community backing for BDD in Python), so I haven't gone that route.

Uploaded a sample of this code at github:
https://github.com/wbwarnerb/Python_Beachbody

24 November 2012

Python 3 and webdriver incompatible?

I've been looking at this idea of using Python with webdriver in an automation framework.  It seems, though, that webdriver doesn't support Python 3.

This is pretty frustrating.  I'm mostly interested in languages from the point of view of automation.  I pick up a language, download the latest stable version, and find there is no support for it from any web automation framework... not even webdriver.

So it looks like Python must be running version 2.* and not 3.* in order to access webdriver.


python basics - whitespace and indentations

This might be obvious to some - but I didn't know about indentation.  In Python, indentation within a function has to be consistent.  So this will work:

def main():
    print("something to say")
    print("is something to say")

but this won't -
def main():
    print("something to say")
  print("is something to say")

But this will work:
def main():
    print("something to say")
print("is something to say")

The first example does indentation as Python expects within a function.
The second example fails because the indentation is not consistent.
The third example works, but it will first run the line print("is something to say") and then execute the line in the function, if we have the line:
if __name__ == "__main__": main()
This is because the last print is at the same level as the function def, so it's run first when the file loads.  Either way, the third case is valid.

23 November 2012

python - calling functions before they are defined

Been picking up some Python, and I noticed that you can define a function in a non-linear way... meaning the call to a function can come before the function's definition in the file:

def main():
    function2()

def function2():
    print("this is the second function.")

if __name__ == "__main__": main()

That last line says to run main() after the whole file has been read.  So function2() gets defined before main() is ever called; by the time main runs, it knows what function2() is.

11 November 2012

My Rails Comment Application

I'm taking some courses on Rails.  I built out a web application that uses some 3rd party libraries like Devise (for authentication).  The web application is set up to let users create accounts and post status updates.  I went ahead and published the early version of the app with Heroku:
http://infinite-waters-3206.herokuapp.com/

heroku rake db:migrate

When publishing a database to Heroku, remember to always run heroku rake db:migrate, or else the Heroku Rails app won't work.

09 November 2012

API Testing

API testing doesn't involve a GUI, but the testing does come down to the same concepts of inputs and outputs: you provide a specific input and you get the expected output.  Just like with any type of testing, the methodology is the same.  You would verify the happy path, then try to break the API (passing invalid parameters, too many parameters, etc.), verifying that errors are thrown gracefully, that databases are not updated with empty rows or duplicate data... and so on.

So what is an API?

An API is an interface you can use in development; it stands for Application Programming Interface.  Think of it as a building block of code.  Rather than writing that code each time, you have a block of code that can take parameters and use those parameters to do something.  You could pass in parameters to create a user in a db, or perform a query and return data as JSON, or handle authentication (such as Facebook's login APIs).  APIs can be public or private.

Verifying An API

Some examples to verify in API testing could be:
  1. What does the API return? If an API is designed to return a result (like data), then you would want to know that a) there is something returned, b) it's what's expected, c) it fits the boundaries of what's expected, and d) errors are handled correctly
  2. Event Listener Testing: If the API is emitting an event, you would need access to an event listener or event log to verify that the event is a) captured correctly, b) in the expected format, c) carrying the correct payload, and d) errors are handled correctly
  3. Modifying Databases: If the API is making a DB insert or update, you would want to verify that a) the db update/insert/delete occurs, b) the data that is changed is correct, and c) errors are handled correctly.
Early on at eHarmony I had the chance to write a simple API for testing.  We were designing a new automation framework, and I needed some code to subscribe users.  This way I could pass in a userid and it would subscribe that user for me programmatically.  This was not a production API, just something used in testing.  We had test credit card numbers that are not valid in production, but they let us exercise the subscription flow.  So instead of driving the UI on the front end each time to run a subscription (which might take 2 minutes per test just to sub a user via the GUI), I would subscribe a user using this simple API.

What this API was doing was taking the userid as a parameter and passing it into a SQL insert to our test db.

Testing it amounted to:
  1. The Happy Path: Does the data insert/update for a userid reach the db?  New users would be inserted into the db; existing users would be updated.
  2. Validate the data inserted: are all the columns expected to be updated, updated?  Is the user flagged as a subscriber?
  3. Error Cases: What happens if an empty string is passed in?  It should error gracefully - does it?  What happens if the user is already a subscriber?  That would mean the insert would fail for new users; updates would pass.
  4. Events: In this case we didn't want to trigger any events.  The call should bypass the events, so just double check that no events are recorded in the event service logs.
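For illustration, here's a hypothetical sketch of what driving that kind of test-only subscribe API from a step definition could look like (the host, endpoint and parameter names are made up, not the actual service):

require 'net/http'
require 'uri'

Given /^user (\d+) is subscribed via the test API$/ do |userid|
  # test-environment-only endpoint; never pointed at production
  uri = URI("http://test-env.example.com/testapi/subscribe?userid=#{userid}")
  response = Net::HTTP.get_response(uri)
  # a valid userid should come back with a success code
  response.code.should == "200"
end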

How To Get Into API Testing

There are a lot of ways a person can pick up API testing on their own.  If you can write code in Ruby/Rails, Groovy, Java, etc., you could build your own API and then test it!

But... there are easier ways.  You could find an open, public API and work with it.  There are a lot of online APIs that take data and return some sort of result to you.  I even found one for the card game, Magic the Gathering:
http://daccg.com/ajax_ccgsearch.php?cardname=jace%20beleren
You can simply set up a form to pass in a parameter here (URL encoded of course) and get back some JSON with the result!
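For instance, a quick sketch using Ruby's open-uri against that endpoint (the shape of the JSON it returns is an assumption on my part):

require 'open-uri'
require 'json'

# pass the card name as a URL encoded parameter and parse the JSON response
response = open("http://daccg.com/ajax_ccgsearch.php?cardname=jace%20beleren").read
result = JSON.parse(response)

# print whatever structure comes back so we can inspect it
puts result.inspect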

These are of course online APIs, many of them RESTful services with endpoints you can hit via a URI.  But it is easy to get started right now!

08 November 2012

Speed of Cucumber

By day, I work with a framework called GEB, written in Groovy.  By night, I work on my own personal projects in Ruby/Rails and Cucumber.  For test frameworks, I build them in Cucumber.  I've built about 7 different test frameworks in Cucumber.  The latest was on a whim.  After dinner last night, I went to www.beachbody.com, the makers of the P90 system, and I decided I'd try to automate their site.

By the end of the night I had over 100 tests for their navigation.   Not only did I output 100 tests, but these tests work in any browser. 

Here's the tests I wrote last night:
https://github.com/wbwarnerb/cucumber_beachbody/blob/master/features/nav.feature

And here's the code they exercise:
https://github.com/wbwarnerb/cucumber_beachbody/blob/master/features/step_definitions/nav.rb

04 November 2012

Adding Randomness in Tests

Why Randomness?


When testing a UI with a large selection of dynamic links, it can be helpful to get an idea of the break points in the tests, as well as in the code.  In other words, if you only write tests for specific points, you may miss situations and scenarios that were not thought through.

My Google Example

I used Google.  I created some tests that:
  • go to google.com
  • enter a search term
  • click a random element on the page
To help with this example, I created a few tests in order of complexity.  The first test simply goes to google, enters a term from a table and then gets results.  At the results list, it clicks the first result in the list.

Google's results tend to be hard to grab directly.  They have a list of results, but to get to them (using Watir notation) I came up with this (I also had to put a wait_until_present before it, to wait for the element ol#rso to load).  ol#rso is the ordered list of results, so I target it like this:
 @browser.ol(:id, "rso").li(:index=>0).div.h3.a.click

I'm grabbing all the li's and treating them like a list or array.  I use index 0 (Watir notation) to find the first result row.  Then after getting it, I grab the chain of div.h3.a and click it.

Adding Randomness

So far there's no randomness here, but the first test gets us going with results and clicking through on them.  Now for the randomness.

It's easy.  I counted the results per page, and settled on picking from 15 choices.  My solution was to do this:
  @results = rand(15)
  @browser.ol(:id, "rso").li(:index=>@results).div.h3.a.click


Basically it's the same thing as before, except I added an instance variable, @results.  rand(15) just comes up with a random integer from 0 up to 14.

So now when @results is passed in as an index value, it just picks a random row to click.

More Complexity

But let's say we want to take it further and randomly pick a search result page, and then pick a random result on that page.

  @nav = rand(1..7)
  @browser.table(:id, "nav").tbody.tr.td(:index=>@nav).a.click


In this case I created an instance variable that picks a random number from a range of 1 to 7.  0 is not wanted, because by default the user is already on the first result page (0), so there's no reason to click on it.  We just want to click from 1 to 7.

After the results page is loaded, I rerun the logic to pick a random result from the page.
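Putting the two snippets together, the flow looks roughly like this (a sketch - the wait may need tuning):

  # pick a random results page from 1 to 7, skipping page 0 where we already are
  @nav = rand(1..7)
  @browser.table(:id, "nav").tbody.tr.td(:index=>@nav).a.click

  # once the new results page loads, pick a random result on it
  @browser.ol(:id, "rso").wait_until_present
  @results = rand(15)
  @browser.ol(:id, "rso").li(:index=>@results).div.h3.a.click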

Issues

I ran into some issues.  The first was with rerunning the random-result logic on the new page.  When I tried calling it again, it wouldn't wait properly.  So I had to recreate the same logic and call it again.

Built out a simple Web Service in Ruby/Rails

I've been getting back to Rails.

After reading through some tutorials and going through a variety of teaching material, I was able to put together a project:
https://github.com/wbwarnerb/restapi/

A lot of this was generated by rails scaffolding.  
But the core code I wrote is in:
https://github.com/wbwarnerb/restapi/blob/master/lib/gwapi.rb
and
https://github.com/wbwarnerb/restapi/blob/master/lib/gwapiclient.rb


03 November 2012

ADB Server start/stop

In playing with the Android SDK, I found some useful info for starting and stopping the ADB server:

/android_sdk_path/platform-tools

adb kill-server - To kill the server forcefully
adb start-server - To start the server

02 November 2012

Some people hate BDD and Automation

I ran across this page recently - a blog by a Software QA tester named James Bach: http://www.satisfice.com/blog/archives/638
It's interesting.  I also ran into another QA blog that had similar sentiments.  It's a backlash against the momentum of taking software QA into a BDD/automation driven realm.

I was once just like that. But my opinion changed drastically when I saw the benefits to writing code, using BDD and building an automation framework. 

eHarmony presented me with the opportunity to learn some code and be part of a Continuous Integration / Continuous Deployment program.  In that program, I picked up the language Groovy - and we ultimately used a BDD framework in GEB/Spock.  GEB would be like Cucumber (an automation framework) and Spock is like Gherkin (the BDD layer).

An Example

James Bach's post, though, has some problems.  In his comparison he weighs a BDD example of ONE test of an epic arc (validating an ATM machine) against a manual tester covering tons of "stuff" (i.e. an entire test plan).  That's not very fair.  So let's make this fair.  Let's talk apples to apples.

No BDD framework would have only ONE test.  It would have as many as needed to cover the entire scope of the code being developed.  For a website, for example, you may have tests that cover:
  • UI Navigation
  • Subscription to the website
  • Communication between members of the website
  • Cancellations
  • Each Section of functionality
  • Advertising
  • Special events (free communication weekends)
  • Registration
Your site/company may have different requirements, but here are a few off the top of my head.  We would never sum up the entire functioning site in a single BDD test.  Who would?

Let's take an example, like a form based registration.  Simple example: registration is just a form with fields.  We would break this down into individual tests (these tests are still the same whether you are doing BDD, manual testing or automation testing):
  • There would be a test on how the data is sent (JSON? service call? form post?).  However we are capturing data, we have a test for it and verify the data is captured
  • Another test might be front end, happy path: fill out the form in the UI and submit. Then verify the data was captured
  • Another test might be violate the fields with white space
  • Another test might be passing in invalid data (i.e. invalid postal code)
  • Another test might be double clicking submit and verifying only one data entry is captured
  • Another test might be using words on a blocked list
On and on this can go, till we have fleshed out our test plan.  In other words, we'd have multiple BDDs to cover each test above, just like you would have multiple test cases in a test plan to cover the above areas.  That's the key.  James is glossing over the fact that this is all part of a structured test plan.  Whether the manual tester writes an official test plan or not, they are in fact going through a series of steps, covering specific areas and validating inputs/outputs - the same thing an automated test would do.

Let's say we find some bugs while doing some manual testing (yes, manual testing is still used).  Say we discover that if we enter special characters in the name field, they are accepted when they shouldn't be.  So we open a bug on it.  Well, we also create an automation test to cover each bug found.  That is then added to the BDD feature.

What about cases where the page is dynamic, so inputting one value triggers multiple dynamic changes on the page?  The tester, as Bach points out, is testing all that.  So is the BDD test.  You just create BDDs for each change expected.  It's basic input/output.

Unlike Bach's example, where he had a solo BDD test (not considering the entire feature) compared against the manual testing of a fully fleshed out test plan (which wasn't a fair comparison), I'm going to be much more fair here.  The reality is that the BDD tests should match the test plan and test scenarios manual testers would be running anyway.  There should be little difference.

In other words: you still have to write a test plan (whether you manually test, use BDD automation, or use a non-BDD automation strategy).  No matter what, you need to structure your tests and report your results.  Every tester should agree with that.

Once you agree with that, it comes down to apples and apples. We're now comparing the same thing. The same manual test plan vs. the same automation test plan.

James mistakenly thinks that BDD just focuses on some simple subset of testing.  You should have a BDD test to cover each test case... each test case in your test plan, validating the same results you would validate in manual testing.

Efficiencies


James Bach, in his responses to comments on his blog post, says: "James' Reply: The idea that it helps you release faster is a fantasy based on the supposed value of regression check automation. If you think regression checks will help you, you can of course put them in place and it's not necessarily expensive to do that (although if you automation through the GUI then you will discover that it IS expensive).
BDD, as such, is just not needed. But again, if it's cheap to do, you might want to play with it. The problem I have is that I don't think it IS cheap to do in many cases. I'm deeply familiar with the problems of writing fixtures to attach high level "executable specs" to the actual product. It can be a whole lot of plumbing to write. And the temptation will be to write a lot less plumbing and to end up with highly simplified checks. The simpler they are, the less value they have over a human tester who can "just do it."

James is calling out efficiencies here.

But he betrays himself.  He states "if you think regression checks will help you, you can of course put them in place."  I've never heard of regression as an option.  It's never an option.  You MUST have regression.  James knows that's the shining star of automation and is trying to downplay it to bolster his position.  How could any QA team not do automation?  Many issues, in my experience (and I dare say in software development generally), are breaks to previously existing code, caused by commits for new features.  A new feature gets committed to trunk, and the commit unknowingly stifles an event that is needed for another feature - or changes a base class or property file, and something seemingly unrelated snaps.  Regression is NECESSARY.  It's never "well, if you want it..."

As we read on, we see he's claiming that it's more efficient to "just do it" - that there's just too much overhead in writing all that plumbing.  If you are testing a new feature, you don't have the luxury of knowing all its ins and outs.  You have to start from scratch.  In that moment you can also write code.  That's right, WRITE THE CODE BEFORE YOU GET THE FEATURE.  When you go into planning for a new feature, the developers will need time.  As they spend time writing their code, you write out your code and test cases.  BDD makes this easy - YOUR TEST CASES BECOME YOUR CODE.  How cool is that?

Meaning, you write out your test plan in BDD.  At eHarmony we use GEB, and the BDD is added during planning into each story.  But this could be Cucumber.  You plan it out, copy the BDD to a Cucumber feature file, for example, and then write the code to work against it.

BUT, some say, there isn't any development yet - how could I possibly code for it?  Good question.  That's why you work with your developers hand in hand.  You know the URL you're hitting.  You know the basics, but you won't know the divs in the page, the classes and IDs you're trying to select.  You may not know the property file name, etc.  But you work with the developer, agree on the naming conventions, and write your test BEFORE you even get the code.
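For example, a sketch of a step written against agreed-on naming before the feature exists (the "promo-banner" id is just the convention we'd have agreed on, not real code):

# red: this step runs (and fails) before the feature is built,
# because the promo-banner div doesn't exist in the page yet
Then /^the promo banner is displayed$/ do
  @browser.div(:id => "promo-banner").wait_until_present
end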

RED GREEN CLEAN

That's what red, green, clean means.  You write the test before you get the code - so it fails (red).  You get the code and, if it works, the test goes green (if your test needs changes, you make the changes and see it go green - or report the bugs to the developer).  Finally, you refactor (clean) the tests.

In this sense, you are literally building the "plumbing," as James Bach calls it, while you are writing the test plan.  Is it really that hard?  Cucumber makes it easy.  As easy as writing text.

Of course if you can't write test plans, and you just "wing it" - then this will be very horrible for you. But then you aren't really testing with quality.

HOW BDD HELPS

So how does BDD help?  As I went through James Bach's comments and posts - he suggests it doesn't help at all.  I'll respond with my take on how it does help.

Test Features become Test Plan Repositories

As you build upon your test feature set in Cucumber, you'll have a growing repository of your entire test plan history.  The feature files are easily human readable.  No code.  The step definitions that they reference/call are where the code goes.  So Cucumber feature files contain no code.  That means anyone in business can read the tests and understand them.

Rapidly Reusable Tests

Tests can be quickly run and rerun.  You don't need to call someone at 10pm to run through a test, or read some tester's documentation on how to do something you've never done before.

Anyone can kick off the tests

Plug Cucumber into Jenkins and you have a user interface to simply kick off a test repository against a test environment.  No special QA person or deploy person needed.  Everyone can see the failures and understand them.

Builds Tests that Anyone Can Read

If you're writing an automation framework without BDD, you'll have tests that no one will want to look at except coders.  You'll have code, with little guidance on the test - save for the occasional comment.  This happens when people take Java and Selenium and try to make an automation framework just from the two.

Tests are as readable as a test plan

Again, the tests are easy to read.  Each scenario adds to the overall feature.  You can organize your features by sections, pages, functionality, etc.  But the feature files are simple to read, and it's easy to understand what's going on.

Data Validation

In Cucumber and GEB, you can build data tables into the test, to rapidly run a test through multiple iterations, passing in a variety of values and verifying the results.  This is faster than doing it by hand.  I have tons of examples of this on GitHub.  You can have a table of 50 movie titles that you are passing into a service end point, validating the JSON data returned for each title - the MPAA rating, the jpg img path, etc.  This test can be kicked off and finish in less than a minute.
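A trimmed sketch of that style of table-driven test (the titles, ratings and step wording here are just illustrative):

Scenario Outline: Validate movie metadata from the service
  Given a call to the movie service for "<title>"
  Then the mpaa_rating returned is "<rating>"

  Examples:
  |title          |rating|
  |Toy Story 3    |G     |
  |The Dark Knight|PG-13 |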

Failures are obvious

When tests are run from a tool like Jenkins and a test fails, the failures are obvious.  The failure state is captured as the BDD step the test stops at.  For example, if a test fails at "Given a user is at http://www.google.com" then you know the failure is at that point.  Further details will be in the error message (such as "time out waiting for page to load" - so you may have a network or proxy error).  Even a non-technical person can get an idea of what's going on.

HOW AUTOMATION HELPS

Rapid regression

Unlike James Bach, who feels that regression isn't a necessity, I feel it always is.  Unless your business is making one-off pages, I can't see how you would never reuse code!  If you reuse any code, you MUST REGRESS.

At one job, we had two-week core code deployments.  This was a major deployment of a large web application/site.  Both front end and back end code, as well as services, might be deployed.  Manually regressing the trunk for each deployment would take 6-8 testers something like 4-5 days to cover all regression in 5 browsers (FF, IE8, IE9, Chrome, Safari) plus iPad Safari.  Not to mention mobile application regression.  That's a lot of testing.

We would build out a task list, assign people to the tasks, and then have a grid.  Each column in the grid was a browser.  Now figure that each test you do, you have to redo 5 times.  Why?  Because we were a customer focused company.  People gave us money.  If they use Mac Safari and have an issue, it's a problem for us.  If they use IE8 and can't subscribe, we lose money.  We must cover regression in all supported browsers.

If, instead, we cover regression with automation, the automation might take 6-8 hours to cover the entire site, but that pales in comparison to 4-5 days!

Understanding

When you manually test, you are most likely always in the "black box."  That's not necessarily bad, but you don't have any idea how something is made or how it works.  The more you know, the better edge cases you can find.

Capybara and jQuery

One of the nice things about Cucumber is the variety of different elements you can add to the stack.  Typically, in an automation framework you have:
1. base language
2. framework itself
3. web driving technology (for front end tests)

In GEB it's like this:
1. Groovy
2. GEB (which adds Spock for the BDD ability)

In Cucumber you have a lot of choice here:
1. Ruby/Java/Groovy/etc
2. Cucumber (which has Gherkin for the BDD ability)
3. Watir/Capybara/Webrat
4. you can throw other stuff on the stack, like Rspec, etc.

I was asked recently, "Why do you use Watir for your Cucumber/Ruby tests?"  I basically answered, "Well, because it was there."  It was the first web driving/controlling element I used on the stack.  There are other choices - there is Capybara, for example.

While I'm new to Capybara, there are some interesting things I've found with it.  One of the nice things is the ability to execute scripts.  I don't know how to execute JavaScript or jQuery with Watir (and hence I'm assuming it's not possible).

But in Capybara you can do something like this (examples taken from jnicklas):
page.execute_script("$('body').empty()")
 
Sure enough, Ruby can be used to do calculations (so I'll omit that example).
What's great about this is that I can run jQuery... and what's so great about jQuery?  Well, if you use Firebug in Firefox, it has a console that lets you run/execute jQuery in the page.  Sometimes elements in dynamic pages can be a trick to find and manipulate.  I find this true with Microsoft pages, for example (like windows.com) - jQuery makes this easier... here's why:

  1. If you work on a team, you have a front end dev who's usually a wizard at jQuery... such a person can easily give you a query, or improve your own, to find or manipulate elements on the page
  2. I have found situations where Watir has such a hard time finding an element, and jQuery finds the element fine
  3. jQuery is tried and true and has a strong community backing it
  4. Firefox lets you run jQuery from the firebug console... this way you can verify what elements you can manipulate and how you can access them.
You can find more about jQuery at: http://docs.jquery.com/

Capybara lets a person run jQuery by simply doing a page.execute_script and passing the jQuery in.

Again, maybe Watir has a method for doing this; I haven't found that to be the case though - so when I get in a bind, I use Capybara's execute_script functionality to run jQuery as needed.

The testing stack I personally use for automation allows for using Capybara, along with Watir's easier (imo) way of managing the browsers:
group :test, :development do
  gem 'cucumber-rails'
  gem 'database_cleaner'
  gem 'rspec'
  gem 'spork'
  gem 'capybara'
  gem 'watir-webdriver'
  gem "gherkin", "~> 2.11.2"
end

Everything is an object

One thing I really love about the Ruby/Cucumber framework is that everything is an object.  I'm allowed to do something like this:

divlength = @browser.div(:id=>"divValue").when_present(5).text.length

I can chain actions/methods to each other really easily... again like:
@browser.div(:id=>"search-results-container").when_present(5).click

This is really cool when trying to click an "a" tag in a div, where the HTML might be:
<div id="div1" class="containerdiv" name="links">
   <a href="http://www.someplace.com">click here</a>
</div>

Since the a tag doesn't have an id or class, you can still reference it by chaining, like so (using Watir syntax):
browser.div(:id=>"div1").a.click

It's logical and makes sense. It's also pretty easy.

It's letting me treat the div(:id=>"div1") as an object that I'm applying a method of "a" to, which is saying "hey, look for the a tag."  That result is itself treated as an object, and lets me apply the "click" method to it, saying "OK, now click that."  You could throw a when_present method in front of the click, to give the tag some time to load on the page, etc.

But it gives a great example of everything being an object.

25 October 2012

Cucumber Ruby and Excel

While 95% of my automation work is web service based, I did put together a bit of Excel validation.

I imported the Ruby gem "roo", which enables Excel, csv and OpenOffice support.  Once it was in, I could create a test like this to verify the headers of an Excel file:
 Scenario Outline: Verify Column headers exist
    Given the financial document is loaded
    Then the <headers> should be at line <row> and column <column>

    Examples:
    |row|column|headers|
    |11 |A     |State  |
    |11 |B     |Government Function|
    |10 |C     |Full-time          |
    |11 |C     |employees          |
    |9  |D     |Full-time          |
    |10 |D     |Pay                |
    |11 |D     |(whole dollars)    |
    |10 |E     |Part-time          |
    |11 |E     |employees          |
    |9  |F     |Part-time          |
    |10 |F     |pay                |
    |11 |F     |(whole dollars)    |
    |10 |G     |Part-time          |
    |11 |G     |hours              |
    |9  |H     |Full-time          |
    |10 |H     |Equivalent         |
    |11 |H     |Employment         |
    |10 |I     |Total              |
    |11 |I     |employees          |
    |8  |J     |Total              |
    |9  |J     |March              |
    |10 |J     |Pay                |
    |11 |J     |(whole dollars)    |

And the step definition is like:
require 'roo'

Given /^the financial document is loaded$/ do
  @loadFinancial = Excel.new("data/financial.xls")
end

Then /^the (.+) should be at line (.*) and column (.*)$/ do |headers, row, column|
  # assert the cell contents; a bare == comparison would be silently discarded
  @loadFinancial.cell(row.to_i, "#{column}").should == headers
end

Some things to note... since I'm passing a row # that's an integer and not a string, I need to do a .to_i on it.  This isn't the fastest solution, but if you have small data files that you want to manipulate or validate, it can work.  This test against 46 fields takes my laptop about 16 seconds to complete.  That's pretty slow.  But it works.

Data and File testing

I put up a few small tests to cover data validation within a file.  I'll be building off this project in Github with examples:
https://github.com/wbwarnerb/ts/tree/master/features

So far I've just got a few tests.  The first is a test that verifies data files have been delivered and just makes sure they exist.  It looks at the root of the drive for a few files.  This can be modified easily.  Here's the scenario file:
Scenario Outline: Check that file exists
Given a file has come in
Then the <datafile> is verified as being there
Examples:
|datafile|
|setup.log|
|hamlet.txt |
 

Here's the Step Definition code:
Then /^the (.*) is verified as being there$/ do |datafile|
  assert File.exists?("/#{datafile}")
end
 
The second test does a count of the lines in each file.  It assumes the user knows this count, and validates that each file has the correct line count.  Here's the code to accomplish it:

Here's the Scenario:
Scenario Outline: Line Count Validation
Given a file has come in
Then the <datafile> and its <linecount> are verified
Examples:
|datafile|linecount|
|setup.log|10 |
|hamlet.txt |4463 |

Here's the Step Definition Code:
Then /^the (.*) and its (.*) are verified$/ do |datafile, linecount|
  count = 0
  File.open("/#{datafile}") { |f| count = f.read.count("\n") }
  puts "The line count for '#{datafile}' is: #{count}"
  # compare as integers; a bare == on the File.open line would be discarded
  count.should == linecount.to_i
end

30 September 2012

Cucumber and ambiguous results

I had some solid tests that kept erroring out on me.  It stumped me for quite a while.  I was busy looking at the logic, and didn't notice the error was really referencing the Then statement and saying it was "ambiguous."  Since I had several similar tests, I realized that the error was really about the Then statement in my Cucumber test matching more than one step definition.

All I had to do was change the language of the Then statement to be more unique and the tests began to pass.
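For example, a contrived pair of step definitions like these will both match the step "Then the result is verified", and Cucumber will raise an ambiguous match error instead of running either one:

Then /^the result is verified$/ do
  # ...
end

Then /^the (.*) is verified$/ do |thing|
  # ...
end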

27 September 2012

Rails application - Palindrome Validation

I wrote a Ruby/Rails application to validate a palindrome.  Here's the published palindrome app on Heroku:
http://glacial-river-2425.herokuapp.com/

I came up with the code in my head on the way home from work.  I came across the method .reverse in Ruby.  This makes checking for a palindrome pretty easy.

After I came home from work I sat down, ran IRB and input this code:
x = "racecar"
if x == x.reverse
  puts "this is a palindrome"
else
  puts "this is not a palindrome"
end

It worked.  So now I just needed to put this into a published Rails application. 

I created a Rails project and generated a controller and some views.

in the Index view I put this code on the page:
<%=form_tag(:action =>'results') do %>

<%=text_field_tag(:word)  %>
<%= submit_tag("Submit") %>
<% end %>

Then in the controller I put this code:
  def results
    # to_s guards against a missing :word param (nil has no .reverse)
    @word = params[:word].to_s
    if @word == @word.reverse
      @palindrome = "This is a palindrome"
    else
      @palindrome = "This is NOT a palindrome"
    end
  end

Finally on the results page:
<%= @palindrome %>

So that one simple comparison is used to verify whether a word input into the form is a palindrome, and the result is displayed on a results page.
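As a quick sanity check outside Rails, the comparison can be pulled into a tiny helper (the method name here is my own, not from the app):

# Hypothetical helper extracted from the controller logic above
def palindrome?(word)
  w = word.to_s
  w == w.reverse
end

puts palindrome?("racecar")  # => true
puts palindrome?("rails")    # => false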


22 September 2012

More detail on JSON validation in Cucumber

Here are more details on my last post about doing a JSON smoke test.  The idea here is that a test is needed to check the schema of the JSON coming back, verifying that the JSON structure itself is intact with the correct categories and sub categories.

I used Rotten Tomatoes' API endpoint.  I'll pass in a fixed parameter for a specific movie and then validate the JSON coming back.

The endpoint URI that I'm hitting is:
http://api.rottentomatoes.com/api/public/v1.0/movies.json?apikey=jc7eaxjfpemb2uz7qsrudnsq&q=Toy+Story+3&page_limit=1

Let's review the JSON response we get with the above query.  jsonlint.com shows that the top level categories are:
  • total
  • movies
  • links
  • link_template
Some of these categories have sub categories, and some do not.  For example, total does not have a sub category in the JSON, but movies has a bulk of them.  In fact, movies has:
  • id
  • title
  • year
  • mpaa_rating
  • runtime
  • critics_consensus
  • release_dates
  • ratings
  • synopsis
  • posters
  • abridged_cast
  • alternate_ids
  • links
Some of these have sub categories themselves.  But to cover just these categories and sub categories, here's how I wrote the code in the step definition file:
require 'open-uri'
require 'json'

Given /^A call to the Rotten Tomatoes API$/ do
  @mquery = JSON.parse(open("http://api.rottentomatoes.com/api/public/v1.0/movies.json?apikey=jc7eaxjfpemb2uz7qsrudnsq&q=Toy+Story+3&page_limit=1").read)
end
Then /^the response should have the (.*) expected$/ do |category|
  @mquery["#{category}"].should be_true
end

Given /^A call is made to the Rotten Tomatoes API$/ do
  @mquery = JSON.parse(open("http://api.rottentomatoes.com/api/public/v1.0/movies.json?apikey=jc7eaxjfpemb2uz7qsrudnsq&q=Toy+Story+3&page_limit=1").read)
end
Then /^the response returned should have the movies (.*) expected$/ do |subcategory|
  @mquery['movies'][0]["#{subcategory}"].should be_true
end
Then /^the response returned should have the (.*) subcategory expected$/ do |release_dates_sub|
  @mquery['movies'][0]['release_dates']["#{release_dates_sub}"].should be_true
end
Then /^the ratings response returned should have the (.*) subcategory expected$/ do |ratings_sub|
  @mquery['movies'][0]['ratings']["#{ratings_sub}"].should be_true
end
Then /^the posters response returned should have the (.*) subcategory expected$/ do |posters_sub|
  @mquery['movies'][0]['posters']["#{posters_sub}"].should be_true
end
Then /^the abridged cast response returned should have the (.*) subcategory expected$/ do |abridged_cast_sub|
  @mquery['movies'][0]['abridged_cast'][0]["#{abridged_cast_sub}"].should be_true
end
Then /^the alternate ids response returned should have the (.*) subcategory expected$/ do |alternate_ids_sub|
  @mquery['movies'][0]['alternate_ids']["#{alternate_ids_sub}"].should be_true
end
Then /^the links response returned should have the (.*) subcategory expected$/ do |links_sub|
  @mquery['movies'][0]['links']["#{links_sub}"].should be_true
end
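
Since both Given steps make the identical request, one way to DRY this up is a small helper method in the step definition file (the helper name is mine; just a sketch):

# Hypothetical helper; both Given steps could delegate to this
def movie_query(title)
  JSON.parse(open("http://api.rottentomatoes.com/api/public/v1.0/movies.json?apikey=jc7eaxjfpemb2uz7qsrudnsq&q=#{title}&page_limit=1").read)
end

Given /^A call to the Rotten Tomatoes API$/ do
  @mquery = movie_query("Toy+Story+3")
end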



The Cucumber test I wrote uses a Cucumber Scenario Outline to pass the expected categories or sub categories into the code above.  Here's how I wrote the tests:

Feature: Smoke tests to ensure json validity by checking and verifying each JSON category and sub category exists
  #Note: this is not validating values, just that the JSON structure has the expected categories and sub categories

  Scenario Outline: Smoke test the JSON top level categories returned
    Given A call to the Rotten Tomatoes API
    Then the response should have the <category> expected

  Examples:
    |category     |
    |movies       |
    |links        |
    |link_template|

  Scenario Outline: Smoke test the JSON movies sub categories returned
    Given A call is made to the Rotten Tomatoes API
    Then the response returned should have the movies <subcategory> expected

  Examples:
    |subcategory|
    |id  |
    |title|
    |year|
    |mpaa_rating|
    |runtime    |
    |critics_consensus|
    |release_dates    |
    |ratings          |
    |synopsis         |
    |posters          |
    |abridged_cast    |
    |alternate_ids    |
    |links            |

  Scenario Outline: Smoke test the JSON release dates sub categories returned
    Given A call is made to the Rotten Tomatoes API
    Then the response returned should have the <release_dates_sub> subcategory expected

  Examples:
    |release_dates_sub|
    |theater          |
    |dvd              |

  Scenario Outline: Smoke test the JSON ratings sub categories returned
    Given A call is made to the Rotten Tomatoes API
    Then the ratings response returned should have the <ratings_sub> subcategory expected

  Examples:
    |ratings_sub|
    |critics_rating  |
    |critics_score|
    |audience_rating|
    |audience_score |

  Scenario Outline: Smoke test the JSON posters sub categories returned
    Given A call is made to the Rotten Tomatoes API
    Then the posters response returned should have the <posters_sub> subcategory expected

  Examples:
    |posters_sub|
    |thumbnail  |
    |profile|
    |detailed|
    |original |

  Scenario Outline: Smoke test the JSON abridged cast sub categories returned
    Given A call is made to the Rotten Tomatoes API
    Then the abridged cast response returned should have the <abridged_cast_sub> subcategory expected

  Examples:
    |abridged_cast_sub|
    |name  |
    |id|
    |characters|

  Scenario Outline: Smoke test the JSON alternate ids sub categories returned
    Given A call is made to the Rotten Tomatoes API
    Then the alternate ids response returned should have the <alternate_ids_sub> subcategory expected

  Examples:
    |alternate_ids_sub|
    |imdb             |

  Scenario Outline: Smoke test the JSON links sub categories returned
    Given A call is made to the Rotten Tomatoes API
    Then the links response returned should have the <links_sub> subcategory expected

  Examples:
    |links_sub|
    |self  |
    |alternate|
    |cast     |
    |clips    |
    |reviews  |
    |similar  |


14 September 2012

Testing APIs with Cucumber

I put together several API tests.  Here's the project on GitHub:
https://github.com/wbwarnerb/movielookup

The project has three Cucumber features (under the features directory) that talk with a public API.

For the purpose of this test I'm using Rotten Tomatoes public API.  They have a nice JSON response to parse and utilize for these examples.

Example 1: /features/movie_json_smoke.feature

The first feature is a smoke test; it basically validates the JSON itself.  The idea here is: we know what the JSON should return, and we verify that the categories and sub categories are intact and no changes have occurred.  This is useful if a change to the service was made but the overall structure of the JSON should remain the same.  Just kick off this test, and it validates the JSON structure.

Here's how I did it in cucumber:
https://github.com/wbwarnerb/movielookup/blob/master/features/movie_json_smoke.feature
From here we have the Cucumber test itself.  I built the test out as a Scenario Outline, which allows me to utilize data tables.  When I get to the "Then", I pass in a parameter from the table, like this:
Then the response should have the <category> expected

The category parameter matches the data table, which has a column header "category"; every value under that column in the table is passed into the Then statement.

The actual code is in the step definition: it's here that I grab the parameters coming from the table and pass them into the relevant spots.
Here's an example:
Then /^the response should have the (.*) expected$/ do |category|
  @mquery["#{category}"].should be_true
end
First, notice the (.*); that is the placeholder for the variable in the test, <category>.  Second, the argument I'm grabbing, |category|, comes from the test.  It takes the category value (from the data table in the test) and passes it through here.

The next line of code reads @mquery["#{category}"].should be_true, where @mquery is the instance variable holding the parsed JSON.
Ruby picks up JSON objects with hash access, like ['category']['subcategory'].  But since we're passing in a dynamic parameter here, I interpolate it into the key with double quotes: ["#{category}"].  The #{category} is replaced with the value from the data table.

The .should be_true tripped me up.  I kept trying == !nil and .should == !nil; nothing worked until I found it should be done like this: .should be_true.  This validates that the JSON element the method is attached to is present (in the RSpec version I was using, be_true passes for any truthy value).
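For comparison, both of these assert presence; the second is an alternative I could have used that reads more literally (both are standard RSpec matchers of that era):

@mquery['total'].should be_true       # passes for any truthy value
@mquery['total'].should_not be_nil    # same intent, stated as "not nil"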
At this point, I just iterate through each parameter in the JSON returned.  I go through each category and sub category and make sure it's all there.

I figure that's a good smoke test.  If I worked at RT and some dev committed code to their service/API, I could just run this and quickly see that the JSON categories are still present.

Example 2: /features/movielookup_data_validation.feature


In this test, I'm treating a data table in the test as the source of truth.  This could be a database, but I'm using a data table with Cucumber's Scenario Outline.  This time, I've added several columns.  I'm verifying the mpaa_rating and the profile image stored at Rotten Tomatoes.   

The code for verifying this is similar in concept to what I did previously:

I go through several movies, passed in like this:

Given /^the API is queried for (.*)$/ do |title|
  # Note: a title with spaces needs URL encoding (e.g. "Toy+Story") before it hits the API
  @mquery = JSON.parse(open("http://api.rottentomatoes.com/api/public/v1.0/movies.json?apikey=jc7eaxjfpemb2uz7qsrudnsq&q=#{title}&page_limit=1").read)
end

The argument is passed through to the API URI, which grabs that movie's details; those details are then verified.

Notice that the main difference from before is this code:
 @mquery['movies'][0]['mpaa_rating'].should == "#{mpaa_rating}"

Here I'm passing the argument through to use as a validation check, rather than just checking for presence.
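Pieced together, the full Then step might look something like this (only the assertion line appears above; the regex wording is my guess):

Then /^the movie mpaa_rating should be (.*)$/ do |mpaa_rating|
  # Compare the first movie's rating against the value from the data table
  @mquery['movies'][0]['mpaa_rating'].should == "#{mpaa_rating}"
end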

Example 3: /features/movielookup_json_validation.feature


I separated these tests out from the other two.  They're made to test the JSON functionality itself.


Given /^A request for an unknown movie is sent to the API$/ do
  @bquery = JSON.parse(open("http://api.rottentomatoes.com/api/public/v1.0/movies.json?apikey=jc7eaxjfpemb2uz7qsrudnsq&q=bbxa&page_limit=1").read)
end
Then /^the response should be zero movies returned$/ do
  @bquery['total'].should == 0
end
Given /^An API call is made for more than one movie$/ do
  @mquery = JSON.parse(open("http://api.rottentomatoes.com/api/public/v1.0/movies.json?apikey=jc7eaxjfpemb2uz7qsrudnsq&q=Toy+Story&page_limit=1").read)
end
Then /^the results will have a total greater than 1$/ do
  @mquery['total'].should > 1
end
Basically, in these two simple tests I'm checking that if a query is sent for a movie not in the RT movie database, we get back a proper response.
The second test verifies that if a query is sent for a movie with more than one hit (like Toy Story), it will return a total value greater than 1.
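
For reference, a feature file matching those two step definitions would read something like this (the real one is in the GitHub project linked above; this is just a sketch):

Scenario: An unknown movie returns zero results
  Given A request for an unknown movie is sent to the API
  Then the response should be zero movies returned

Scenario: A common title returns more than one result
  Given An API call is made for more than one movie
  Then the results will have a total greater than 1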