29 November 2012

Python and SST Frameworks


I updated some tests here to cover website navigation.  Using SST was a bit more difficult than using Watir, mainly because SST doesn't have a lot of coverage for page elements that have no ids... so you have to get more creative.

For example, it doesn't have API methods to handle clicking on a link that only has a class attribute.  So to get around that you have to look for the text itself (which is usually a fragile idea) or use some other method to find the element by its class and then click on it.
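For instance, here's a rough sketch of clicking a link by its class in SST (the class name here is made up, and the keyword arguments are worth double-checking against the SST actions docs):

from sst.actions import *

go_to('http://www.example.com/')
# the link has no id, so locate it by tag and class instead
link = get_element(tag='a', css_class='main-nav-link')
click_element(link)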

So here are some examples of me using SST with Python:
https://github.com/wbwarnerb/Python_Beachbody/blob/master/Pages/Home.py




25 November 2012

Getting a Python Web Automation Framework going

So far I've been able to get GEB (Groovy-based web automation) up and running at home (although it was very difficult), I've been able to get Cucumber up and running on a web automation stack (very easy), and tonight I think I got a Python stack up and running.

Python was a bit tricky.  Although not as easy to set up as Cucumber, it certainly isn't as difficult as GEB.

Some pitfalls with Python:
  1. You can't easily use Python 3.*.  Due to compatibility issues with Selenium WebDriver, Python 3 isn't really supported.  I did see some 3rd party scripts that try to bridge that compatibility gap, but it looked like a dangerous path to go down.
  2. You need to install Python 2.7.3.
  3. After Python, you'll need to install setuptools.  If you're on Windows, setuptools has a nice Windows installer.  This is the prerequisite to get PIP (a Python package installer) up and running.
  4. You will need a package installer (i.e. PIP) - if you're on Windows, PIP's website doesn't really offer much help for you.  Windows users have to download https://raw.github.com/pypa/pip/master/contrib/get-pip.py directly, since Windows doesn't have curl natively.
  5. Once you have get-pip.py, you just run it... then pip.exe will be in your python*/Scripts folder.
  6. Run pip install sst, OR use an IDE (I'm using JetBrains' PyCharm) to point to sst and install.
Once SST is installed and part of your project, you can write a simple test like this:
from sst.actions import *

go_to('http://www.ubuntu.com/')
assert_title_contains('Home | Ubuntu')
close_window()


Be sure to check the SST docs for reference on their actions, but they are pretty straightforward and similar to native WebDriver and Selenium actions: http://testutils.org/sst/actions.html

Now you can start building out your framework... I've heard of some Python-based BDD layers (Nose and Lettuce) out there, but they aren't supported by my IDE (JetBrains sees little community backing for BDD in Python), so I haven't gone that route.

Uploaded a sample of this code at github:
https://github.com/wbwarnerb/Python_Beachbody

24 November 2012

Python 3 and webdriver incompatible?

I've been looking at this idea of using Python with WebDriver in an automation framework.  It seems, though, that WebDriver doesn't support Python 3.

This is pretty frustrating.  I'm mostly interested in languages from the point of view of automation.  I pick up a language and download the latest stable version, only to find there is no support for it from any web automation framework... not even WebDriver.

So it looks like Python must be running version 2.* and not 3.* in order to access webdriver.


python basics - whitespace and indentations

This might be obvious to some - but I didn't know about indentation.  In Python, indentation within a method has to be consistent.  So this will work:

def main():
    print("something to say")
    print("is something to say")

but this won't -
def main():
    print("something to say")
  print("is something to say")

But this will work:
def main():
    print("something to say")
print("is something to say")

The first example does indentation as Python expects within a method.
The second example fails because the indentation is not consistent.
The third example is still valid, but the last print is at module level - the same indentation level as the def itself - so it is not part of the method.  If we have the line:
if __name__ == "__main__": main()
then print("is something to say") runs first, as part of the module-level code, and the line inside the method runs afterwards, when main() is called.
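Putting the third case together in one runnable snippet makes the ordering obvious:

def main():
    print("something to say")

print("is something to say")    # module level: runs first

if __name__ == "__main__":
    main()                      # runs second, printing the line inside the method

# Output:
# is something to say
# something to say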

23 November 2012

python - calling functions before they are defined

Been picking up some Python, and I noticed that you can define a function in a non-linear way... meaning the method call can come before defining the method:

def main():
    function2()

def function2():
    print("this is the second function.")

if __name__ == "__main__": main()

That last line says: if this file is being run directly, call main().  By the time main() is actually called, Python has already read both def statements from top to bottom, so function2() is already defined and the call inside main() works.

11 November 2012

My Rails Comment Application

I'm taking some courses on Rails.  I built out a web application that uses some 3rd party libraries like Devise (for authentication).  The web application is set up to let users create accounts and post status updates.  I went ahead and published the early version of the app with Heroku:
http://infinite-waters-3206.herokuapp.com/

heroku rake db:migrate

When deploying an app with a database to Heroku, remember to always run heroku rake db:migrate, or else the Rails app on Heroku won't work.

09 November 2012

API Testing

API testing doesn't involve a GUI, but the testing still comes down to the same concepts of inputs and outputs: you provide a specific input and you check for the expected output.  Just like with any type of testing, the methodology is the same.  You verify the happy path, then try to break the API (passing invalid parameters, too many parameters, etc.) and verify that errors are thrown gracefully, that databases are not updated with empty rows or duplicate data... and so on.

So what is an API?

An API (Application Programming Interface) is an interface you can use in development.  Think of it as a building block of code.  Rather than writing that code each time, you have a block of code that can take parameters and use those parameters to do something.  You could pass in parameters to create a user in a db, perform a query and return data as JSON, or handle authentication (such as Facebook's login APIs).  APIs can be public or private.

Verifying An API

Some examples to verify in API testing could be:
  1. What does the API return?  If an API is designed to return a result (like data), then you would want to know that a) there is something returned, b) it's what's expected, c) it fits the boundaries of what's expected, and d) errors are handled correctly.
  2. Event Listener Testing: If the API is firing an event, you would need access to an event listener or event log and verify that the event is a) captured correctly, b) in the expected format, c) carrying the correct payload, and d) errors are handled correctly.
  3. Modifying Databases: If the API is making a DB insert or update, you would want to verify that a) the db update/insert/delete occurs, b) the data that is changed is correct, and c) errors are handled correctly.
Early on at eHarmony I had the chance to write a simple API for testing.  We were designing a new automation framework and I needed some code to subscribe users.  This way I could pass in a userid, and it would subscribe that user for me programmatically.  This was not a production API, just something used in testing.  We had test credit card numbers that are not valid in production, but they let us exercise the subscription flow.  So instead of driving the UI on the front end each time to run a subscription (which might take 2 min per test, just to sub a user via the GUI), I would subscribe a user using this simple API.

What this API was doing was taking the userid as a parameter and passing it into a SQL insert to our test db.
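In very rough terms it looked something like the sketch below.  This is not the actual code - just a minimal illustration using sqlite3 as a stand-in for the test database, with made-up table and column names:

import sqlite3

def subscribe_user(user_id, db_path='test_subscriptions.db'):
    # hypothetical sketch: flag the user as a subscriber in a test db
    # (assumes a subscriptions table already exists)
    conn = sqlite3.connect(db_path)
    try:
        conn.execute(
            "INSERT OR REPLACE INTO subscriptions (user_id, is_subscriber) "
            "VALUES (?, 1)",
            (user_id,))
        conn.commit()
    finally:
        conn.close()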

Testing it was amounting to:
  1. The Happy Path: Does the data insert/update for a userid reach the db?  New users would be inserted into the db, existing users would be updated.
  2. Validate the data inserted: are all columns that are expected to be updated actually updated?  Is the user flagged as a subscriber?
  3. Error Cases: What happens if an empty string is passed in?  It should error gracefully - does it?  What happens if the user is already a subscriber?  In that case an insert would fail, while an update would pass.
  4. Events: In this case we didn't want to trigger any events.  The API should bypass the events, so just double check that no events are recorded in the event service logs.

How To Get Into API Testing

There are a lot of ways a person can pick up API testing on their own.  If you can write code in Rails/Ruby, Groovy, Java, etc., you could build your own API and then test it!

But... there are easier ways.  You could find an open/public API and work with it.  You can find a lot of online APIs that take data and return some sort of result to you.  I even found one for the card game, Magic: The Gathering:
http://daccg.com/ajax_ccgsearch.php?cardname=jace%20beleren
You can simply pass in a parameter here (URL encoded, of course) and get back some JSON with the result!
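Here's a quick sketch of hitting that endpoint from Python 2.7 with just the standard library (the shape of the JSON that comes back is something you'd have to inspect for yourself):

import json
import urllib2

url = 'http://daccg.com/ajax_ccgsearch.php?cardname=jace%20beleren'
response = urllib2.urlopen(url).read()
cards = json.loads(response)
print(cards)  # inspect the structure, then assert on the fields you care about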

These are of course online APIs, many of them RESTful services you can hit with a URI endpoint.  But it is easy to get started right now!

08 November 2012

Speed of Cucumber

By day, I work with a framework called GEB, written in Groovy.  By night, I work on my own personal projects in Ruby/Rails and Cucumber, and I build my test frameworks in Cucumber.  I've built about 7 different test frameworks in Cucumber.  The latest was on a whim: after dinner last night, I went to www.beachbody.com, the makers of the P90 system, and I decided I'd try to automate their site.

By the end of the night I had over 100 tests for their navigation.   Not only did I output 100 tests, but these tests work in any browser. 

Here's the tests I wrote last night:
https://github.com/wbwarnerb/cucumber_beachbody/blob/master/features/nav.feature

And here's the code they exercise:
https://github.com/wbwarnerb/cucumber_beachbody/blob/master/features/step_definitions/nav.rb

04 November 2012

Adding Randomness in Tests

Why Randomness?


When testing a UI that has a large selection of dynamic links, randomness can help you find breaking points in both the tests and the code.  In other words, if you only write tests for specific, fixed points, you may miss situations and scenarios that were never thought through.

My Google Example

I used Google.  I created some tests that:
  • go to google.com
  • enter a search term
  • click a random element on the page
To help with this example, I created a few tests in order of complexity.  The first test simply goes to Google, enters a term from a table and then gets results.  At the results list, it clicks the first result in the list.

Google's results tend to be hard to grab directly.  They have a list of results, but to get to them (using Watir notation) I came up with the line below.  (I also had to put a wait_until_present before this, to wait for the element ol#rso - the ordered list of results - to load.)  I target it like this:
 @browser.ol(:id, "rso").li(:index=>0).div.h3.a.click

I'm grabbing all the li's and treating them like a list or array.  I use index 0 (WATIR notation) to find the first result row.  Then after getting it, I grab the chain of: div.h3.a and click it.

Adding Randomness

So far there's no randomness here, but the first test gets us going with results and clicking through on them.  Now for the randomness.

It's easy.  I counted the results per page and figured I had about 16 choices to pick from.  My solution was to do this:
  @results = rand(15)
  @browser.ol(:id, "rso").li(:index=>@results).div.h3.a.click


Basically it's the same thing as before, except I added an instance variable, @results.  It just comes up with a random number up to 15.

So now when @results is passed in as an index value, it just picks a random row to click.

More Complexity

But let's say we want to take it further and randomly pick a search result page, and then pick a random result on that page.

  @nav = rand(1..7)
  @browser.table(:id, "nav").tbody.tr.td(:index=>@nav).a.click


In this case I created an instance variable that picks a random value from a range of 1 to 7.  0 is not wanted because, by default, the user is already on the first result page (0), so there's no reason to click on it.  I just want to click from 1 to 7.

After the results page is loaded, then I rerun the logic to pick a random result from the page.

Issues

I ran into some issues.  The first was rerunning the random-result selection on the new page: when I tried calling it again, it wouldn't wait properly, so I had to recreate the same logic and call that instead.

Built out a simple Web Service in Ruby/Rails

I've been getting back to Rails.

After reading through some tutorials and going through a variety of teaching material, I was able to put together a project:
https://github.com/wbwarnerb/restapi/

A lot of this was generated by rails scaffolding.  
But the core code I wrote is in:
https://github.com/wbwarnerb/restapi/blob/master/lib/gwapi.rb
and
https://github.com/wbwarnerb/restapi/blob/master/lib/gwapiclient.rb


03 November 2012

ADB Server start/stop

In playing with the Android SDK, I found some useful commands for starting and stopping the ADB server:

/android_sdk_path/platform-tools

adb kill-server - To kill the server forcefully
adb start-server - To start the server

02 November 2012

Some people hate BDD and Automation

I ran across this page recently - a blog post by a software QA tester named James Bach: http://www.satisfice.com/blog/archives/638
It's interesting.  I also ran into another QA blog that had similar sentiments.  It's a backlash against the momentum of taking software QA into a BDD/automation-driven realm.

I was once just like that. But my opinion changed drastically when I saw the benefits to writing code, using BDD and building an automation framework. 

eHarmony presented me with the opportunity to learn some code and be part of a Continuous Integration / Continuous Deployment program.  In that program, I picked up the language Groovy - and we ultimately used a BDD framework in GEB/Spock.  GEB would be like Cucumber (an automation framework) and Spock is like Gherkin (the BDD language).

An Example

James Bach's post, though, has some problems.  In his comparison he pits ONE BDD test from an epic arc (validating an ATM) against a manual tester covering tons of "stuff" (i.e. an entire test plan).  That's not very fair.  So let's make this fair.  Let's talk apples to apples.

No BDD framework would have only ONE test.  It would have as many as are needed to cover the entire scope of the code being developed.  For a website, for example, you may have tests that cover:
  • UI Navigation
  • Subscription to the website
  • Communication between members of the website
  • Cancellations
  • Each Section of functionality
  • Advertising
  • Special events (free communication weekends)
  • Registration
Your site/company may have different requirements, but these are a few off the top of my head.  We would never expect a single BDD test to cover the entire functioning site.  Who would?

Let's take an example, like a form-based registration.  Simple example: registration is just a form with fields.  We would break this down into individual tests (these tests are the same whether you are doing BDD, manual testing, or automation testing):
  • There would be a test on how the data is sent (JSON? service call? form post?).  However the data is captured, we have a test for it and verify the data is captured
  • Another test might be front end, happy path: fill out the form in the UI and submit. Then verify the data was captured
  • Another test might be violate the fields with white space
  • Another test might be passing in invalid data (i.e. invalid postal code)
  • Another test might be double clicking submit and verifying only one data entry is captured
  • Another test might be using words on a blocked list
On and on this can go, till we have fleshed out our test plan.  In other words, we'd have multiple BDD scenarios to cover each test above, just like you would have multiple test cases in a test plan to cover the above areas.  That's the key.  James is glossing over the fact that this is all part of a structured test plan.  Whether the manual tester writes an official test plan or not, they are in fact going through a series of steps, covering specific areas and validating inputs/outputs - the same thing an automated test would do.

Let's say we find some bugs while doing some manual testing (yes, manual testing is still used).  Say we discover that if we enter special characters in the name field, they are accepted when they shouldn't be.  So we open a bug on it.  Well, we also create an automation test to cover each bug found.  That is then added to the BDD feature.

What about cases where the page is dynamic, so inputting one value causes multiple dynamic changes on the page?  The tester, as Bach points out, is testing all that.  So is the BDD test.  You just create BDD scenarios for each expected change.  It's basic input/output.

Unlike Bach's example, where he took a solo BDD test (not considering the entire feature) and compared it against the manual testing of a fully fleshed-out test plan (which wasn't a fair comparison), I'm going to be much more fair here.  The reality is that the BDD tests should match the test plan and test scenarios manual testers would be running anyway.  There should be little difference.

In other words: you still have to write a test plan (whether you manually test, use BDD automation, or use a non-BDD automation strategy).  No matter what, you need to structure your tests and report your results.  Every tester should agree with that.

Once you agree with that, it comes down to apples to apples.  We're now comparing the same thing: the same manual test plan vs. the same automation test plan.

James mistakenly thinks that BDD just focuses on some simple subset of testing.  You should have a BDD test to cover each test case - each test case in your test plan - validating the same results you would validate in manual testing.

Efficiencies


James Bach, in his responses to comments on his blog post, says: "James' Reply: The idea that it helps you release faster is a fantasy based on the supposed value of regression check automation. If you think regression checks will help you, you can of course put them in place and it's not necessarily expensive to do that (although if you automation through the GUI then you will discover that it IS expensive).
BDD, as such, is just not needed. But again, if it's cheap to do, you might want to play with it. The problem I have is that I don't think it IS cheap to do in many cases. I'm deeply familiar with the problems of writing fixtures to attach high level "executable specs" to the actual product. It can be a whole lot of plumbing to write. And the temptation will be to write a lot less plumbing and to end up with highly simplified checks. The simpler they are, the less value they have over a human tester who can "just do it.""

James is calling out efficiencies here.

But he betrays himself.  He states "if you think regression checks will help you, you can of course put them in place."  I've never heard of regression as an option.  It's never an option.  You MUST have regression.  James knows that's the shining star of automation and is trying to downplay it to bolster his position.  How could any QA team not do automation?  Many issues, in my experience (and I dare say in software development generally), are breaks to previously existing code, due to commits for new features.  The new feature gets a commit to trunk, and the commit unknowingly stifles an event that is needed for another feature - or changes a base class or property file, and something seemingly unrelated snaps.  Regression is NECESSARY.  It's never "well, if you want it..."

As we read on, we see he's claiming that it's more efficient to "just do it" - that there's just too much overhead in writing all that plumbing.  If you are testing a new feature, you don't have the luxury of knowing all its ins and outs.  You have to start from scratch.  In that same moment you can also write code.  That's right, WRITE THE CODE BEFORE YOU GET THE FEATURE.  When you go into planning for a new feature, the developers will need time.  As they spend time writing their code, you write out your code and test cases.  BDD makes this easy - YOUR TEST CASES BECOME YOUR CODE.  How cool is that?

Meaning you write out your test plan in BDD.  At eHarmony we use GEB, and the BDD is added to each story during planning.  But this could be Cucumber: you plan it out, copy the BDD into a Cucumber feature file, for example, and then write the code it runs against.

BUT, some will say, there isn't any development yet - how could I possibly code for it?  Good question.  That's why you work with your developers hand in hand.  You know the URL you're hitting.  You know the basics, but you won't know the divs in the page, the classes and IDs you're trying to select.  You may not know the property file name, etc.  But you work with the developer, agree on the naming conventions, and write your test BEFORE you even get the code.

RED GREEN CLEAN

That's what red, green, clean means.  You write the test before you get the code, so it fails (red).  You get the code and, if it works, the test goes green (if your test needs changes, you make them and watch it go green - or report the bugs to the developer).  Finally you refactor (clean) the tests.

In this sense, you are literally building the "plumbing" as James Bach calls it, while you are writing the test plan. Is it really that hard? Cucumber makes it easy. As easy as writing text. 

Of course if you can't write test plans, and you just "wing it" - then this will be very horrible for you. But then you aren't really testing with quality.

HOW BDD HELPS

So how does BDD help?  As I went through James Bach's comments and posts, I saw he suggests it doesn't help at all.  I'll respond with my take on how it does help.

Test Features become Test Plan Repositories

As you build upon your test feature set in Cucumber, you'll have a growing repository of your entire test plan history.  The feature files are easily human readable - no code.  The step definitions that they reference/call are where the code goes, so Cucumber feature files contain no code.  That means anyone in the business can read a test and understand it.

Rapidly Reusable Tests

Tests can be quickly run and rerun.  You don't need to call someone at 10pm to run through a test, or read some tester's documentation on how to do something you've never done before.

Anyone can kick off the tests

Plug Cucumber into Jenkins and you have a user interface to simply kick off a test repository against a test environment.  No special QA person or deploy person needed.  Everyone can see the failures and understand them.

Builds Tests that Anyone Can Read

If you're writing an automation framework without BDD, you'll have tests that no one will want to look at except coders.  You'll have code, with little indication of what the test is doing - save for the occasional comment.  This happens when people take Java and Selenium and try to make an automation framework just from the two.

Tests are as readable as a test plan

Again, the tests are easy to read.  Each scenario adds to the overall feature.  You can organize your features by sections, pages, functionality, etc.  But the feature files are simple to read, and it's easy to understand what's going on.

Data Validation

In Cucumber and GEB, you can build data tables into the test, to rapidly run a test for multiple iterations, passing through a variety of values and verifying the result.  This is faster than doing it by hand.  I have tons of examples of this on github.  You can have a table of 50 movie titles that you pass into a service endpoint, validating the JSON data returned for each title - the MPAA rating, the jpg img path, etc.  This test can be kicked off and finish in less than a minute.





Failures are obvious

When tests are run from a tool like Jenkins and a test fails, the failures are obvious.  The failure state is captured as the BDD step the test stops at.  For example, if a test fails at "Given a user is at http://www.google.com" then you know the failure is at that point.  Further details will be in the error message, such as "time out waiting for page to load" - so you may have a network or proxy error.  Even a non-technical person can get an idea of what's going on.

HOW AUTOMATION HELPS

Rapid regression

Unlike James Bach, who feels that regression isn't a necessity, I feel it always is.  Unless your business is making one-off pages, I can't see how you would never reuse code!  If you reuse any code, you MUST REGRESS.

At one job, we had two-week core code deployments.  This was a major deployment of a large web application/site: both front end and back end code, as well as services, might be deployed.  Manually regressing the trunk for each deployment would take 6-8 testers something like 4-5 days to cover all regression in 5 browsers (FF, IE8, IE9, Chrome, Safari) plus iPad Safari - not to mention mobile application regression.  That's a lot of testing.

We would build out a task list, assign people to the tasks, and then have a grid.  Each column in the grid was a browser.  Now figure that each test you do, you have to redo 5 times.  Why?  Because we're a customer-focused company.  People gave us money.  If they use Mac Safari and have an issue, it's a problem for us.  If they use IE8 and can't subscribe, we lose money.  We must cover regression in all supported browsers.

If, instead, we cover regression with automation, the automation might take 6-8 hours to cover the entire site, but that pales in comparison to 4-5 days!

Understanding

When you manually test, you are most likely always in the "black box." That's not necessarily bad, but you don't have any idea how something is made, or how it is working. The more you know, the better edge cases you can find.

Capybara and jQuery

One of the nice things about Cucumber is the variety of different elements you can add to the stack.  Typically, in an automation framework you have:
1. base language
2. framework itself
3. web driving technology (for front end tests)

In GEB it's like this:
1. Groovy
2. GEB (which adds Spock for the BDD ability)

In Cucumber you have a lot of choice  here:
1. Ruby/Java/Groovy/etc
2. Cucumber (which has Gherkin for the BDD ability)
3. Watir/Capybara/Webrat
4. you can throw other stuff on the stack, like Rspec, etc.

I was asked recently, "why do you use Watir for your Cucumber/Ruby tests?"  I basically answered, "well, because it was there."  It was the first web-driving/controlling element of the stack I used.  There are other choices - Capybara, for example.

While I'm new to Capybara, there are some interesting things I've found with it.  One of the nice things is the ability to execute scripts.  I don't know how to execute JavaScript or jQuery with Watir (and hence I'm assuming it's not possible).

But in Capybara you can do something like this (examples taken from jnicklas):
page.execute_script("$('body').empty()")
 
Sure enough, Ruby can be used to do calculations (so I'll omit that example.)
What's great about this is that I can run jQuery... and what's so great about jQuery?  Well, if you use Firebug in Firefox, it has a console that lets you run/execute jQuery in the page.  Sometimes elements in dynamic pages can be tricky to find and manipulate.  I find this true with Microsoft pages, for example (like windows.com) - jQuery makes this easier... here's why:

  1. If you work on a team, you usually have a front end dev who's a wizard at jQuery... such a person can easily give you a query, or improve your own, to find or manipulate elements on the page
  2. I have found situations where Watir has a hard time finding an element, and jQuery finds the element fine
  3. jQuery is tried and true and has a strong community backing it
  4. Firefox lets you run jQuery from the Firebug console... this way you can verify what elements you can manipulate and how you can access them.
You can find more about jQuery at: http://docs.jquery.com/

Capybara lets a person run jQuery by simply doing a: page.execute_script and then passing the jQuery in.

Again, maybe Watir has a method of doing this, I haven't found that to be the case though - so when I get in a bind, I use Capybara's execute_script functionality to run jQuery as needed.

The testing stack I personally use for automation allows for using Capybara, along with Watir's easier (imo) way of managing the browsers:
group :test, :development do
  gem 'cucumber-rails'
  gem 'database_cleaner'
  gem 'rspec'
  gem 'spork'
  gem 'capybara'
  gem 'watir-webdriver'
  gem "gherkin", "~> 2.11.2"
end

Everything is an object

One thing I really love about the Ruby/Cucumber framework is that everything is an object.  I'm allowed to do something like this:

divlength = @browser.div(:id=>"divValue").wait_until_present(5).length

I can chain actions/methods together really easily... again, like:
@browser.div(:id=>"search-results-container").wait_until_present(5).click

This is really cool when trying to click an "a" tag in a div, where the HTML might be:
<div id="div1" class="containerdiv" name="links">
   <a href="http://www.someplace.com">click here</a>
</div>

Since the a tag doesn't have an id or class, you can still reference it by chaining, like so (using Watir syntax):
browser.div(:id=>"div1").a.click

It's logical and makes sense. It's also pretty easy.

It's letting me treat div(:id=>"div1") as an object to which I apply the method "a" - which says "hey, look for the a tag" - and then that result is itself an object, so I can apply the "click" method to it, saying "ok, now click that."  You could throw a wait_until_present method in front of the click, to give the tag some time to load on the page, etc.

But it gives a great example of everything being an object.