I ran across this page recently - a blog by a Software QA tester named James Bach: http://www.satisfice.com/blog/archives/638
It's interesting. I also ran into another QA blog that had similar sentiments. It's a backlash against the momentum carrying Software QA into a BDD/automation-driven realm.
I was once just like that. But my opinion changed drastically when I saw the benefits of writing code, using BDD, and building an automation framework.
eHarmony presented me with the opportunity to learn some code and be part of a Continuous Integration / Continuous Deployment program. In that program I picked up the language Groovy, and we ultimately used a BDD framework in Geb/Spock. Geb plays a role like Cucumber's (an automation framework) and Spock like Gherkin's (the BDD language).
An Example
James Bach's post, though, has some problems. In his comparison he pits a BDD example of ONE test from an epic arc (validating an ATM machine) against a manual tester covering tons of "stuff" (i.e. an entire test plan). That's not very fair. So let's make this fair. Let's talk apples to apples. No BDD framework would have only ONE test. It would have as many as are needed to cover the entire scope of the code being developed. For a website, for example, you may have tests that cover:
- UI Navigation
- Subscription to the website
- Communication between members of the website
- Cancellations
- Each Section of functionality
- Advertising
- Special events (free communication weekends)
- Registration
Let's take an example, like a form-based Registration. A simple example: Registration is just a form with fields. We would break this down into individual tests (these tests are the same whether you are doing BDD, manual testing, or automation testing):
- There would be a test on how the data is sent (JSON? service call? form POST?). However we capture the data, we have a test for it and verify the data is captured
- Another test might be front-end, happy path: fill out the form in the UI and submit, then verify the data was captured
- Another test might violate the fields with whitespace
- Another test might pass in invalid data (e.g. an invalid postal code)
- Another test might double-click submit and verify only one data entry is captured
- Another test might use words on a blocked list
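That breakdown drops naturally into a feature file. Here's a sketch of what a few of those tests might look like in Gherkin - the step wording and scenarios are illustrative stand-ins, not the actual suite:

```gherkin
Feature: Registration

  Scenario: Happy path - register through the UI
    Given a user is on the registration page
    When they fill out the form with valid data
    And they click submit
    Then the registration data is captured

  Scenario: Invalid postal code is rejected
    Given a user is on the registration page
    When they enter an invalid postal code
    And they click submit
    Then the form fails validation

  Scenario: Double-clicking submit captures only one entry
    Given a user is on the registration page
    When they fill out the form with valid data
    And they double-click submit
    Then only one data entry is captured

  Scenario: Words on the blocked list are rejected
    Given a user is on the registration page
    When they enter a word from the blocked list
    And they click submit
    Then the form fails validation
```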
Let's say we find some bugs while doing manual testing (yes, manual testing is still used). Say we discover that special characters entered in the name field are accepted when they shouldn't be. So we open a bug on it. Well, we also create an automation test to cover each bug found. That is then added to the BDD Feature.
What about cases where the page is dynamic, so inputting one value produces multiple dynamic changes on the page? The tester, as Bach points out, is testing all of that. So is the BDD test. You just create BDDs for each expected change. It's basic input/output.
Unlike Bach's example, where he took a solo BDD test (not considering the entire Feature) and compared it against manual testing of a fully fleshed-out test plan (which wasn't a fair comparison), I'm going to be much more fair here. The reality is that the BDD tests should match the test plan and test scenarios manual testers would be running anyway. There should be little difference.
In other words: you still have to write a test plan (whether you test manually, use BDD automation, or use a non-BDD automation strategy). No matter what, you need to structure your tests and report your results. Every tester should agree with that.
Once you agree with that, it's apples to apples. We're now comparing the same thing: the same manual test plan vs. the same automation test plan.
James mistakenly thinks that BDD just focuses on some simple subset of testing. You should have a BDD test to cover each test case... each test case in your test plan, validating the same results you would validate in manual testing.
Efficiencies
In his responses to comments on his blog post, James Bach says: "James' Reply: The idea that it helps you release faster is a fantasy based on the supposed value of regression check automation. If you think regression checks will help you, you can of course put them in place and it's not necessarily expensive to do that (although if you automation through the GUI then you will discover that it IS expensive).
BDD, as such, is just not needed. But again, if it's cheap to do, you might want to play with it. The problem I have is that I don't think it IS cheap to do in many cases. I'm deeply familiar with the problems of writing fixtures to attach high level "executable specs" to the actual product. It can be a whole lot of plumbing to write. And the temptation will be to write a lot less plumbing and to end up with highly simplified checks. The simpler they are, the less value they have over a human tester who can "just do it."
James is calling out efficiencies here.
But he betrays himself. He states "if you think regression checks will help you, you can of course put them in place." I've never heard of regression as an option. It's never an option. You MUST have regression. James knows that's the shining star of automation and is trying to downplay it to bolster his position. How could any QA team not do automation? Many issues, in my experience (and I dare say in software development generally), are breaks to previously existing code caused by commits for new features. A new feature gets committed to trunk, and the commit unknowingly stifles an event that another feature needs - or changes a base class or a property file, and something seemingly unrelated snaps. Regression is NECESSARY. It's never "well, if you want it..."
As we read on, we see he's claiming that it's more efficient to "just do it" - that there's just too much overhead in writing all that plumbing. If you are testing a new feature, you don't have the luxury of knowing all its ins and outs. You have to start from scratch. In that moment you can also write code. That's right: WRITE THE CODE BEFORE YOU GET THE FEATURE. When you go into planning for a new feature, the developers will need time. As they spend time writing their code, you write out your code and test cases. BDD makes this easy - YOUR TEST CASES BECOME YOUR CODE. How cool is that?
Meaning you write out your test plan in BDD. At eHarmony we use Geb, and the BDD is added to each story during planning. But this could be Cucumber: you plan it out, copy the BDD into a Cucumber feature file, for example, and then write the code to work against it.
BUT, some say, there isn't any development yet - how could I possibly code for it? Good question. That's why you work hand in hand with your developers. You know the URL you're hitting. You know the basics, but you won't know the divs on the page, or the classes and IDs you're trying to select. You may not know the property file name, etc. But you work with the developer, agree on the naming conventions, and write your test BEFORE you even get the code.
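To make that concrete, here is a minimal sketch - in Python, not Geb/Spock or Cucumber - of the "plumbing" that binds plain-text steps to code. Every step pattern, the URL, and the 5-15 alpha-character rule are invented stand-ins for whatever you and the developer agree on:

```python
import re

# A tiny step registry: maps step patterns to the code that runs them.
# This is NOT Cucumber or Geb - just an illustration of how the binding works.
STEPS = []

def step(pattern):
    """Register a function to run when a scenario line matches the pattern."""
    def decorator(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return decorator

# Shared scenario state (real frameworks pass a richer context object).
context = {}

@step(r'Given a user is at (?P<url>\S+)')
def go_to(url):
    context["url"] = url  # a real step definition would drive a browser here

@step(r'When they submit the name "(?P<name>[^"]*)"')
def submit_name(name):
    # Hypothetical contract agreed with the developer: 5-15 alpha characters.
    context["accepted"] = name.isalpha() and 5 <= len(name) <= 15

@step(r'Then the form (?P<outcome>accepts|rejects) the input')
def check(outcome):
    assert context["accepted"] == (outcome == "accepts"), outcome

def run(scenario):
    """Match each plain-text line to a registered step and execute it."""
    for line in scenario.strip().splitlines():
        line = line.strip()
        if not line:
            continue
        for pattern, fn in STEPS:
            m = pattern.match(line)
            if m:
                fn(**m.groupdict())
                break
        else:
            raise LookupError("no step matches: " + line)

run("""
Given a user is at http://example.com/register
When they submit the name "Brian"
Then the form accepts the input
""")
```

The point is that the G/W/T text you wrote in planning stays readable as-is, while the bindings underneath get swapped for real browser-driving code once the page exists.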
RED GREEN CLEAN
That's what red, green, clean means. You write the test before you get code, so it fails (red). You get the code and, if it works, the test goes green (if your test needs changes, you make the changes and see it go green - or report the bugs to the developer). Finally you refactor (clean) the tests.

In this sense, you are literally building the "plumbing," as James Bach calls it, while you are writing the test plan. Is it really that hard? Cucumber makes it easy. As easy as writing text.
Of course if you can't write test plans, and you just "wing it" - then this will be very horrible for you. But then you aren't really testing with quality.
HOW BDD HELPS
So how does BDD help? As I went through James Bach's comments and posts, he suggested it doesn't help at all. I'll respond with my take on how it does help.

Test Features become Test Plan Repositories
As you build up your test feature set in Cucumber, you'll have a growing repository of your entire test plan history. The feature files are easily human-readable. No code. The step definitions that they reference/call are where the code goes. So Cucumber feature files contain no code. That means anyone in the business can read a test and understand it.

Rapidly Reusable Tests
Tests can be quickly run and rerun. You don't need to call someone at 10pm to run through a test, or read some tester's documentation on how to do something you've never done before.

Anyone can kick off the tests
Plug Cucumber into Jenkins and you have a user interface to simply kick off a test repository against a test environment. No special QA person or deploy person needed.
Builds Tests that Anyone Can Read
If you're writing an automation framework without BDD, you'll have tests that no one will want to look at except coders. You'll have code with little indication of the test, save for the occasional comment. This happens when people take Java and Selenium and try to make an automation framework from just the two.

Tests are as readable as a test plan
Again, the tests are easy to read. Each Scenario adds to the overall feature. You can organize your features by sections, pages, functionality, etc. The feature files are simple to read, and it's easy to understand what's going on.

Data Validation
In Cucumber and Geb, you can build data tables into a test to rapidly run it for multiple iterations, passing through a variety of values and verifying the result. This is faster than doing it by hand. I have tons of examples of this on GitHub. You can have a table of 50 movie titles that you pass into a service endpoint, validating the JSON data returned for each title: the MPAA rating, the jpg image path, etc. This test can be kicked off and finish in less than a minute.
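A minimal sketch of that idea in Python, with the service call stubbed out - titles, ratings, and field names are invented, and a real test would hit the live endpoint and parse its JSON:

```python
# Each row plays the role of one Scenario Outline example: same test, new data.
MOVIE_TABLE = [
    # (title,           expected MPAA rating, expected jpg path)
    ("Example Movie A", "PG-13", "/img/example-a.jpg"),
    ("Example Movie B", "R",     "/img/example-b.jpg"),
]

def fetch_movie(title):
    """Stub for a service call; a real test would GET the endpoint and parse JSON."""
    canned = {
        "Example Movie A": {"mpaa_rating": "PG-13", "img": "/img/example-a.jpg"},
        "Example Movie B": {"mpaa_rating": "R",     "img": "/img/example-b.jpg"},
    }
    return canned[title]

def run_table(table):
    """Run the same validation for every row; return the mismatches found."""
    failures = []
    for title, rating, img in table:
        data = fetch_movie(title)
        if data["mpaa_rating"] != rating:
            failures.append((title, "mpaa_rating"))
        if data["img"] != img:
            failures.append((title, "img"))
    return failures

print(run_table(MOVIE_TABLE))  # prints [] - every row validated
```

Scaling the table from 2 rows to 50 costs nothing but data entry, which is exactly why table-driven checks beat a human re-keying the same request 50 times.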
Failures are obvious
When tests are run from a tool like Jenkins and a test fails, the failures are obvious. The failure state is captured as the BDD step the test stops at. For example, if a test fails at "Given a user is at http://www.google.com", then you know the failure is at that point. Further details will be in the error message, such as "time out waiting for page to load" - so you may have a network or proxy error. Even a non-technical person can get an idea of what's going on.

HOW AUTOMATION HELPS
Rapid regression
Unlike James Bach, who feels regression isn't a necessity, I feel it always is. Unless your business is making one-off pages, I can't see how you would never reuse code! If you reuse any code, you MUST REGRESS.

At one job, we had two-week core code deployments. Each was a major deployment of a large web application/site. Both front-end and back-end code, as well as services, might be deployed. Manually regressing the trunk for each deployment would take 6-8 testers something like 4-5 days to cover all regression in 5 browsers (FF, IE8, IE9, Chrome, Safari) plus iPad Safari. Not to mention mobile application regression. That's a lot of testing.
We would build out a task list, assign people to the tasks, and then have a grid. Each column in the grid was a browser. Now figure that each test you do, you have to redo 5 times. Why? Because we were a customer-focused company. People gave us money. If they use Mac Safari and have an issue, it's a problem for us. If they use IE8 and can't subscribe, we lose money. We must cover regression in all supported browsers.
If, instead, we cover regression with automation, the automation might take 6-8 hours to cover the entire site, but that pales in comparison to 4-5 days!
As I wrote to James... You have some good points about automation but the moment you think that you are safe just because you have automation in place, you will fail.
Automation will never be able to find you new bugs. It will never be able to replace the human imagination, or the desire to break the system or just test it. If you think like that, you will quickly find yourself in no man's land.
I have a project where many, many things are automated with BDD. Yet one day of exploratory testing surfaced 10-15 new bugs.
Also, if you have Gherkin files cluttered with tables and data and edge cases and whatnot, they will lose their expressiveness and their role as a description of your system.
Automation has merits. James never denied that. However, manual testing - and not just black box testing, as you would state - has invaluable information and merit. Dismissing it is a path to the dark side.
The second half of my reply is this:
I don't understand your point about Gherkin... as there are no "Gherkin files"; rather, Gherkin is the Cucumber language for the "Given/When/Thens" of Dan North's BDD process. The data tables are called Scenario Outlines... and I don't understand your point on how we lose the expressiveness and the description of the system.
The Scenario Outline doesn't modify the test to make it lose its definition. It keeps the test intact but allows the same test to iterate over different data. Example: a test to fill out a form. The test could originally use English, but I could add a data table to run the same test entering other languages - the first run does it in English, the second in Spanish, the third in French, etc. Nothing is lost. The spec is the same.
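Sketched as a Cucumber Scenario Outline (the step wording is hypothetical), that language table looks like this:

```gherkin
Scenario Outline: Fill out the form in each supported language
  Given a user is at the form
  When they fill out the form in <language>
  And they click submit
  Then the data is captured

  Examples:
    | language |
    | English  |
    | Spanish  |
    | French   |
```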
The problem with relying too much on human black box testers, as Karlo wants - or on, say, technical manual testers, as you describe - is human burnout. At eHarmony we had a release to prod every two weeks. I managed the offsite manual testers who regressed this. As they regressed a huge amount of the site and functionality in 6 browsers, they would start to get test blindness.
An error that should be obvious is lost on them, because they have run these same tests every two weeks, over and over, in multiple browsers.
While automation has its faults, in that it can only run what was programmed, the human also has faults, in that they will burn out doing the same test.
So we get creative, we rotate out testers, or rotate the tests... only to find the same problems occur more and more.
I've seen nothing but benefit in having technical QA... and I've seen nothing but benefit in adding automation infrastructure. How else can you handle regression on tight turnarounds?
Companies today want to release sooner, with more coverage. How can you get 3 guys to fully run through regression, only to find you need a new build and they have to redo that 3-day process - but now with only 10 hours? Automation greatly helps.
Automation's gold stars are a) running the UI on each dev commit to ensure dev quality, and b) verifying regression ad hoc, without human burnout.
This was actually the first part of my reply to you... it got lost... I'm moving it here:
Gergely, there is never safety in QA (automation or manual testing).
There seems to be this idea that "we're either a full on black box tester, or a full on dev trying to be QA." That's erroneous.
Nowhere did I say "automation rules the world." But you can't be productive on an Agile team anymore without it. Nor does that mean that manual QA isn't needed.
When I started the gig I'm at now, they had no QA, no automation, no test plans, nothing! :) I sat down and started going through their sprint stories. I wrote Given/When/Thens for each story... I would roll through the UI as I wrote them; in the process I'm manually testing the functionality and exploratory testing (to Karlo's point)... I'm also building new questions about how their app works. As I build out more G/W/Ts to cover positive, negative, and edge cases, I'm building the entire test harness/plan.
It's the same process we'd follow if we were just manually testing. If I had a situation where BDDs were written and one day of exploratory testing found 10-15 defects, I'd have to say the test methodology is flawed - not BDD, not automation, but the core is flawed.
The automated or BDD process should be the same as a manual process. If you take a story and write one BDD to cover a happy path, you've failed, IMO. The BDD needs to cover all possible scenarios you can think of (i.e. happy path, negative cases, exploratory). If you come up with a new idea (e.g. "what happens if I prepend whitespace to this text field on submit?"), add it to the G/W/T. If you don't document your exploratory testing, then the testing is very brittle. And if you document your testing, why not document it in G/W/T form that can be automated?
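For example, that whitespace idea drops straight into the feature file as a scenario. The wording is illustrative, and the expected behavior shown (stripping the padding) is an assumption the team would have to confirm:

```gherkin
Scenario: Whitespace prepended to a text field
  Given a user is at the form
  When they prepend whitespace to the text field
  And they click submit
  Then the field value is captured without the leading whitespace
```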
I then add code to automate the G/W/T. Nowhere in this process did I a) just do automation, b) just do manual testing, or c) ignore exploratory testing. The proper use of BDD should handle all of these things.
There's a serious danger of human burnout that I'll cover in a separate blog post.
Hi Brian,
I must object to your final quote:
"""When you manually test, you are most likely always in the "black box." That's not necessarily bad, but you don't have any idea how something is made, or how it is working."""
A good tester (the tester James had in mind) does not need to know "how something is made" in order to test the product. This is a typical developer impression of testers. A tester can find a lot of bugs without that knowledge.
However, a good tester will ask questions in order to find out how something is made. And in that process, the tester will also find a lot of bugs.
Using exploratory testing (http://www.developsense.com/resources.html#exploratory), a tester will find out how the product actually works, or doesn't work.
The problem with BDD is that while you are doing it, you are a developer, not a tester. The tester's job is to find as many problems as possible (within time and budget constraints). The tester documents their findings by writing bug/issue reports. The DEVELOPER automates those findings/bug reports, NOT THE TESTER.
Regards, Karlo.
Karlo - BDD is behavior driven. It is by its nature testing the behavior of the end user, in the process of writing the BDDs.
While BDD does link with development, the BDD language itself is not development. The keywords are meant to describe stages of the user experience: given a user is at this state, when the user does this, then this is the result. That interwoven net of BDDs creates the test harness for a QA tester using BDD. Said tester may not write one lick of code, but they will fully understand the web application through the process. In writing the BDDs (if done properly) you should find questions to propose to the business unit, further refining the harness.
There is nothing wrong with exploratory testing. Nor did I say there was. I of course know what exploratory testing is.
If someone came to interview with me with the attitude you wrote, I'd peg them as a Jr. QA Analyst. I expect much more from anyone in an onsite QA position. A good QA tester grows technically. They don't sit on the front end, and here's why:
If you know how the web app architecture works, you'll know better how to test it to break it (i.e. you'll be a better exploratory tester).
When I worked at eHarmony, we had a feature that used four services. Each service had a URI endpoint that data was passed to. The testers we farmed the front-end UI (black box) work out to would report bugs as "this doesn't work" or "not seeing expected result" - but where is the break? A good QA Engineer would say "I noticed I'm getting a 500 error from the 2nd service in this chain..." That level of detail aids the developers and gives you more insight for better exploratory testing.
Whether you are a Jr. QA, a QA Analyst, or a QA Engineer, I expect anyone working for me on a team to do exploratory testing, bug/defect tracking, and proper test writing. The more technical the person, the better the exploratory testing they do. Hands down. Development teams will no longer work with testers who say "shit broke"; they want someone who can say "This is failing due to a problem with the data being returned from the service..." It narrows the scope of dev resolution.
For your own benefit, becoming more technical will drastically benefit your career. A QA Analyst in L.A. makes between 50-80k (there are outliers, but this is the average I see). A QA Engineer with a good tech foundation can land jobs at 100k+. A buddy of mine went to work at a major streaming company in San Francisco; he started at 150k.
Also consider that lately the QA job market in the US is so competitive that you HAVE to have a technical background to even land a job.
Brian,
The last word in BDD is development. And I object that BDD is not coding. You have to write BDD code. And James' objection is that the tester is constrained by the BDD language.
Do you expect that users of your application will use it by writing BDD code? No, they will use a keyboard, a mouse, eyes, ears. What about users who are blind? They use your application not because they like how it is made (which technology is used). They use your application because they want to fulfill some of their needs (e.g. I use the Facebook web application because I want to send a message to a friend who also uses Facebook, not because I am crazy about the Facebook web application).
You probably have experience working with bad testers, because only a bad tester would file such a bug report ("this is not working"). But with your impression of what a good tester is, that does not surprise me. A good tester will give as much detail as possible in their trouble reports. As I stated before, they do not need to know from the start how something is made. They will ask questions to find out in more detail where things went wrong. In your case, I know that I would hit F12 in the Chrome browser to get more details about the error.
For the whole team it is better that, in the beginning, the tester does not know how something is made. They will discover that by testing and asking questions.
Testers have a great number of resources for elevating their skills. A very good starting point is "The Little Black Book on Test Design".
And you got the wrong impression about my technical knowledge. I picked up a lot of web technology in my testing career. Which does not exclude me, for example, from testing the business process at my local drug store. When I wanted to return an item after I purchased it, I found a serious bug that had legal consequences for the drug store. And I do not have a great economics education.
And mixing up the QA and testing professions is just wrong. Here you can find more details about that topic: http://goo.gl/jO9p9.
I am not one who hates BDD. BDD is a great tool for communication between testers, developers, and product owners. I just hate it when testers have to do it instead of developers. Testers must find as many product problems as possible within the budget constraint they have. And testers have to learn the topics from "The Little Black Book on Test Design"; they do not have time to learn BDD. Testers will tell developers what to code in BDD (I am quoting Gojko Adzic here), in order to be safe on the regression side.
Karlo - you still don't get BDD. Yeah, you know the last word is development, but BDD has two phases. The test phase is writing a test, much like writing a test plan. It's the same thing. In human language you write:
Scenario: testing a login form on my website
Given a user who is at the login page
When they fill out the form with valid data
And click submit
Then they have logged in
Given a user who is at the login page
When they attempt to login with bad data
Then they are refused login
etc. Do you see any code there? No. This is the planning phase. QA doesn't use BDD to write production code. We use it for specifications: go to http://en.wikipedia.org/wiki/Behavior-driven_development and scroll down to the Specifications examples.
It's just like writing an old-fashioned test plan. Once you have a BDD test plan in place, you can easily convert it to code via Cucumber. In a separate file called a step definition, you put in the code that drives the browser to simulate each step.
Your analogy of end users using BDD shows your lack of understanding of, and your resistance to, a concept, just out of spite. BDD is not used by end users; BDD replicates end user behavior! That's the whole point of Dan North's BDD system. Come on, man, use your head. Obviously end users aren't writing test plans. But these test plans reflect the end user behavior...
Karlo, maybe you're a good QA Analyst. But since you asked, what I expect from a "good" tester is to do more than hit F12 or check out Firebug. Here's what I expect, and what the industry expects, if you have a problem:
a) if data is involved, you check the db
b) you can set data up in SQL or NoSQL dbs
c) you know how services work
d) you can SSH to a service, verify the log files, and report back any exceptions thrown (this is a pain point for me, because most manual testers we have hired do not like, or do not know how to use, Linux to verify log files)
e) check cross browsers
I don't expect a good QA person to know how to automate, or to know all of the above - but they must be willing to learn these things to be effective testers today.
Well... a test is not a test just because you call it so. If you write the "test" first, then make an implementation for it to pass - that's some kind of executable design control, not a test.
ReplyDeleteTesting is a task where you objectively study something and gather information about it. Actually, I think it's been so for thousands of years.
So I could flip your own logic on you. You say "that's some kind of executable design control, not a test" - wait, "a test is not a test just because you call it so."
What BDD does is what you describe: it objectively studies something and builds parameters around the expected results... so what is the BDD objectively studying if the code has not yet been implemented? The requirements.
Unlike thousands of years ago, the computational layer of a computer or program gives us instant feedback. This makes BDD or TDD very useful.
You define the test based on requirements - like so:
Requirement: the form field for user name should only accept alpha characters, from 5 to 15 characters in length
BDD:
Given a user at the form
When they enter a user name that has alpha characters within the range of 5 to 15 characters
Then the field accepts their input
Given a user at the form
When they enter a user name that has numeric values
Then the field will fail validation
Given a user at the form
When they enter a user name with fewer than 5 alpha characters
Then the field will fail validation
Given a user at the form
When they enter a user name with more than 15 alpha characters
Then the field will fail validation
...etc.
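Those scenarios pin down one small predicate. Here is a sketch, in Python, of the validation they exercise (the function name is invented):

```python
def username_is_valid(name: str) -> bool:
    """Hypothetical check for the requirement: alpha only, 5-15 characters."""
    return name.isalpha() and 5 <= len(name) <= 15

# Mirrors the scenarios above: in-range alpha passes, everything else fails.
print(username_is_valid("brian"))    # True  - in-range alpha
print(username_is_valid("user123"))  # False - numeric values
print(username_is_valid("abcd"))     # False - fewer than 5 characters
print(username_is_valid("a" * 16))   # False - more than 15 characters
```

Note that a naive `str.isalpha()` also accepts non-English letters, which is exactly the kind of gap a tester then finds and feeds back into the scenarios as a new case.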
The developer builds his code but perhaps omits the check for more than 15 characters. The test fails. He fixes the problem and submits again. It passes.
Let's say it gets to a QA Engineer, who finds they can submit special characters and non-English characters. So they open a bug ticket. They also now write a BDD test, along with automation, to cover the case they just found:
Given a user at the form
When they input a user name with <characters>
Then the form field will fail
Examples:
|characters|
|~!@#$%|
etc.
This is obviously a test, and your comment is just trying to obfuscate my points in order to "win" an argument. If this were so wrong, Google/Facebook/eHarmony wouldn't have adopted similar approaches to agile design. It's a tried and tested approach that works very well for agile companies.