Automation is not the panacea I thought it was.
Implemented incorrectly, an automation system will waste more of your time than it saves. Sure, you won't have to run through the same checklists over and over, but your time will instead be spent tracking down a disproportionately large number of false positives. At first, I thought: "Hey, at least it's code! It's got to be more fun than just using the product, right?" Now I'm not so sure.
In robotics and computer graphics, there's a problem known as the "uncanny valley": as you get closer to modeling realistic human behavior, the results become unsettling. I propose that there's a similar problem with automation: the closer you get to simulating human behavior, the more problematic the results. (A bit of a stretch, and due to an entirely different set of problems, but bear with me.)
Humans are good at dealing with the abstract. Take, for instance, the hard-to-read text you're supposed to identify when signing up for something (it's called a "CAPTCHA"). Reading these is (usually) easy for a human; we look at the blurry, distorted mess and see letters. This is something we are good at. Writing a program that can read those things, on the other hand, is a difficult computer science problem.
Conversely, say you had to create dozens of accounts somewhere (perhaps as part of testing an online service). Completely filling out the name, address, interests, secret question, etc., is boring, repetitive, and prone to mistakes. These sorts of tasks are not our forte. However, it's easy to write a script that fills in all of the fields for you. (Especially with a web page, since you can interact directly with the DOM.)
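A rough sketch of that idea in Python: generate the repetitive signup data in code instead of typing it dozens of times. The field names and account details here are made-up placeholders; a real script would POST each submission to the signup page, or drive the DOM with a browser-automation tool.

```python
from urllib.parse import urlencode

def make_account(n):
    """Build the form fields for one throwaway test account.
    All field names and values are hypothetical examples."""
    return {
        "name": f"Test User {n}",
        "email": f"testuser{n}@example.com",
        "address": f"{n} Example Street",
        "interests": "testing",
        "secret_question": "favorite color",
        "secret_answer": "blue",
    }

# Encode each account as a form-submission body. A real script would
# send these to the service under test instead of just printing a count.
submissions = [urlencode(make_account(n)) for n in range(1, 31)]
print(len(submissions))  # 30 accounts' worth of form data, zero typos
```

The script never gets bored, never transposes digits in an address, and can churn out account number thirty just as cheerfully as account number one.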
This example illustrates the balance between manual and automatic tasks. Humans are good at abstract reasoning and the big picture. Computers are good at repetition and precision (note I didn't say accuracy :-P).
Human users have no problem with slight changes in design or layout (and sometimes they won't even notice); computers tend to go apeshit. This is what causes so many of the automation failures you'll be tracking down. Maybe the browser started minimized, or another window popped up in front of it and stole focus (automatic updates, anyone?). Maybe the page took a few seconds longer to load, and the expected items weren't there when the script checked for them. Maybe the designer moved a button or changed a label. Or, maybe your test itself was wrong! There may be a legitimate outcome that you, author of the test, didn't think of. (When tests are code, they can have bugs too!)
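One common mitigation for the timing class of failure is to poll for a condition with a deadline instead of checking once (or sleeping a fixed amount and hoping). A minimal sketch, where `condition` stands in for whatever lookup your framework uses to find the element:

```python
import time

def wait_for(condition, timeout=10.0, interval=0.5):
    """Poll `condition` until it returns a truthy value or `timeout`
    (seconds) expires. More robust than a single check when a page
    loads a few seconds slower than usual."""
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.1fs" % timeout)
        time.sleep(interval)

# Usage (hypothetical page object):
#   button = wait_for(lambda: page.find_button("Submit"))
```

This doesn't help with moved buttons or renamed labels, of course; those still need a human to update the script.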
Computers shine at precisely defined problems. This suits them well for testing behind the scenes, where the inputs and outputs are not so abstract. If I send this packet, does the server give back that response? If I call this function with this value, do I get back that answer? What makes this sort of testing and verification so boring (and difficult) for humans is precisely what makes it so perfect for computers.
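The "call this function, get that answer" style of check is exactly what unit-test frameworks automate. A minimal sketch using Python's stdlib `unittest` (the `parse_port` function is a made-up example, not from any real project):

```python
import unittest

def parse_port(s):
    """Example function under test: parse a TCP port number from a string."""
    port = int(s)
    if not 0 < port < 65536:
        raise ValueError("port out of range")
    return port

class PortTests(unittest.TestCase):
    def test_valid(self):
        # Known input, known expected output -- a computer checks this
        # thousands of times without ever getting bored.
        self.assertEqual(parse_port("8080"), 8080)

    def test_out_of_range(self):
        with self.assertRaises(ValueError):
            parse_port("70000")

if __name__ == "__main__":
    unittest.main()
```

Once written, checks like these run on every build for free, which is exactly the kind of repetition that would drive a human tester mad.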
Another area where automation shines is tools. Running a series of installers and patches... opening dozens of web browsers to a series of long, convoluted URLs... finding all the logs from a certain period of time, spread over dozens of computers, compressing them, copying them to a central repository, and emailing all interested parties about their availability... these are the sort of mindless, error-prone tasks that waste tester time and are just begging to be automated.
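The log-gathering chore, for instance, sketches out to a few lines of Python. The paths and the time window below are placeholders, and a real version would also pull from the remote machines rather than a local tree:

```python
import time
import zipfile
from pathlib import Path

def collect_logs(root, archive, newer_than):
    """Bundle every *.log file under `root` that was modified after
    `newer_than` (a Unix timestamp) into a single zip archive."""
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for log in Path(root).rglob("*.log"):
            if log.stat().st_mtime >= newer_than:
                # Store paths relative to the root so the archive is tidy.
                zf.write(log, log.relative_to(root))
    return archive

# e.g. everything from the last 24 hours (hypothetical path):
#   collect_logs("/var/log/myapp", "logs.zip", time.time() - 86400)
```

Ten minutes of scripting, and a tedious half-hour ritual becomes a single command.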
The takeaway is that automated tests are most useful when they augment human testing, not when they try to replace it. Automation should simplify the lives of human testers -- by taking over the tasks humans are inherently bad at -- so they can focus on what they do well: finding problems in the user experience.