
AP Automation Testing

By Matthew Albert

Time For the Jeopardy

(Image: The Associated Press)

We don’t know about you, but we like game shows. Game shows are fun. Most of you who read this blog may be familiar with a guy by the name of James Holzhauer. No? How about Brad Rutter or Ken Jennings? Yes, we’re talking about the Jeopardy! Triumvirate of Almighty Knowledge. Do yourself a favor and watch their epic clash from January 2020. It’s worth your time even if you don’t watch the show.

Speaking of Jeopardy! gurus (R.I.P. to the spiciest of all memelords, Alex Trebek), we submit another name for your consideration–Roger Craig. Not the pro football running back, the Jeopardy! player. He held a whole bunch of single-day records until Holzhauer came along and wrecked his gaudy numbers. The way he got to that level of ridiculous, though, deserves a closer look.

Craig watched the show regularly and had a hunch that he was stronger in some categories than others. (Not really a bombshell revelation because most people who watch the show would feel the same way about their own performance.) However, he wanted to know exactly where his weak spots were and how wide those gaps ran. That’s where the story gets good.

Craig developed a series of data-mining and text-clustering algorithms to chart which clues came up more often than others. He used an archive of past Jeopardy! clues to increase his sample size. Then, he used memory training software to help him learn those clues, and he focused even harder on the clues that gave him the most trouble. The preparation was meticulous. The software made it faster. In short, one scary combination that gave a man the ability to unleash the unholiest of hellfires upon the most unsuspecting trivia buffs from all corners of these vast United States. The lambs didn’t even know they were about to be slaughtered.
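
If you’re curious what that kind of clue analysis might look like in practice, here’s a minimal sketch in Python, assuming a CSV export of archived clues with “category” and “clue” columns. The file name, column names, and cluster count are our own placeholders, not Craig’s actual pipeline; the idea is simply to vectorize the clue text, group similar clues, and count which topics come up most often.

    # A minimal sketch of clustering archived clues by topic.
    # The file name, column names, and cluster count are hypothetical placeholders.
    import pandas as pd
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    clues = pd.read_csv("jeopardy_archive.csv")  # hypothetical archive export
    texts = (clues["category"] + " " + clues["clue"]).tolist()

    # Turn clue text into TF-IDF vectors, then group similar clues together.
    vectors = TfidfVectorizer(stop_words="english", max_features=5000).fit_transform(texts)
    clues["topic"] = KMeans(n_clusters=25, random_state=0, n_init=10).fit_predict(vectors)

    # The topics that show up most often are the ones worth drilling first.
    print(clues["topic"].value_counts().head(10))

Pair a frequency count like that with your own right/wrong history per topic and you have Craig’s recipe: drill hardest where the clues are common and your recall is weak.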

When Craig got on the show, he bet fearlessly on Daily Doubles because he had the breadth of knowledge covered. The category data for his games matched the trends of his own research. In his second game, the man won $77,000–a new one-day record that eclipsed Jennings’s old mark of $75,000. Craig won 6 games in his initial run and then blitzed through the Tournament of Champions. It wasn’t even close. To give you a sense of how good he was, think about this sequence: On Day 1 of the Tournament of Champions final, he found the first Daily Double in Double Jeopardy and bet all 9K of his winnings. He nailed that one and then went hunting for the second Daily Double. When he found it, he bet all 18K of his winnings and got it right again. The man quadrupled up in less than 2 minutes of game time. Ambitious, bodacious, outrageous. (By the way, he won the whole thing…duh)

Was he good enough to win on his own without preparation? Probably. The software-aided preparation though? It made him unbeatable.

The Lesson and the Decision

In other words, Roger Craig decided that this prep was worth his time and effort given his circumstances. The risk was high, but so was the payoff. This risk-reward calculus is at the heart of most companies’ decision-making when it comes to AP Automation Testing. When a company aspires to the gold standard of DevOps (software development and IT operations combined), it has a critical decision to make about how much automation to build into its testing. As always, there are factors that must be considered first.

What To Consider

Before you unequivocally demand to automate all of your testing, consider these questions:

  • Is it cost-effective? If automation helps you provide high quality with lower expenses, you go for it. If not, you have a tougher decision.
  • Are your software tests overly repetitive? Repetitive tests run by humans mean lost time and potential inconsistencies. Automated testing saves you that time and lets your team focus on more productive tasks.
  • Will any time actually be saved? If automation doesn’t save you time once you account for writing and maintaining the tests, it’s not worth the move.
  • Can you run multiple tests at once? If you need to run a large number of tests simultaneously, your human testers can only do so much (even your best ones). Automation can execute far more runs in parallel than people ever could (see the sketch after this list).
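
To make the “repetitive” and “multiple tests at once” points concrete, here’s a minimal sketch in Python. The URLs and expected status codes are hypothetical placeholders, and a real team would likely reach for a proper test runner, but the idea is the same: one check defined once, executed many times, concurrently, with no human clicking through it.

    # A minimal sketch of automating a repetitive check and running many at once.
    # The URLs and expected status codes below are hypothetical placeholders.
    from concurrent.futures import ThreadPoolExecutor
    from urllib import error, request

    CASES = [
        ("https://example.com/", 200),
        ("https://example.com/health", 200),
        ("https://example.com/login", 200),
    ]

    def check(url, expected_status):
        """One repetitive check a person would otherwise run by hand."""
        try:
            with request.urlopen(url, timeout=10) as resp:
                actual = resp.status
        except error.HTTPError as exc:  # non-2xx responses arrive as exceptions
            actual = exc.code
        return url, actual == expected_status

    # Run every check concurrently instead of one slow manual pass at a time.
    with ThreadPoolExecutor(max_workers=8) as pool:
        for url, ok in pool.map(lambda case: check(*case), CASES):
            print(("PASS" if ok else "FAIL"), url)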

What To Watch Out For During Testing

So let’s say you’ve considered the above questions and made the decision to automate your testing fully. Let’s think about where companies go wrong so that you can avoid these pitfalls:

  • Drawing conclusions that aren’t data-based. Acting on hunches without the data to back them up means the same bad calls get made over and over again.
  • Developing software that isn’t customer-friendly. Congratulations, you’re the first contestant to buzz in with a solution. But what good is speed if you build something no one wants to use because it takes forever to learn?
  • Not preparing for hypothetical issues. When you’re developing any plan (not just software testing), you want to stress-test that plan against anything that could possibly go wrong. Skipping this step is often regrettable.

Big Goals To Focus On During Testing

  • Establish clear goals for your release. Set goals that every team member can understand and articulate in a sentence or two.
  • Don’t go for style over substance. Flashy is fun, but it isn’t a guaranteed winner. Substance usually has a better track record.
  • Quality over speed. Remember, you want the software to work. You want the software to be usable. You DON’T want the software to be quickly chucked in the bin.

We realize the questions and considerations above are fairly general. However, the point we’re trying to make is that only YOU can know the true risk-reward calculus for your own business. Make a decision based on good data, put your all into it, and you’ll sleep better at night over a big debate like this one.

January 13, 2021