Algorithms will never replace humans at software testing, says House of Test’s Managing Director Ilari Henrik Aegerter. We need the adaptive and curious human mind for the hardest cases. This is why Ilari doesn’t believe in tester certifications, drive a Tesla, or strive to live forever. He does believe in people’s desire to do a good job, though.
House of Test calls themselves rebels. The software consulting company’s rebellion is against formalistic testing schemes with endless reports and documents. “Any time you spend creating reports is time away from testing a product”, Ilari points out. The team also pushes against the industry trend by emphasizing human agency over automation.
This is what excites Ilari about software testing: the field combines engineering, psychology and sociology. Machines and processes are great tools for achieving some goals faster and more accurately, but the tools will never be able to solve the toughest problems on their own. The most difficult technical problems are the specialty of Ilari’s company.
“If your software could potentially kill people, you have to test it”
Companies often see software testing as an extra cost to be avoided: if engineers just worked carefully, it wouldn’t be needed at all. That’s fine if your product shows cat pictures, Ilari grants. Everyone else should ask: can my software kill people? If the answer is yes, you have to test it for undesirable effects.
All software tests serve to answer two questions: does the program do what it’s supposed to do, and does it do something undesirable? The second is the tricky one, as there are infinitely many unintended things software can do. Self-driving cars are a good example: autonomous vehicles face countless real-life situations when navigating traffic, and no company has yet built a 100% safe solution. Ilari says he doesn’t drive a Tesla because he finds Tesla’s autopilot development particularly ruthless; in his view, the company doesn’t test its products adequately to prevent the worst outcomes.
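The distinction matters in practice: the first question is answered by checks of intended behavior, the second by probes for harmful effects that nobody specified. Here is a minimal sketch of the difference, using a hypothetical pricing function and pytest-style tests (none of this comes from House of Test):

```python
# Illustrative only: a hypothetical discount function and the two kinds
# of test question every software test tries to answer.

def apply_discount(price: float, percent: float) -> float:
    """Return the price reduced by percent, clamped to sane bounds."""
    percent = max(0.0, min(percent, 100.0))  # guard against nonsense inputs
    return round(price * (1 - percent / 100), 2)

def test_does_what_it_should():
    # Question 1: does the program do what it's supposed to do?
    assert apply_discount(100.0, 20) == 80.0

def test_does_nothing_undesirable():
    # Question 2: does it do something undesirable? That space is infinite,
    # so we can only sample it -- here, a discount must never raise the
    # price or push it below zero, even for absurd inputs.
    for percent in (0, 50, 100, 150, -10):
        result = apply_discount(100.0, percent)
        assert 0.0 <= result <= 100.0
```

The second test can only ever sample the infinite space of bad outcomes; a machine runs the samples, but someone has to imagine them first.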
Exploring the infinite ways things can go wrong is where humans have the upper hand, Ilari remarks: we can answer questions we’ve never heard before and behave reasonably in entirely new situations. Answer quickly: can you fold a watermelon?* Algorithmic test automation won’t know what to make of the question, because an algorithm can only process queries whose answers were determined in advance. Humans might find the question bizarre, yet they immediately know the answer is no.
Ilari doesn’t spare his scorn when talking about the prominent software testing certifications, which are built as a series of standardized steps for a tester to follow. These five-letter certifications are just a money-making scheme aimed at corporate HR departments checking boxes, Ilari declares. He even put up a satirical certification site, IQSTD.org, which is about as useful as the official ones, but funnier.
"We trust people to make good decisions and learn"
If certifications aren’t the answer, how does one recognize a good software tester? House of Test receives a ton of applications every time it publishes an open position, and only 0.5% of applicants get hired. As the company specializes in the hardest technical problems, all their staff have a solid software engineering background. Beyond these hard skills, Ilari looks for curiosity. What have the candidates learned since graduation? Do they have a wide variety of interests? A good tester needs to learn new things often, so a passion for learning is crucial.
Another soft skill House of Test screens for is a sense of humor. When a tester finds a bug in a client’s software, they’re essentially telling a developer that their baby is ugly. Good humor and some people skills go a long way in softening the blow, Ilari explains.
People’s brains are a company’s most important capital, Ilari declares. These brains need to keep learning: reading books, attending conferences, taking courses. Large organizations often have cumbersome approval processes for further education, while House of Test gives every employee an annual education budget of CHF 10,000 and full control over it. In nine years, Ilari has only had to intervene once.
This is part of Ilari’s work philosophy: if you trust people, they’ll act in good faith. If you treat them like babies, they’ll behave like babies. Some companies expect employees to use any opportunity to leech off their employer, and these organizations optimize their structures around preventing that. Others believe that people are generally good, with few exceptions, and these companies optimize themselves for people who want to do great work. Ilari left a career at eBay to get away from the corporate mindset and to build a healthier professional environment at House of Test.
"The finiteness of human lives are a feature, not a bug"
The leader of these liberated rebels calls himself a cyborg-friendly humanist. The moniker describes Ilari’s love for the merger of hard and soft sciences, of technological progress and the ever-constant yet elusive human condition. It’s also about his love for language: programming languages are built to be very deterministic, while human languages are rich with connotations, irony and humor. It all comes together in the field of software testing.
‘Cyborg-friendly’ also means being open to physical augmentations. Ilari would love to get an eye implant that displays people’s names. He draws a line at uploading his consciousness, though. Ilari doesn’t want to live forever; the finiteness of human lives is a feature, not a bug. If we lived forever, we’d have no motivation to do anything, Ilari believes. After all, why do something today if you have a million more years to get to it? An infinite human mind would grow complacent and lose its curiosity.
With his finite mind, Ilari has the drive to change what corporations see as good testing. Ilari would also pick up the phone if Elon Musk asked him to help test his autonomous cars. Or even better, SpaceX rockets.
*The watermelon question comes from Harry Collins’s book “Artifictional Intelligence”.
This article is partially based on this podcast episode.