3 Things Computers Still Can’t Do (Yet)

The Research and Analytics team here at OpenView Labs is constantly collecting data of all shapes and sizes. These tasks range from in-depth analysis of a single function within a company, to broad-based web research across thousands of websites. To accommodate such a range of research projects, we have multiple options:

  1. We can do the secondary research ourselves
  2. We can outsource it to our offshore research teams in Romania and the Philippines
  3. We can write a script to collect it automatically

Faced with these three choices, we often have to make decisions about the division of labor between people and technology. As a result, we have come to learn the strengths and weaknesses of machines fairly well.

Wait, computers have weaknesses?

The immense power of machines is thoroughly documented in virtually every '80s sci-fi action movie: Terminator, Robocop, Knight Rider, Blade Runner, and my personal favorite, Short Circuit. Because computer limitations don't result in explosions, crime fighting, or global Armageddon, they're far less heralded. But they do exist. Computers are not a panacea that can do everything better, faster, and cheaper than a human being.

3 Things Machines Still Suck At

Here are the three issues that I’ve struggled with when trying to train a computer to do human-like analysis on the Internet:

1) Robots stink at asking questions

A program will raise an exception when it can't handle a given command, but only if the problem matches one of its neatly defined error cases. Unexpected problems that don't fit that list are simply ignored, to the detriment of the final output.

For instance, a script can find a phone number on a web page, but if it unexpectedly finds multiple phone numbers, it won’t know to stop and ask you which one it should be collecting. It doesn’t have the experience or intuition to step outside of its narrowly defined objective, like one of our researchers would if they realized what they were doing simply didn’t make sense.
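One way to build that "stop and ask" behavior in by hand is to make the scraper refuse to guess. This is a minimal sketch, not our actual collection code; the regex and the `AmbiguousPageError` name are invented for illustration:

```python
import re

# Naive US phone-number pattern, e.g. (617) 555-0101 -- an assumption
# for this sketch, not a production-grade matcher.
PHONE_RE = re.compile(r"\(\d{3}\)\s?\d{3}-\d{4}")


class AmbiguousPageError(ValueError):
    """Raised when a page needs a human decision instead of a guess."""


def extract_phone(html: str) -> str:
    """Pull the single phone number from a page, refusing to guess."""
    matches = sorted(set(PHONE_RE.findall(html)))
    if not matches:
        raise AmbiguousPageError("No phone number found -- flag for a human.")
    if len(matches) > 1:
        # This is the step a naive scraper skips: surface the ambiguity
        # instead of silently collecting the first match.
        raise AmbiguousPageError(f"Multiple phone numbers found: {matches}")
    return matches[0]
```

The point of the design is that both failure modes halt the pipeline and queue the page for a person, rather than quietly producing a plausible-looking but possibly wrong row of data.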

Unless you're relatively certain that there won't be any unexpected problems in the task you're asking a program to do, it's best to leave the task to a person.

2) A computer doesn’t intuitively know what’s important

Any adult human being can read a well-designed webpage and instinctively know what's meant to be important. The author has provided us clues in the form of location, color, contrast, and font so that we instantly know what to focus on. A computer comes with no such function, and it's actually remarkably difficult to install one. It's great at returning every instance of feature X on a website, but terrible at returning the most important one.
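You can approximate those visual clues with crude heuristics, which is exactly why the result is so brittle. Here is a sketch using Python's standard-library HTML parser; the `TAG_WEIGHTS` table is an invented stand-in for the size and prominence cues a human reads instantly:

```python
from html.parser import HTMLParser

# Invented heuristic: text in a headline tag "looks bigger" to the
# script than text in a plain paragraph. Real pages defeat this easily.
TAG_WEIGHTS = {"h1": 5, "h2": 3, "strong": 2, "b": 2, "p": 1}


class ImportanceScorer(HTMLParser):
    """Score each text fragment by the tag that encloses it."""

    def __init__(self):
        super().__init__()
        self.stack = []    # open tags, innermost last
        self.scored = []   # (weight, text) pairs

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()

    def handle_data(self, data):
        text = data.strip()
        if text:
            tag = self.stack[-1] if self.stack else None
            self.scored.append((TAG_WEIGHTS.get(tag, 0), text))


def most_important(html: str) -> str:
    """Return the fragment the heuristic thinks the page emphasizes most."""
    scorer = ImportanceScorer()
    scorer.feed(html)
    return max(scorer.scored)[1]
```

The weakness is the same one described above: the table encodes one guess about what "important" looks like, and any page that signals emphasis through layout, color, or images instead of tags will fool it.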

3) Computers have no sense of style

When considering the merit of a website or the validity of its content, how well it's designed can be an important clue. Even without any formal training in UX, we all know the difference between a crappy '90s website and a professionally made one. A nice website just makes you feel better and breathe easier.

Unfortunately, robots neither feel nor breathe, so they're pretty terrible at replicating what for humans is an incredibly easy task. That means they can be duped by low-quality sources like content farms, spam blogs, and other bots.
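The best a script can do is look for crude proxies of bad design rather than actually judging it. This sketch flags a page if it contains markup that largely disappeared from professionally built sites; the signal list is an invented example, and a real quality model would need far more than string matching:

```python
# Invented signals that a page predates modern design practice.
# A human spots a dated site in a glance; a script needs a checklist.
LEGACY_SIGNALS = ("<font", "<marquee", "<blink", "bgcolor=")


def looks_dated(html: str) -> bool:
    """Crudely guess whether a page 'feels' like a crappy '90s website."""
    page = html.lower()
    return any(signal in page for signal in LEGACY_SIGNALS)
```

Note how shallow this is compared with the human judgment it imitates: a content farm with a clean modern template passes the check instantly, which is exactly how bots get duped.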

Learning from Technology

Of course, none of these problems are insurmountable. If you can teach a computer to recognize a human face (and you can), I have a hard time believing that you can't teach it to tell the difference between a good and bad website. It's just that those kinds of soft skills don't come standard for computers in the same way that speed, obedience, and a relentless work ethic do.

So maybe someday, with a lot of work, I'll teach a computer to ask poignant questions about the task I've asked it to do, implicitly prioritize its own output, and learn what a modern website looks like. But until then, I'll keep an eye out for these limitations and delegate those tasks to a living, breathing person.

Nick Petri
Behavioral Data Analyst

Nick is a Behavioral Data Analyst at Betterment. Previously he analyzed OpenView portfolio companies and their target markets to help them focus on opportunities for profitable growth.