Imagining a World without Language Barriers: A Conversation on the Future of AI with Unbabel’s Vasco Pedro
Vasco Pedro has always been fascinated with language and the window it provides into how we process information. His undergraduate studies focused on artificial intelligence and computational linguistics, and he then went on to earn his master's and PhD in natural language processing at Carnegie Mellon. Throughout his education, Vasco explored the fundamentals of how we think, how consciousness arises, and the core AI aspects of language.
Ten years later, Vasco, along with co-founder João Graça, merged these interests in the founding of Unbabel, an AI-powered translation platform. “Solving translation was the original reason AI was invented,” Vasco says. “Graça and I were frustrated that technology had made this huge promise to solve machine translation, but was still very far away from realizing that goal.”
It was during a surf trip that the would-be partners started to articulate their mission. “We knew there had to be a better way, and we saw that the translation space was ripe for disruption,” Vasco recalls. “The biggest question we were trying to answer was what the world would look like if there were no language barriers. What would the world GDP be?” The company’s mission became ensuring that all businesses can communicate seamlessly in any language, and AI would be a big part of the solution.
Under the Hood at Unbabel: Layers of AI
“We decided to combine the speed and ease of machine translation with the quality of human translation,” says Vasco. “But, in the beginning it was more of a transactional model that involved an order form, document uploads, and per-piece payment.” The team soon realized, however, that translation was often a recurring need because it was tied to ongoing conversations between a company’s customer service, sales, and marketing functions and the company’s customers. Based on this observation, they launched a subscription model that handled a variety of content types. The next iteration of the service refined Unbabel’s focus to the conversational layer of customer service.
“There’s a huge challenge when you scale your international customer support,” says Vasco. “You need to hire people who speak those languages. You need to start having offices in different countries to support those languages. It becomes fairly complex.” Unbabel addresses this challenge and complexity by “decoupling language from skill set” so that companies can hire people based on their product knowledge, centralize the customer service team, and more easily optimize resource management.
Behind the scenes, the Unbabel team thinks about AI as a collection of functions that work together to deliver the seamless translation experience that the company’s clients depend on. The most obvious AI element is machine translation. “We have our own machine translation engines that learn continuously with each translation,” explains Vasco. “The goal is to provide our Unbabelers with the best possible first translation.”
The other layers of AI in the Unbabel platform include a quality estimation module – a neural-based engine that estimates how good a translation is and whether or not it needs additional work by a human translator. A routing element then helps ensure that any human translation tasks are delivered to the most appropriate resource. “The routing piece identifies the ideal users within our community of 45,000 Unbabelers based on domain expertise and skill set,” Vasco explains. “And, then it makes those correlations and assignments on the fly.”
“The final AI element is what we call Smartcheck, which is a human augmentation element,” says Vasco. “This piece continuously scans the text and provides suggestions and corrections to the human editors. Smartcheck works as a kind of companion AI for the human translator.”
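The layered system Vasco describes – machine translation first, then quality estimation, then routing to a human editor when needed – can be sketched in a few lines of Python. This is purely an illustrative mock-up, not Unbabel's actual API: every function name, the scoring heuristic, and the quality threshold are invented for the example.

```python
# Hypothetical sketch of the pipeline described above: MT -> quality
# estimation -> routing -> (optional) human post-edit. All names and
# thresholds are assumptions made for illustration.

QUALITY_THRESHOLD = 0.85  # assumed cutoff above which no human edit is needed

def machine_translate(text: str) -> str:
    # Stand-in for a learned MT engine; here it just tags a fake translation.
    return f"[pt] {text}"

def estimate_quality(source: str, translation: str) -> float:
    # Stand-in for a neural quality-estimation model returning a 0..1 score.
    # Toy heuristic: penalize translations whose length ratio drifts from 1.
    ratio = len(translation) / max(len(source), 1)
    return max(0.0, min(1.0, 1.5 - abs(1.0 - ratio)))

def route_to_editor(editors: list[dict]) -> dict:
    # Stand-in for the routing layer: pick the community editor whose
    # domain expertise scores highest for this content.
    return max(editors, key=lambda e: e["domain_score"])

def translate(source: str, editors: list[dict]) -> dict:
    draft = machine_translate(source)
    score = estimate_quality(source, draft)
    if score >= QUALITY_THRESHOLD:
        # Quality estimation says the draft is good enough to ship as-is.
        return {"text": draft, "edited_by": None, "score": score}
    editor = route_to_editor(editors)
    # In the real system, a Smartcheck-style companion would now surface
    # suggestions and corrections to this editor as they work.
    return {"text": draft, "edited_by": editor["name"], "score": score}

editors = [{"name": "ana", "domain_score": 0.9},
           {"name": "rui", "domain_score": 0.6}]
result = translate("Hello, how can I help you today?", editors)
```

The point of the sketch is the control flow, not the models: each AI layer is a replaceable component, and the human editor only enters the loop when the quality-estimation layer decides the machine draft isn't good enough on its own.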
By augmenting human efforts with AI, Unbabel is able to provide valuable scalability and speed. “A typical human translator can process approximately 300 to 400 words per hour without Unbabel,” Vasco says. “Working in Unbabel, that same human translator can process approximately 1,200 to 1,400 words per hour – a significant increase in speed.”
This kind of combination of human and machine intelligence is very likely the most fertile space for further, practical development of AI; but we’re still a ways off from bringing that kind of reality into the mainstream. To get some context, we need to look at where we really are on the continuum of AI development.
The AI Continuum
“We’re still in a fairly primitive AI state,” says Vasco. “There’s a lot of stuff we don’t know about intelligence in general. We don’t understand how human intelligence works, which makes it hard to replicate. We’re really good at doing anything that can be cast as a classification task, but when it comes to reasoning – symbolic reasoning, reasoning in general, and general learning – there’s a lot of stuff that’s stuck in the 80s. We’re still about four or five great insights away from having something that would be truly autonomous.”
So, we’re nowhere near Skynet or The Matrix yet, but how far have we come? AI research dates back to the 1950s and ’60s. At that time, people expected that within ten years we’d have a system that could speak and translate like a human. Things didn’t quite turn out as planned, of course, and what Vasco refers to as the first “winter of AI” set in.
That first winter was followed by several other cycles of hype and setback: the 80s resurgence of symbolic AI and expert systems (which were predicted to replace doctors and other specialists) and the statistical machine translation wave of the 90s (which generated a lot of excitement but underperformed against expectations). Today, neural networks are the latest addition to the AI landscape, and – like their predecessors – they are under a lot of pressure to solve everything.
Part of what makes AI such a challenging area is the fickle nature of humans. “As humans, we’re very easily swayed by something that appears human,” explains Vasco. “But, at the same time, we’re also very easily frustrated by something that appears to be human, but really isn’t.” In other words, humans have a low tolerance for AI that doesn’t immediately and completely live up to their expectations.
Vasco cites Google Home and Amazon’s Alexa as examples of AI products that highlight the current constraints of the technology. “Google and Amazon are pouring a ton of money into these products,” Vasco points out. “They are doing some really cool things, but if you interact with either, you get an immediate sense of the limitations. If it was possible to go beyond what they’re doing, they would, because that would definitely be in their best interests.”
What Vasco finds even more exciting than AI-powered personal assistants is the progress made by innovators working on the idea of merging human and machine capabilities. “I’m a big believer in what Elon Musk and Ray Kurzweil are saying, that we will become the AI,” Vasco explains. “The real challenge is in the interface to our neocortex. Elon Musk just launched the neural lace challenge, and there are a few other projects going on as well.”
Vasco already sees this merging of human and machine in very everyday realities that we take for granted. “Phone numbers are already not in your brain. They’re in your phone,” he says. “Likewise, birthdays are in Facebook and directions are in GPS apps. It’s like Kurzweil has been saying – the singularity is us merging with machines and becoming highly augmented through that.”
“But,” Vasco says, “we’re far away from that. People associate AI with HAL from 2001: A Space Odyssey, but it’s still something much less advanced.”
Human Challenges to AI Technology
Coming back to the present-day state of AI, Vasco acknowledges that there are some challenges in the market, but he identifies the biggest of those as having more to do with the human mindset than with technological roadblocks. “AI is a word that captivates the imagination,” Vasco says. “So, everybody uses it – and often misapplies it, like ‘big data’ a while ago. It becomes a word with very little weight because everybody seems to be doing it, but no one is really solving it.”
Despite this issue of people claiming to be engaged in AI even though they don’t fully understand what that means, there are many companies doing solid work in the AI arena. “We’re seeing real impact in customer service,” says Vasco. “Companies like DigitalGenius are tackling customer service by automating certain tasks. I see their work as more of a ‘tier one’ approach in which you have AI as a first layer and then an intelligent way of upgrading or scaling that for humans.”
Vasco sees other inroads being made by companies that are applying AI to lead qualification processes and generative adversarial networks (GANs), but he has also observed an interesting phenomenon in which people stop thinking about AI as AI once a certain use case is working. “It seems that as soon as AI is successful, it stops being AI and becomes just technology,” Vasco says. “Things that, ten years ago, were definitely considered AI became ‘search’ or ‘machine translation’ or something else. We have a tendency to move the goal line for AI, making it seemingly unattainable. We may never be really happy with AI until it’s a conscious machine that we perceive as human-like.”
Four Tips for Startups with Big AI Dreams
Applying AI in the startup world comes with its own set of hurdles, but Vasco has four tips for anyone considering the possibility. The first has to do with being able to see a great idea through to fruition.
“When you look at startups, the challenge is that there are a lot of people with a great idea and the skill set to do it, but they end up using their initial resources to build infrastructure and all the other things needed to launch a startup,” Vasco says. “Take Unbabel: we had a great thesis on how to use AI to solve the translation problem, but it was almost two years before we were able to start doing anything related to AI. A lot of companies have that same problem. They’re talking the right way and focused on solving a specific problem with AI, but it’s hard to say if they really have the resources to put into evolving that field.”
Vasco’s second tip is to hire the right people. “Get someone on your team who really knows what they’re doing,” he says, not mincing words. “There’s a little bit of hype right now around AI and machine learning, and it’s easy to be fooled, especially if you don’t have personal expertise. But, it’s imperative that the AI person on your team really knows their stuff.”
Vasco also recommends giving yourself breathing room. “You’ve got to allow more time for experimentation,” he says. “You have two options. Either you’re doing an off-the-shelf kind of thing with a known problem (like classification), in which case it’s best to stick with proven techniques that will give you an immediate boost and let you capture the low-hanging fruit. Or, you need to give yourself more time for experimentation so you can come up with something really novel.”
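Vasco’s first option – tackling a known problem like classification with proven techniques – can be surprisingly simple in practice. As a purely illustrative sketch (the support tickets, labels, and nearest-centroid approach are all invented for this example, not anything Unbabel uses), here is a baseline text classifier built only from the Python standard library:

```python
# Stdlib-only sketch of the "off-the-shelf" route: a known problem
# (routing support tickets by topic) treated as plain text classification
# with a bag-of-words, nearest-centroid baseline. Data is invented.
from collections import Counter

def bag_of_words(text: str) -> Counter:
    # Lowercase and split on whitespace; counts of each token.
    return Counter(text.lower().split())

def train_centroids(labeled: list[tuple[str, str]]) -> dict:
    # Sum the word counts of all examples sharing a label.
    centroids: dict[str, Counter] = {}
    for text, label in labeled:
        centroids.setdefault(label, Counter()).update(bag_of_words(text))
    return centroids

def classify(text: str, centroids: dict) -> str:
    # Assign the label whose centroid shares the most word mass.
    words = bag_of_words(text)
    def overlap(label: str) -> int:
        return sum(min(words[w], centroids[label][w]) for w in words)
    return max(centroids, key=overlap)

train = [("refund my payment please", "billing"),
         ("payment failed twice", "billing"),
         ("app crashes on login", "bug"),
         ("crash when I open settings", "bug")]
centroids = train_centroids(train)
predicted = classify("my payment was charged twice", centroids)  # "billing"
```

A baseline like this is exactly the kind of “immediate boost” Vasco describes: it captures the low-hanging fruit quickly, and it gives you a yardstick to beat before investing the longer experimentation time that a genuinely novel approach requires.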
Finally, Vasco strongly suggests that you temper your expectations. “AI is great for helping humans, but – with a few notable exceptions – if you use it by itself and have high expectations of the output, you’re going to be disappointed,” he says. “While I believe in autonomous vehicles and think that we’re making great progress with machine translation, we’re still far away from AI being a magic bullet that we add to any product to make it better. You have to be cautious and realistic about expectations – your own, and your customers’.”