HP Tech@Work
Today's trends for tomorrow's business
Machines Can Learn to Think Like Us

Imagining a meaningful chat with your PC?

It’s pop quiz time. Can a computer answer this question correctly?
A girl is going across a set of monkey bars. She...
a) jumps up across the monkey bars.
b) struggles onto the monkey bars to grab her head.
c) gets to the end and stands on a wooden plank.
d) jumps up and does a back flip.
Until very recently, computers taking this quiz (one of thousands designed by AI researchers in Seattle) picked the correct answer, "C," only around 60 percent of the time. That's a respectable score for machines, but well short of people, who chose correctly 88 percent of the time.
Things changed in November 2018, when Google unveiled BERT (short for Bidirectional Encoder Representations from Transformers), which answered the quiz questions about as well as a person does.
BERT is the latest success in the field of natural language processing (known as NLP for short), which we already experience in our day-to-day lives with things like chatbots and auto-generated keyword tabs.
In BERT's case, researchers trained a bidirectional deep neural network on all of Wikipedia, producing a model that uses that context to analyze language-based problems nearly as well as our brains do.
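The "bidirectional" part means BERT looks at the words on both sides of a gap, not just the ones before it. Here is a toy, self-contained sketch of that idea (emphatically not BERT itself): it counts word pairs in a few made-up sentences, then scores each candidate word for a blank by how well it fits both its left and right neighbors.

```python
from collections import Counter, defaultdict

# A tiny stand-in corpus; BERT was trained on billions of words.
corpus = (
    "the girl crossed the monkey bars . "
    "the girl reached the end . "
    "the boy crossed the street ."
).split()

# Count which word follows which (bigram counts).
follows = defaultdict(Counter)
for left, right in zip(corpus, corpus[1:]):
    follows[left][right] += 1

def fill_blank(left, right, vocab):
    """Pick the word that best fits BOTH neighbors: a crude
    stand-in for BERT's bidirectional use of context."""
    def score(word):
        return follows[left][word] * follows[word][right]
    return max(vocab, key=score)

vocab = set(corpus)
print(fill_blank("the", "crossed", vocab))  # girl
```

A left-to-right model would only know the blank follows "the"; using the right-hand neighbor "crossed" as well is what narrows the guess to "girl".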
Hopefully, as NLP advances, we will eventually be able to have a proper conversation with a machine that doesn’t involve prompts and keywords (sorry, Alexa and Siri).
In the meantime, there are other ways that researchers are using machine learning to analyze language as well:

Natural language understanding

Considered a subset of NLP, this focuses on machine reading comprehension, analyzing large amounts of text by plucking out concepts, drawing parallels between ideas, and perhaps most impressively, interpreting emotion.
Think of NLU as a prerequisite to NLP: You can’t process something unless you understand it first. It’s already proving to be useful in a number of data-rich fields like transportation, education and medicine.
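One of the NLU capabilities mentioned above, interpreting emotion, can be sketched with a toy lexicon-based sentiment scorer. Real NLU systems learn these associations from data; the word lists and scoring rule below are illustrative assumptions only.

```python
# A toy sentiment lexicon; production systems learn this from data.
SENTIMENT = {"great": 1, "love": 1, "helpful": 1,
             "poor": -1, "late": -1, "broken": -1}
NEGATORS = {"not", "never", "no"}

def sentiment(text):
    """Sum word scores, flipping the sign of the word right
    after a negator, then report an overall label."""
    score, flip = 0, 1
    for word in text.lower().split():
        if word in NEGATORS:
            flip = -1
            continue
        score += flip * SENTIMENT.get(word, 0)
        flip = 1
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("the delivery was late and the box was broken"))  # negative
print(sentiment("not broken , actually great"))  # positive
</```

Even this crude version shows why understanding must precede processing: without handling the negator, "not broken" would read as a complaint.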

Real-time machine learning

Up until this point, much of machine learning has been dedicated to studying a large, static data set. But neural networks can also analyze an incoming flow of data in real time. The most immediate area this will impact is real-time translation, allowing business colleagues around the globe to meet and converse in multiple languages without awkward phrasing and miscommunication.
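The contrast between static and streaming learning can be shown with a minimal online model. This sketch (a classic perceptron, not a deep network, and all names here are invented for illustration) updates its weights after every incoming example instead of training once on a stored data set.

```python
class OnlinePerceptron:
    """Learns from a stream: one small weight update per
    arriving example, no stored training set required."""
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        s = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if s > 0 else 0

    def learn(self, x, y):
        """Nudge the weights only when the prediction is wrong."""
        err = y - self.predict(x)
        if err:
            self.w = [wi + self.lr * err * xi
                      for wi, xi in zip(self.w, x)]
            self.b += self.lr * err

# Simulate a live feed: label is 1 when feature 0 dominates.
stream = [([2, 0], 1), ([0, 2], 0), ([3, 1], 1), ([1, 3], 0)] * 20
model = OnlinePerceptron(n_features=2)
for x, y in stream:
    model.learn(x, y)

print(model.predict([4, 1]))  # 1
print(model.predict([1, 4]))  # 0
```

The same update-as-data-arrives pattern is what lets real-time translation systems keep adapting mid-conversation instead of waiting for a retraining batch.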

Natural language generation

This is the process of turning data points into natural, conversational sentences that people can instantly comprehend.
Some large news organizations are already using this tech to write short reports for beats that generate more daily data than human writers can cover: say, Little League baseball results, or performance summaries for every publicly traded stock in the Dow Jones. In the future (and perhaps a bit ominously), marketers will use NLG on customers' own data points to create stories targeted directly at them.
Anyway, back to BERT. It's still going to be a while before a machine can understand the nuances of James Joyce or debate politics with us. But Google's researchers (who have open-sourced their paper's code and data on GitHub) and their NLP peers are proving that neural networks can learn the quirks of language better, and faster, than we ever expected.
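At its simplest, NLG of this kind maps data points onto sentence templates. The sketch below (team names, the `game_recap` helper, and the word choices are all made up for illustration; production systems are far more sophisticated) turns a raw box score into a readable game recap.

```python
def game_recap(home, away, home_runs, away_runs):
    """Turn raw box-score numbers into a sentence: the core
    move of simple template-based natural language generation."""
    if home_runs == away_runs:
        return f"{home} and {away} played to a {home_runs}-{away_runs} tie."
    winner, loser = (home, away) if home_runs > away_runs else (away, home)
    hi, lo = max(home_runs, away_runs), min(home_runs, away_runs)
    # Vary the verb with the margin so the output reads naturally.
    verb = "edged" if hi - lo <= 2 else "beat"
    return f"{winner} {verb} {loser}, {hi}-{lo}."

print(game_recap("Tigers", "Cubs", 7, 3))  # Tigers beat Cubs, 7-3.
print(game_recap("Tigers", "Cubs", 4, 3))  # Tigers edged Cubs, 4-3.
```

Run once per game in the league's nightly data feed, a template like this produces hundreds of recaps no newsroom would staff a writer for.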
When you’re reimagining our world, you need top-of-the-line power to do it. Optimal for simulation, 8K video editing and complex machine learning, the HP Z8 doesn’t disappoint.