Artificial Intelligence is More Than a Buzzword

If you’ve been on the internet lately, you’ve probably seen the terms “artificial intelligence” and “machine learning” enough times to make your head spin. Heck, you’re reading another article on them now. So why continue when the topic is so saturated? In a word: hype. AI has potential applications in every industry, so people are understandably fired up over what we can do with it today and how we might be able to use it tomorrow. Here at Uptown Treehouse, for example, we use Google’s TensorFlow for natural language processing, image recognition, and ROI predictions. And that’s just the tip of the iceberg when it comes to what AI has to offer marketers. It can be used to gauge how your customers feel about your brand in real time, to optimize digital advertising campaigns, to create highly personalized customer experiences, or even to generate content like stock insights, hotel descriptions, and recaps of sporting events.
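To make that a little more concrete, here is a minimal sketch (in Python, using TensorFlow’s Keras API) of the kind of text classifier that sits behind “knowing how your customers feel.” The sample sentences and labels are placeholders for illustration only, not our production pipeline; a real model would be trained on thousands of labeled brand mentions.

# A minimal TensorFlow/Keras sketch of sentiment classification on brand mentions.
# The texts and labels below are made-up placeholders, not real client data.
import tensorflow as tf

texts = tf.constant([
    "love the new campaign",
    "worst customer service ever",
    "great product, will buy again",
    "totally disappointed with my order",
])
labels = tf.constant([[1.0], [0.0], [1.0], [0.0]])  # 1 = positive, 0 = negative

# Convert raw strings into fixed-length integer sequences.
vectorizer = tf.keras.layers.TextVectorization(max_tokens=10000, output_sequence_length=20)
vectorizer.adapt(texts)

model = tf.keras.Sequential([
    vectorizer,
    tf.keras.layers.Embedding(input_dim=10000, output_dim=16),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability the mention is positive
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

model.fit(texts, labels, epochs=5, verbose=0)
print(model.predict(tf.constant(["really impressed with this brand"])))

With enough labeled examples, this same shape of model can score new brand mentions as they come in, which is what “real-time sentiment” boils down to in practice.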

With so much AI fever going around, it’s hard to tell what’s real and what’s still science fiction. Just for fun (as well as our own edification), this post explores the history of the hype and distills the practical technologies currently available.

The history of artificial intelligence can be segmented into five distinct periods.

Genesis and Hype Cycle I (1950 – 1974)

Can a computer imitate human intelligence? A seemingly innocent question, posed by Alan Turing in 1950, has sparked 68 years of intense interest and innovation. In 1951 (less than a year after Turing first posed it), a graduate student named Marvin Minsky built the first neural network out of vacuum tubes. His neural network successfully mimicked the behavior of a rat finding its way through a maze.

Inspired by this work, Arthur Samuel created the first self-learning program, which played checkers. By 1956, there was enough enthusiasm that John McCarthy organized the Dartmouth conference to advance the field, which is where the term “artificial intelligence” was coined. From 1956 to 1974, the field exploded. Leading experts were convinced that human-level AI would exist by the year 2000. Turing himself believed that by the time the 21st century rolled around, a human interrogator would have no more than a 70% chance of telling an AI apart from a person. This optimism, together with the rise of supercomputers like ILLIAC, directly influenced 2001: A Space Odyssey and its on-board AI, HAL 9000. HAL 9000 (and its legendary deadpan sassiness) undoubtedly symbolizes the pinnacle of the first hype cycle.

During this time, several subfields and algorithms of AI emerged, including natural language processing, speech-to-text, voice recognition, image processing, and computer vision.

AI Winter (1974 – 1983)

So why didn’t we have an AI like HAL 9000 in the year 2000? Basically, the initial pioneers were a little overzealous. Two major factors limited the advancement of AI early on: a limited understanding of how the human brain performs complex cognitive functions, and a shortage of computational processing power.

It became clear in the 1970s that although hardware could be built to imitate parts of the human brain, it was not well understood which parts should be mimicked to solve a given problem. On top of that knowledge gap, building a computer with the same computing power as a human brain would have cost more than the entire U.S. GDP in 1974. That might be a tough one to slide by the budget committee.

Many applications developed in the first hype cycle, although promising, didn’t live up to expectations, and the U.S. government pulled funding. The attitude toward AI stayed dismal from 1974 into the early 1980s. Some significant discoveries were still made, but the lack of enthusiasm buried them, and a few had to be reinvented later.

Resurgence (1984 – 2010)

In the following years, funding was restored and research began picking up again. It became apparent that mathematical principles could be applied to improve AI algorithms, and a wave of new ones emerged spanning both supervised and unsupervised learning methods. Deep neural networks began to appear, with impressive results. As icing on the cake, computing power was effectively doubling every two years in line with Moore’s Law, which helped remove many of the computational limits the earlier pioneers had faced.

Hype Cycle II (2011 – 2017)

The resurgence restarted the hype train. People are once again predicting that human-level AI will be among us shortly. Elon Musk is so confident that he founded Neuralink, a company aimed at safely integrating machines and humans. Why call this another hype cycle if so many breakthroughs happened in the resurgence phase? Simple: the growth is not sustainable. Moore’s Law is not a law of nature; it has held only because major chip manufacturers (e.g., Intel) have chosen to keep pace with it, and they are now hitting a wall. Year-over-year growth in investment in AI research is also slowing.

Present Technologies and Research

Now that the history lesson is over, we’re better equipped to understand the current technologies, as well as their applications and prospects.

IBM Watson

In 2011, Watson beat human champions at Jeopardy. It has since been reported to identify lung cancer in patients with a 90% success rate, compared to 50% for human physicians. IBM Watson is also providing businesses with a way to analyze their own data.

Google DeepMind & TensorFlow

Most recently, in 2016, DeepMind’s AlphaGo program beat the world champion at the board game Go. DeepMind’s systems are powered by Google’s TensorFlow, a machine-learning library that can be used for many tasks, from image recognition to natural language processing.
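To give a flavor of what working with TensorFlow actually looks like, here is a minimal image-recognition sketch: a small convolutional network trained on the public MNIST handwritten-digit dataset, which Keras downloads for you. It’s a toy illustration of the library, not a claim about how DeepMind builds its systems.

# A minimal TensorFlow/Keras sketch of image recognition: a small convolutional
# network learning to classify MNIST handwritten digits.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0  # scale pixels to [0, 1] and add a channel axis
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),  # learn local strokes and edges
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),   # one probability per digit class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=1, batch_size=128)
print("test accuracy:", model.evaluate(x_test, y_test, verbose=0)[1])

A single pass over the data is typically enough to push this toy model well above 90% accuracy on digits; real-world image tasks use deeper networks and far more data, but the workflow is the same.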

Current Machine Learning Challenges

So, what questions is machine learning attempting to answer right now? There are a number of competitions currently pushing the limits of AI. The best method to forecast future traffic to Wikipedia pages will win $25K in prize money, while the 2018 Data Science Bowl is offering $100K to the group that perfects how we can use AI to find the nuclei in divergent images to advance medical discovery. Curious whether AI can detect fraudulent click traffic for mobile app ads, or automatically identify the boundaries of a car in an image? Anyone who figures out either of those puzzles has $25K waiting for them. In case five or six figures doesn’t float the scientific community’s boat, some researchers might want to look into improving the accuracy of the Department of Homeland Security’s threat-recognition algorithms. Do that job right, and there’s $1.5M sitting around with your name on it.

In short… Lots of applications. Lots of fields. Lots of possibilities.

In a Nutshell

Artificial intelligence’s potential is anything but artificial, a fact that becomes more apparent with each application. We may not have a human-level AI in the next couple of decades, but one thing is for sure: AI is here to stay. Marketers would be wise to keep up to speed on current capabilities and future innovations, because the next breakthrough could deliver the tool that your data team has been itching to add to their toolbox. Or it could be the foundation of Skynet, which would also be good to know.

