Artificial Intelligence and Engineering


Artificial Intelligence (AI) is escaping the realm of hackneyed sci-fi tropes and staking a renewed claim at the forefront of technological progress. Ever since the field of AI was founded in 1956, it’s waxed and waned in the public eye, perceived at times as the inevitable future of computing, and at others as the broken promise of scientists who dreamed too big. Today, expectations for AI are sky high.

AI refers to systems that act intelligently, whether in a specific domain (narrow AI), or in general (strong AI). Designing such systems is no easy task. The human brain, consisting of about 86 billion neurons, has been postulated to be the most complex object in the known universe; naturally, recreating even a portion of that complexity has proven to be challenging.

Does this mean the current interest in AI is just the latest in a series of hype cycles? Perhaps not. Consider these recent AI accomplishments: in 2011, IBM’s question-answering Watson program bested champion Ken Jennings at the quiz show Jeopardy!; in 2016, Google DeepMind’s AlphaGo program achieved a long-sought AI milestone when it beat Go champion Lee Sedol.

Though these examples may seem frivolous, they illustrate an important point: artificial intelligence, however rudimentary, is being propelled by increasingly powerful computing hardware. The result is a growing number of narrow AIs that exceed human abilities in their respective domains.

For more accessible examples of AI progress, look no further than your smartphone. Our personal devices are getting much better at tasks such as natural language processing (like Siri), image recognition (like face detection) and data analytics (like how Google seems to know more about you than you do).

As engineers, it’s important to stay informed of technological innovations, especially given their potential impact on our profession. AI is one such technology that’s becoming increasingly relevant across many disciplines, including engineering. So, here’s a look at some popular AI techniques and applications, and how AI might fundamentally change the entire engineering profession.


Machine Learning

One of the most fruitful avenues of AI research is machine learning. This refers to algorithms that allow computer programs to learn, from a set of training data, to do something they were not explicitly programmed to do. For example, one might expose an algorithm to images of both dogs and cats, with the hope that the program would learn to differentiate the two.

A neural net with a single hidden layer. The arrows represent output from one neuron taken as input for another. (Image courtesy of Colin M.L. Burnett.)
Some of the most effective methods of machine learning are based on the concept of artificial neural networks (ANNs), which have been studied on and off since the beginning of AI research. ANNs are modelled after the neurons in the human brain, and consist of a network of nodes (analogous to neurons) joined by weighted connections (analogous to synapses).

One of the earliest methods for training ANNs was the perceptron algorithm. It teaches a single-layer network to sort a given input into one of two classes, provided the classes are linearly separable (i.e., you can separate the data with a line, plane, hyperplane, etc.). The algorithm works by feeding in training data, comparing the network’s output with the expected output, and updating the connection weights based on the difference.
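To make the idea concrete, here’s a minimal sketch of the perceptron learning rule in Python with NumPy. The toy dataset, learning rate and epoch count are illustrative choices for this example, not part of any standard library:

import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    # X: (n_samples, n_features) array; y: labels of 0 or 1.
    weights = np.zeros(X.shape[1])
    bias = 0.0
    for _ in range(epochs):
        for features, label in zip(X, y):
            prediction = 1 if features @ weights + bias > 0 else 0
            error = label - prediction           # -1, 0 or +1
            weights += lr * error * features     # nudge the boundary toward the mistake
            bias += lr * error
    return weights, bias

# Toy, linearly separable data: class 1 whenever x + y > 1.5.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [2.0, 2.0], [0.2, 0.3]])
y = np.array([0, 0, 0, 1, 1, 0])
weights, bias = train_perceptron(X, y)
print(weights, bias)

Each misclassified example nudges the decision boundary toward the correct side; for linearly separable data like this, the rule is guaranteed to converge.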

Increasing the amount of training data modifies a perceptron algorithm’s understanding of the boundary between classes. (Image courtesy of Elizabeth Goodspeed.)
Despite its effectiveness at binary classification, the perceptron algorithm was too simple for most applications. However, you can increase the power of ANNs by adding more layers of nodes and using a more sophisticated technique (such as backpropagation) to update their weights. Layers between the input and output neurons are called hidden layers, and they’re used extensively in what’s called deep learning.
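As a rough illustration of these ideas, here’s a small Python/NumPy sketch of a network with one hidden layer trained by backpropagation on the XOR function, a problem a single perceptron cannot solve because the classes aren’t linearly separable. The layer sizes, learning rate and iteration count are arbitrary choices for this toy example:

import numpy as np

rng = np.random.default_rng(0)

# XOR truth table: not linearly separable, but learnable with one hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 nodes, one output node.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
lr = 0.5

for _ in range(10000):
    # Forward pass: propagate inputs through the network.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: propagate the error from the output back to the hidden layer.
    output_delta = (output - y) * output * (1 - output)
    hidden_delta = (output_delta @ W2.T) * hidden * (1 - hidden)

    # Gradient-descent weight updates.
    W2 -= lr * hidden.T @ output_delta
    b2 -= lr * output_delta.sum(axis=0)
    W1 -= lr * X.T @ hidden_delta
    b1 -= lr * hidden_delta.sum(axis=0)

print(np.round(output, 2))  # typically converges close to [[0], [1], [1], [0]]

The same forward-pass/backward-pass loop, scaled up to many hidden layers and vastly more data, is the core of modern deep learning.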


Artificial Intelligence Applications

Demonstration of a Tesla Model S driving without any human intervention. (Image courtesy of Tesla.)
Now let’s look at some of the engineering accomplishments of machine learning and AI techniques. There are several examples of specific domains in which AI has shown great promise, including:

 

  • Natural Language Processing (NLP): NLP is the field of improving human-machine communication, and it has progressed quite far in tasks such as automatic language translation, speech recognition, sentiment analysis, handwriting recognition and question answering. A popular neural net architecture in NLP is called Long Short-Term Memory (LSTM), and it’s used by major tech players like Apple and Google in their NLP applications (a rough sketch of an LSTM classifier follows this list).
  • Image Processing: In 1966, the idea of human-like computer vision was considered simple enough to be solved over the course of a summer. The problem turned out to be much less tractable than that, and only today are we starting to see significant progress in sub-domains like facial recognition, environment mapping, and human emotion detection.
  • Disease Treatment: Just as the world’s best Go players are no longer human, the world’s best doctors may soon be made of silicon. With the ability to quickly comb through the latest medical research as well as a patient’s entire medical history, machine learning techniques can be used for a variety of medical applications. For example, IBM’s Jeopardy! champion Watson is being trained to help oncologists diagnose and treat lung cancer.
  • Autonomous Vehicles: Self-driving cars have been a long time in the making. Today, we’re finally at a point where much of the necessary technology, including advanced machine learning algorithms, is ready. With companies like Tesla, Google, and Uber already testing self-driving vehicles on the road, many of the biggest remaining barriers are legislative rather than technological.
  • Data Structuring: Technology startup Gamalon Inc. has used a machine learning method called Bayesian Program Synthesis (BPS) to create highly efficient data structuring programs. The company’s algorithm takes in paragraphs of text from company documents and databases and structures them into clean rows of useable data. “We have one customer that, every year, spent nine months and four million dollars to structure and match their data,” said Gamalon CEO Ben Vigoda. “In contrast, Gamalon was able to perform the same task in minutes with twice the accuracy.”
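As a rough sketch of how an LSTM-based classifier is typically assembled (and not a description of Apple’s or Google’s production systems), here’s a toy sentiment model in Python using the PyTorch library; the model name, vocabulary size and layer sizes are arbitrary assumptions:

import torch
import torch.nn as nn

class ToySentimentLSTM(nn.Module):
    # Hypothetical toy model: maps a sequence of token IDs to two class scores.
    def __init__(self, vocab_size=1000, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classify = nn.Linear(hidden_dim, 2)   # e.g., negative vs. positive

    def forward(self, token_ids):
        embedded = self.embed(token_ids)            # (batch, seq_len, embed_dim)
        _, (final_hidden, _) = self.lstm(embedded)  # (1, batch, hidden_dim)
        return self.classify(final_hidden[-1])      # (batch, 2) class scores

model = ToySentimentLSTM()
fake_batch = torch.randint(0, 1000, (2, 5))  # two "sentences" of five token IDs each
print(model(fake_batch).shape)               # torch.Size([2, 2])

In practice such a model would be trained on labelled text with a cross-entropy loss, but even this skeleton shows the basic recipe: embed the tokens, run the sequence through the LSTM, and classify from its final hidden state.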

Another application of AI and machine learning is one that’s poised to disrupt almost every industry imaginable: big data analytics.

Let’s take a step back from AI for a second and talk about the Internet of Things, or IoT. In a nutshell, the IoT will turn everything into a smart thing. On a consumer level, think of smart clothing that can monitor your health. On an industrial level, think of factory lines that can communicate with each other while using an arsenal of sensors to continuously collect data on the entire manufacturing process. Such a mass collection of data is appropriately referred to as big data.

Big data is the key to the promise of AI data analytics. And it’s a lofty promise indeed, as the combination of factory automation, big data and AI is predicted to usher in what’s being called the fourth industrial revolution, aka Industry 4.0. With virtually unlimited access to factory data, the hope is that machine learning techniques will yield data insights that increase production efficiency.

Smart Factories in Industry 4.0 may be completely human-independent. (Image courtesy of PTC.)
You can take this premise of big data-enabled, AI-driven analytics and apply it to almost any domain. Healthcare, city planning, traffic management, weather prediction and building management are just some of the areas in the midst of an AI/data transformation.

That’s not to say that big data is without its critics. Machine learning expert Michael Jordan offered a rather scathing critique of the hype surrounding big data, asserting that its promises lack a solid scientific foundation. He suggests that any real data insights will be surrounded by a haze of white noise, with no way to tell the difference. Or, as Jordan puts it: “…it’s like having billions of monkeys typing. One of them will write Shakespeare.”

Even worse, Jordan adds, is the possibility of dangerous suggestions emerging as data insights. “I like to use the analogy of building bridges. If I have no principles, and I build thousands of bridges without any actual science, lots of them will fall down, and great disasters will occur.”

While there may be a range of opinions on the future of AI, there’s no denying its present successes. Even though these successes may be confined to narrow domains, they’re all but guaranteed to cause massive disruptions. The clearest example is self-driving vehicles, which are certain to replace human drivers in the not-too-distant future. While this will make roads much safer and more efficient, it will also eliminate millions of jobs (approximately 1.7 million Americans and 253 thousand Canadians drive a truck for a living).

With that near-term shakeup in mind, it’s natural to wonder: how might AI affect the engineering profession as a whole?


AI's Impact on Engineering

According to civil engineer Tim Chapman, director of the Arup Infrastructure Design Group, AI will bring some big changes to the engineering profession. One of these changes will be the automation of many low-level engineering tasks. As Chapman explains, this may not be as beneficial as it first sounds: “Artificial Intelligence will render many of the simpler professional tasks redundant – potentially replacing entirely many of the tasks by which our younger engineers and other professionals learn the details of our trade.”

Outcomes like this may leave you wondering whether engineers, like trivia experts, board-game players and doctors, might face competition from their AI counterparts. Will we reach a point where human engineers, even those with the most professional experience, become redundant?

There isn’t a clear answer to that question. A Stanford University study on the impact of AI between now and the year 2030 concluded that, while AI will replace some jobs, engineers are probably safe: “AI is poised to replace people in certain kinds of jobs, such as in the driving of taxis and trucks. However, in many realms, AI will likely replace tasks rather than jobs in the near term.”

Similarly, the Partnership on AI, a research consortium consisting of Apple, Amazon, Facebook, Google, IBM, and Microsoft, is optimistic about the outcomes of AI while recognizing its challenges: “While [AI] advances promise to inject great value into the economy, they can also be the source of disruptions as new kinds of work are created and other types of work become less needed due to automation.”

AI optimists believe that, as with previous technological innovations, the jobs we lose to technology now will be replaced with futuristic new jobs. If this is the case, engineers may see dramatic changes to their job descriptions as AI advances, but at least they’ll have jobs.

However, this may not be the case. Many well-respected figures in science and technology, including Elon Musk, Stephen Hawking, Sam Harris and Nick Bostrom, are concerned that AI may threaten not just our jobs, but our entire social structure, and possibly even our existence as a species. The quick doomsday argument goes like this: if we successfully create a strong AI, one at least as capable as a human at any imaginable task, it could quickly surpass us to the point where we would have no reasonable ability to control it.

Whether you’re eagerly awaiting the Singularity or gearing up for a hostile robot takeover, one thing is certain: AI will have a profound impact on the field of engineering.

How do you think the profession will change in the wake of AI advancements? Share your thoughts in the comments below.