These are the essential AI terms engineers need to know

Artificial intelligence (AI) seems to be everywhere these days, or at least the phrase is. Along with it, you’ve probably come across a number of other vague-but-impressive-sounding phrases related to AI: machine learning, deep learning, generative AI and a whole host of others. As with any new technology, it can be easy to become lost in the jargon, especially when it’s coming from outside your area of expertise. That’s why we created this article: to help working engineers come to grips with a technology they’re likely to encounter more and more in the coming years.


What is artificial intelligence?

This seemingly simple question belies a whole history of controversy and debate in philosophy, psychology and computer science, not to mention the added obfuscation that comes when a technical term is appropriated for marketing and fundraising. Setting science fiction and hyperbole aside, the most relevant definition of AI for engineers is that it’s any computer program which uses machine learning (ML) algorithms. This is in contrast with programs that are strictly rules-based, such as the chatbots found on many websites.


What is machine learning?

Machine learning algorithms identify patterns in data through a process of trial and error, gauging their success on goals provided by their programmers. These patterns can then be used to make predictions or guide decision-making. Data is the fuel for machine learning, and the more quality data an ML algorithm has, the better it will be at achieving its goals. ‘Quality data’ in this instance means data that has been collected, cleaned, preprocessed and standardised, in some cases with certain features or variables highlighted for their importance.
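To make the “trial and error” idea concrete, here’s a minimal sketch in plain Python and NumPy (the data and parameters are invented for illustration) that learns the slope of a noisy straight line by repeatedly guessing, measuring its error and nudging the guess in the better direction:

```python
import numpy as np

# Toy dataset: y is roughly 3 * x plus some measurement noise
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + rng.normal(0, 0.5, size=100)

w = 0.0             # initial guess for the slope
learning_rate = 0.01

for step in range(200):
    y_pred = w * x                       # make a prediction with the current guess
    error = y_pred - y                   # how wrong was it?
    gradient = 2 * np.mean(error * x)    # direction that reduces the squared error
    w -= learning_rate * gradient        # nudge the guess and try again

print(f"Learned slope: {w:.2f}")         # should end up close to 3.0
```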

There are several types of machine learning algorithm. In supervised learning, the algorithm is trained on labelled data, meaning each example is paired with a target output. In unsupervised learning, the algorithm is trained on unlabelled data and must identify the structure on its own. In reinforcement learning, the algorithm receives feedback in the form of rewards and penalties. Of the three, reinforcement learning is the one working engineers are most likely to come across, particularly in robotics and autonomous vehicle development.
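As a taste of reinforcement learning, here’s a toy sketch in plain Python (the corridor “environment”, reward scheme and parameters are all invented for illustration) that uses tabular Q-learning to teach an agent to walk to the reward at the end of a five-cell corridor:

```python
import random

n_states = 5                     # cells 0..4; reaching cell 4 ends the episode
actions = [-1, +1]               # move left or right
q = {(s, a): 0.0 for s in range(n_states) for a in actions}   # the Q-table

alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate

for episode in range(500):
    state = 0
    while state != n_states - 1:
        # Explore occasionally (and break ties randomly), otherwise exploit what's been learned
        if random.random() < epsilon or q[(state, -1)] == q[(state, +1)]:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: q[(state, a)])

        next_state = min(max(state + action, 0), n_states - 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0     # reward only at the goal

        # Q-learning update: nudge the estimate towards reward + discounted future value
        best_next = max(q[(next_state, a)] for a in actions)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# After training, every cell should prefer the +1 (move right) action
print({s: max(actions, key=lambda a: q[(s, a)]) for s in range(n_states - 1)})
```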


What is deep learning?

An artificial neural network (ANN) is a specific type of machine learning algorithm loosely modelled on the structure and function of biological brains. ANNs consist of interconnected nodes (neurons) that are organized into layers, with neurons taking inputs from the layer below them and sending outputs to the layer above. ANNs improve by adjusting the strength of their connections (weights) based on training data, a process known as backpropagation.
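For a sense of how this works under the hood, here’s a minimal two-layer network in NumPy (a sketch, not production code; the XOR toy task and the layer sizes are arbitrary choices). The forward pass sends inputs up through the layers, and the backward pass propagates the error back down to adjust the weights:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy task: learn XOR, which needs at least one hidden layer
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)   # input -> hidden weights and biases
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)   # hidden -> output weights and biases

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for step in range(10000):
    # Forward pass: each layer feeds the one above it
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass (backpropagation): push the error back through the layers
    d_output = (output - y) * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)

    # Adjust the connection strengths (weights) to reduce the error
    W2 -= 0.5 * hidden.T @ d_output
    b2 -= 0.5 * d_output.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hidden
    b1 -= 0.5 * d_hidden.sum(axis=0)

print(output.round(2))   # should approach [[0], [1], [1], [0]]
```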

Deep learning refers to ANNs with multiple “hidden” layers, the layers sitting between the network’s inputs and outputs, which effectively function as a black box from the user’s perspective. Stacking these layers makes deep learning networks far more expressive and better able to identify patterns in large datasets. As a result, deep learning has seen considerable success in computer vision, speech recognition and natural language processing applications.

Recurrent neural networks (RNNs) are a type of ANN designed to handle sequential data by maintaining an internal memory state. RNNs are contrasted with feedforward networks, which process inputs independently of each other. A deep RNN is a type of deep learning that stacks multiple layers of recurrent units, enabling models to learn increasingly abstract and complex representations of sequential data. These can be especially useful in predictive maintenance applications, where the time series data produced by manufacturing equipment can be used to train RNNs for anomaly detection and remaining useful life (RUL) estimations.
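The “internal memory” amounts to a hidden state that’s fed back in at every time step. Here’s a bare-bones recurrent cell in NumPy (a sketch with made-up dimensions and random weights, not a trained model):

```python
import numpy as np

rng = np.random.default_rng(2)

input_size, hidden_size = 3, 8          # e.g. 3 sensor readings per time step
Wx = rng.normal(0, 0.1, (input_size, hidden_size))   # input -> hidden weights
Wh = rng.normal(0, 0.1, (hidden_size, hidden_size))  # hidden -> hidden (the "memory")
b = np.zeros(hidden_size)

def rnn_forward(sequence):
    """Run one sequence through the cell, returning the hidden state at each step."""
    h = np.zeros(hidden_size)           # the memory starts empty
    states = []
    for x_t in sequence:                # process the time steps in order
        h = np.tanh(x_t @ Wx + h @ Wh + b)   # new state depends on the input AND the old state
        states.append(h)
    return np.array(states)

# A fake time series: 20 time steps of 3 sensor readings each
sequence = rng.normal(0, 1, (20, input_size))
states = rnn_forward(sequence)
print(states.shape)   # (20, 8): one hidden state per time step
```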


What is generative AI?

One of the AI terms engineers are most likely to come across in their working lives, whether they write code or not, is generative artificial intelligence (GenAI). Another subset of machine learning, GenAI is designed to create new data samples that are similar to its training data. This is accomplished through various deep learning techniques. In Generative Adversarial Networks (GANs), a generator algorithm tries to imitate the training data while a discriminator algorithm tries to distinguish the generator’s outputs from the training data. In Variational Autoencoders (VAEs), an encoder learns to take complex, high-dimensional data and compress it into a simpler, lower-dimensional latent space, somewhat akin to compressing a high-resolution image, from which a decoder can generate new samples.
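As a rough illustration of the adversarial setup, here’s a heavily simplified GAN skeleton (a sketch assuming PyTorch is available; the toy task of imitating a 1-D Gaussian and all of the layer sizes are invented for the example):

```python
import torch
import torch.nn as nn

# Generator turns random noise into fake samples; discriminator is a real-vs-fake classifier
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0          # "training data": a Gaussian around 3
    fake = G(torch.randn(64, 8))                   # the generator tries to imitate it

    # 1) Train the discriminator to tell real from fake
    d_loss = (loss_fn(D(real), torch.ones(64, 1))
              + loss_fn(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())       # should drift towards 3.0
```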

OpenAI’s ChatGPT, based on the company’s Generative Pre-trained Transformer (GPT) architecture, is the best-known example of GenAI, but a growing number of businesses are incorporating GenAI into their products and workflows. Oracle recently added a new service just for generative AI, and Autodesk seems just as bullish on the technology. On the practical side, GenAI is being used to design electronic circuits, and it’s changing the way engineers at Toyota design new vehicles.

Suffice it to say there are plenty of ways engineers can use GenAI, so if you want to learn more about artificial intelligence and how it applies to your job, this is the place to start.


More AI terms to sound smart at meetings

While these aren’t as likely to come up as the terminology discussed above, it never hurts to have a few deep cuts to show off your knowledge.

Markov chains are models that describe a system’s transition from one state to the next based purely on probabilities, with each transition depending only on the current state. While not typically considered machine learning per se, Markov chains can be incorporated into machine learning models or can approximate the behaviour of ANNs on simple sequence-prediction tasks at a fraction of the computing power. For this reason, the predictive text function on smartphones has often been based on a Markov chain.
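A word-level Markov chain for next-word prediction fits in a few lines of plain Python; this sketch (with a toy corpus invented for the example) simply counts which word follows which and samples from those counts:

```python
import random
from collections import defaultdict, Counter

corpus = "the pump failed the pump was repaired the motor failed the motor was replaced".split()

# Count the transitions: for each word, how often does each other word come next?
transitions = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word][next_word] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word` in the corpus."""
    counts = transitions[word]
    return random.choices(list(counts), weights=counts.values())[0]

print(predict_next("the"))     # "pump" or "motor", weighted by frequency
print(predict_next("pump"))    # "failed" or "was"
```

Because the prediction depends only on the current word, the model is cheap to build and query, which is part of why it suits on-device predictive text.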

Random forest algorithms are a type of machine learning model used for classification and regression tasks, made up of many individual decision trees. Each tree recursively divides the input space into feature-based regions to make predictions, and each sees only a random subsample of the training data to ensure diversity. In a classification task, the trees vote individually and the final prediction is determined by the majority; in a regression task, the final prediction is typically the average of the individual trees’ predictions.
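In practice, most engineers would reach for an existing implementation rather than write their own. Here’s a minimal sketch using scikit-learn (assuming it’s installed; the synthetic dataset stands in for real measurements):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# A synthetic classification dataset standing in for real measurements
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 100 decision trees, each trained on its own bootstrap sample of the data
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)

print(forest.score(X_test, y_test))   # accuracy of the majority-vote predictions
print(forest.predict(X_test[:5]))     # class predictions for five held-out samples
```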

A needle-in-the-haystack problem refers to the challenge of identifying rare or anomalous occurrences within a large dataset, such as searching for prime numbers. Addressing these problems generally involves some combination of feature engineering, data sampling and ensemble learning techniques. Such problems are particularly likely to occur when dealing with sparse or imbalanced data.
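One of the simplest data-sampling tactics is to oversample the rare class before training. A rough sketch in NumPy (the arrays and class balance are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Imbalanced toy labels: 990 "normal" readings (0) and only 10 anomalies (1)
y = np.array([0] * 990 + [1] * 10)
X = rng.normal(0, 1, (1000, 5))

# Randomly resample the rare class (with replacement) until the classes are balanced
rare = np.where(y == 1)[0]
extra = rng.choice(rare, size=(y == 0).sum() - len(rare), replace=True)

X_balanced = np.vstack([X, X[extra]])
y_balanced = np.concatenate([y, y[extra]])
print(np.bincount(y_balanced))   # equal counts for both classes
```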

Evolutionary algorithms mimic the process of natural selection through mutation, recombination and survival of the fittest. With each generation, the solutions closest to the goal (the fittest) survive and reproduce, producing the next generation. This process continues iteratively until some termination criterion (such as a set number of generations) is met.
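Here’s a bare-bones evolutionary algorithm in plain Python (the fitness function and every parameter are arbitrary choices for illustration) that evolves a population of numbers towards a target value:

```python
import random

TARGET = 42.0

def fitness(x):
    return -abs(x - TARGET)          # closer to the target = fitter

population = [random.uniform(0, 100) for _ in range(20)]

for generation in range(200):        # termination criterion: a fixed number of generations
    # Survival of the fittest: keep only the best half of the population
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]

    children = []
    for _ in range(10):
        # Recombination: blend two randomly chosen survivors...
        child = (random.choice(survivors) + random.choice(survivors)) / 2
        # ...then mutation: add a small random change
        child += random.gauss(0, 1.0)
        children.append(child)

    population = survivors + children

population.sort(key=fitness, reverse=True)
print(round(population[0], 2))       # best individual found; should be close to 42.0
```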

Transformers are a recent type of deep learning architecture based on a self-attention mechanism, which allows models to weigh the importance of different parts of their input when producing their output. Transformers work best on sequence-to-sequence tasks, where the inputs and outputs are both series of tokens (chunks of text, such as words or word fragments, that the model treats as single units). Common applications include machine translation, text summarization and language modeling.
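At the heart of a transformer is scaled dot-product self-attention, which fits in a few lines of NumPy. This sketch (with invented dimensions and random weights standing in for learned ones) shows how each token’s output becomes a weighted mix of every token in the sequence:

```python
import numpy as np

rng = np.random.default_rng(3)

seq_len, d_model = 6, 16                  # 6 tokens, 16-dimensional embeddings
x = rng.normal(0, 1, (seq_len, d_model))  # token embeddings (random stand-ins)

# Projection matrices (random here, learned in a real model) for queries, keys and values
Wq, Wk, Wv = (rng.normal(0, 0.1, (d_model, d_model)) for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv

# Each token "asks" (query) how relevant every other token is (key)...
scores = Q @ K.T / np.sqrt(d_model)

# ...softmax turns the scores into attention weights that sum to 1 per token...
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)

# ...and the output is a weighted mix of the value vectors
output = weights @ V
print(weights.shape, output.shape)        # (6, 6) attention map, (6, 16) outputs
```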


Essential AI terms for engineers

The terminology covered in this article should encompass the majority of interactions engineers are likely to have with artificial intelligence in the near future. Given the pace of advancement, you’ll no doubt need to learn many more AI concepts over the course of your career. Check back on this article for future updates.