AI from Enterprise In Space to Take Over the World…of Education

The world is quickly starting to resemble science fiction, with new technologies continuously being developed to take us into that sci-fi future described by minds like Gene Roddenberry, Isaac Asimov and Arthur C. Clarke. Self-driving cars are hitting the road, satellites are looking for gold (or platinum) in the hills of nearby asteroids, and an augmented reality game about fictional creatures is the only way to get kids (and adults) to go to the park.

Whether the sci-fi vision we ultimately realize is a utopia or a dystopia may be determined in part by whether this technology is used to improve the quality of life for humanity or only to serve a handful of people.

When it comes to artificial intelligence (AI), Value Spring Technology (VST) wants to make the world a better place, which is why the firm is developing Ali, an AI tutor that, when fully developed, will help students of all backgrounds and ages learn any topic imaginable using natural language. Unlike almost every other AI yet made public, VST's enterpriseMind platform, on which Ali is based, is capable of understanding meaning in language and rewriting its own programming, perhaps making it the only “true” AI out there.

To demonstrate the potential of Ali and enterpriseMind, Ali's first big project will be aiding students working with the Enterprise In Space (EIS) program, which aims to send a 3D-printed spacecraft into Earth orbit carrying over 100 experiments from student teams worldwide.

In an interview with ENGINEERING.com, the VST team described just how Ali works, how she differs from existing AI software and how she’ll be used to educate students from kindergarten all the way through graduate school and beyond.

How enterpriseMind Works

In 1984, William Doyle, founder of VST and the inventor of the enterpriseMind platform, patented his first AI software, modeled on how the human mind operates. The program, which relied on a “meaning engine” capable of deciphering the meaning of words from unstructured text, could be used to generate software for the fields of data engineering and insurance underwriting. Data related to how an insurance firm, like AIG, might underwrite policies could be fed into the AI, which would then write a program utilizing that data for the firm's operations.

Thirty years later, this AI would lay the foundation for Doyle's enterpriseMind platform and VST, which is made up of seasoned experts, including engineers from IBM like Doyle himself. Since those early days, the platform has evolved tremendously, ultimately forming the cognitive computing technology that hooks up to IBM Watson. Now that Watson has developed the capability to recognize human language, VST is implementing a natural language interface for AI running on enterpriseMind.

Understanding how Doyle's AI works is more akin to a psychology lesson than it is to a computer science lesson. This is because enterpriseMind relies on actual principles of human cognition.

As Doyle explained, “In psychology, there's a model of memory that includes ‘procedural memory’ and ‘declarative memory.’ Procedural memory holds information about how it is that you do your job. If you’re a data architect, this would be how you design a database. So, we have procedural memory and knowledge in our AI. The other kind of memory is declarative memory, which has two pieces: episodic memory (i.e., the events of your life—you got up this morning, you had coffee, we’re having our conference call, etc.) and semantic memory, consisting of not only word concepts, but also sentences and stories.”
An outline of how VST’s AI operates using components of meaning to understand concepts within the larger context of episodes and stories. (Image courtesy of VST.)
He added, “In our view, cognition rests upon words, sentences and stories, and how all those things interact is what constitutes procedural knowledge. Our AI right now does procedural things like engineer databases and write software codes. But it also understands and can memorize, learn and classify customer data.”
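To make that psychology concrete in software terms, the memory model Doyle describes could be laid out roughly as follows. The classes and field names here are a hypothetical sketch for illustration, not VST's actual implementation.

```python
# A hypothetical sketch of the memory model Doyle describes, laid out as
# simple Python data structures. Names are illustrative only and are not
# drawn from VST's enterpriseMind code.
from dataclasses import dataclass, field

@dataclass
class ProceduralMemory:
    """How-to knowledge: named tasks mapped to ordered steps."""
    procedures: dict = field(default_factory=dict)    # e.g. "design_database" -> [steps]

@dataclass
class SemanticMemory:
    """Word concepts, sentences and stories, linked by meaning."""
    concepts: dict = field(default_factory=dict)       # word -> possible meanings
    stories: list = field(default_factory=list)        # larger narrative units

@dataclass
class DeclarativeMemory:
    episodic: list = field(default_factory=list)       # events: "had coffee", "joined the call"
    semantic: SemanticMemory = field(default_factory=SemanticMemory)

@dataclass
class Mind:
    procedural: ProceduralMemory = field(default_factory=ProceduralMemory)
    declarative: DeclarativeMemory = field(default_factory=DeclarativeMemory)

ali = Mind()
ali.declarative.episodic.append("2016-08-01: tutoring session with a fifth grader")
ali.procedural.procedures["answer_question"] = ["parse", "look up", "reply"]
```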

But how does this translate into an AI algorithm? After all, though enterpriseMind may be based on human cognition, it must be turned into computer science at some point.

To program an AI to think like a human around a certain task, such as designing complex databases for the data engineering industry, VST writes software that follows the steps that a data engineer would follow in creating a database. This forms the basis of the AI's procedural memory.

VST’s data engineering software interface, pictured here to give readers an idea of the method by which data engineers interact with the AI. Note that it looks, more or less, like a complex computer program. (Image courtesy of VST.)

“A data engineer can pretty much report to you what it is that they know,” Doyle said. “You can take them through knowledge engineering sessions: How do you design a database? How do you write code for data engineering? How do you write SQL code? How do you integrate two databases? We get that information by interviewing and then building knowledge models around what data engineers do.”
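A knowledge-engineering session like the one Doyle describes might be captured as a procedural knowledge model along these lines. The tasks and steps below are invented for illustration rather than taken from VST's software.

```python
# Hypothetical procedural knowledge captured from interviews with a data
# engineer: each task maps to an ordered list of steps the AI can execute.
database_procedures = {
    "design_database": [
        "gather business requirements",
        "identify entities and relationships",
        "normalize the schema",
        "define keys and constraints",
        "generate the SQL DDL",
    ],
    "integrate_databases": [
        "profile both schemas",
        "match columns that mean the same thing",
        "resolve naming and type conflicts",
        "write the merge SQL",
    ],
}

def execute(task: str) -> None:
    """Walk through the captured steps for a given task."""
    for step in database_procedures[task]:
        print(f"[{task}] {step}")

execute("design_database")
```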

129 Meanings for “Run”

The next, and possibly more important, task is enabling the AI to make actual sense of the new data that it might encounter. VST's cognitive computing software understands the relationships found between and within words, sentences and stories. This is what separates VST's technology from almost every other AI that has been built.

As Doyle explained, “Language, as human beings use it, is inherently ambiguous. The word ‘run,’ which seems like a pretty simple word, has 129 different meanings. The word ‘party’ has nine different meanings. On average, every English word has at least seven meanings. That ambiguity, however, is concealed from us as users of natural language because we’re disambiguating. ‘Disambiguation’ is the process of eliminating uncertainty from a sentence, phrase, or linguistic unit. So, as we use language, we’re removing the uncertainty from the meaning of a sentence, phrase or linguistic unit. We have built enterpriseMind to do just exactly that. You can't do language unless you can do disambiguation very powerfully.”
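A toy example shows what disambiguation means in code: pick the sense of an ambiguous word whose definition best overlaps the surrounding sentence. This is a simplistic stand-in included only to illustrate the idea, not a description of how enterpriseMind's meaning engine actually works.

```python
# Toy word-sense disambiguation: choose the sense of "run" whose gloss shares
# the most words with the sentence it appears in. Glosses are invented.
SENSES = {
    "run": {
        "run_jog": "jog sprint exercise legs race",
        "run_operate": "execute operate program software machine computer",
        "run_manage": "manage direct business company campaign",
    }
}

def disambiguate(word: str, sentence: str) -> str:
    context = set(sentence.lower().split())
    best_sense, best_overlap = None, -1
    for sense, gloss in SENSES[word].items():
        overlap = len(context & set(gloss.split()))
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

print(disambiguate("run", "please run the program on this machine"))
# -> run_operate
```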

Whereas most AI relies on keywords found in text and then runs a script based on those keywords, enterpriseMind can infer meaning. This is why, when you ask Siri a question, she simply pulls search results related to a specific word in your question, using individual programming scripts to handle each possible meaning of a word. With questions like, "What's the weather like?", this isn't such a bad strategy, but for more complicated questions, such as, "How far is the Earth from Mars?", the answer may not be nuanced enough.

As VST explained in a recent white paper comparing enterpriseMind to Google search and Siri, the distance between Earth and Mars is not fixed, but varies as the planets revolve around the Sun. The paper explains, “Simple Search answers the question about Mars' distance without understanding the meaning of the concept-word Mars, treating it just like the distance to the nearest Starbucks. But Mars is not at all like Starbucks.”

Allan Elkowitz, vice president of software development, elaborated on how VST's technology differs from other AI on the market. “What differentiates our software from just reading scripts is that enterpriseMind understands the meaning of what has been said or what has been written,” he said. “If we're handed a structured database that has a bunch of columns, we can run them through our machine intelligence and determine what the meaning is of what's in each column. If you want to merge two insurance companies and they have two different databases and have different names for the same thing, we can interpret the meaning of what's in the database and then merge those two databases.”
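As a rough sketch of that database-merging idea, the snippet below pairs columns from two schemas that refer to the same concept using a small synonym table. The column names and synonym map are hypothetical, and a real meaning engine would infer these equivalences rather than look them up.

```python
# Hypothetical schema matching for merging two insurance databases: columns
# with different names are paired when they map to the same concept.
SYNONYMS = {
    "policyholder": {"policyholder", "insured", "customer_name"},
    "premium": {"premium", "annual_premium", "prem_amt"},
    "policy_id": {"policy_id", "policy_number", "pol_no"},
}

def concept_for(column):
    for concept, aliases in SYNONYMS.items():
        if column.lower() in aliases:
            return concept
    return None

def match_schemas(schema_a, schema_b):
    """Pair columns from two schemas that resolve to the same concept."""
    by_concept = {concept_for(col): col for col in schema_b}
    return {col: by_concept[concept_for(col)]
            for col in schema_a if concept_for(col) in by_concept}

print(match_schemas(["insured", "prem_amt", "pol_no"],
                    ["customer_name", "annual_premium", "policy_number"]))
# -> {'insured': 'customer_name', 'prem_amt': 'annual_premium', 'pol_no': 'policy_number'}
```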

Doyle pointed to a New York Times article about the 1993 World Trade Center bombing as an example. After the text of the article was fed into his software, the AI was able to break the story down into relational concepts, such as the “suspect” of the bombing and the “victims” of the attack, by detecting the meaning of each word as it would relate to the larger story of the article. Then, when asked who the suspect was, the AI was capable of picking the answer out of the text.
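To illustrate what extracting relational concepts from a story can look like, here is a deliberately crude pattern-based sketch. The sentence and patterns are invented, and enterpriseMind reportedly infers such roles from meaning rather than from surface patterns like these.

```python
import re

# Crude illustration of pulling relational roles ("suspect", "victims") out
# of a news-style sentence with regular expressions. Invented example text.
TEXT = ("Investigators identified John Doe as the suspect in the bombing, "
        "which injured more than a thousand victims.")

def extract_roles(text: str) -> dict:
    roles = {}
    suspect = re.search(r"identified ([\w ]+?) as the suspect", text)
    if suspect:
        roles["suspect"] = suspect.group(1)
    victims = re.search(r"injured ([\w ]+?) victims", text)
    if victims:
        roles["victims"] = victims.group(1) + " victims"
    return roles

print(extract_roles(TEXT))
# -> {'suspect': 'John Doe', 'victims': 'more than a thousand victims'}
```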

Loading an AI with Knowledge

Before the AI can begin to approach a complex topic like data engineering, insurance or education, it is necessary for the program to have extensive knowledge of that topic. For this reason, VST loads the AI with a “knowledge library” about a given topic. In the case of insurance, for instance, the AI knows what “insurance,” a “policy” and a “first notice of loss” are. enterpriseMind is, as far as the VST team knows, the only technology that does this.

Finally, the AI is able to write its own software. In other words, it is capable of machine learning. As it encounters new information, the platform can rewrite its own code to incorporate the data, adding new rules and exceptions.
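A minimal sketch of these two ideas together, a knowledge library seeded with domain concepts and a system that adds new concepts and rules to itself at runtime, might look like the following. Every name, definition and rule here is an invented placeholder.

```python
# Hypothetical knowledge library seeded with insurance concepts, plus a crude
# stand-in for "writing its own software": new concepts and rules are added
# to the running system as new information is encountered.
knowledge_library = {
    "insurance": "a contract transferring risk from a policyholder to an insurer",
    "policy": "the written contract of insurance",
    "first notice of loss": "the initial report made to an insurer after a loss",
}

rules = [lambda claim: "review" if claim["amount"] > 100_000 else "auto-approve"]

def learn_concept(term: str, definition: str) -> None:
    """Incorporate a newly encountered concept into the knowledge library."""
    knowledge_library[term] = definition

def learn_rule(rule) -> None:
    """Add a new rule or exception the system has derived for itself."""
    rules.append(rule)

# New information encountered at runtime becomes new knowledge and new rules.
learn_concept("subrogation", "an insurer pursuing a third party that caused a loss")
learn_rule(lambda claim: "flag-for-fraud-review" if claim.get("duplicate") else None)
```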

With a knowledge library acting as the starting point for the AI's understanding of a topic, a meaning engine to decipher and understand that information, and the ability to learn and write its own programming, the AI can automate very advanced human tasks, like managing massive amounts of data. This frees up human engineers and insurance experts to perform other essential duties.

So far, this has allowed VST to develop AI for performing underwriting and risk analysis, creating and managing data models, and monitoring and managing infrastructure data. In this environment, users interact with enterpriseMind through typical drop-down menus and selection screens. For the AI to become a tutor, however, VST is introducing a natural language interface to the program so that students and teachers can speak to it as they would to another human.

Tutoring and the Dynamics of Language

EIS is a NewSpace education program dedicated to providing free education to all through the creation of the massive online EIS Academy. In addition to offering resources, classes and access to experts in a variety of cutting-edge fields, EIS is working with VST to develop Ali, a personal tutor that will help students and teachers worldwide.

Doyle pointed out that for an AI like Ali to interact with a human using natural language, it is essential that she write her own programming. “There is no magic in human cognition, and there is no magic in Ali, just a different approach to engineering. Autonomous human simulations like Ali change their own software as they engage in human dialog, learn knowledge and have episodic experiences… In principle, there is no way to anticipate what will happen before the event in human dialog and human learning… We solve this by building Ali so that she writes her own scripts, i.e., she writes her own software and makes up her own linguistic utterances,” Doyle said.

Humans engage in conversation in a dynamic manner, spontaneously generating sentences and making judgments, goals and choices. Unlike most software, humans don’t have scripted conversations. Doyle added, “That’s what’s so special and unique about what we’ve done.”

Ali will handle natural language using the Text to Speech, Speech to Text and Watson Foundation SPSS machine learning APIs from IBM Watson Content Analytics, which VST obtained about a year ago. VST COO and Co-founder Taffy Holliday spoke to the power of these APIs to enable Ali as a cloud-based tutor: “We are using some of the APIs from the IBM Watson platform, leveraging what the IBM Watson team has done, building on top of some of that foundation and, at the same time, going beyond it. We’re also working with IBM Solution Architects to deploy the solution to IBM cloud datacenters around the world, so that, regardless of where a student is, they will have access to Ali.”
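For readers curious what wiring Watson's speech services into a tutoring loop can look like, here is a rough sketch using IBM's current ibm-watson Python SDK (which postdates this article). The API keys, the voice choice and the answer_question() stand-in for Ali's reasoning are placeholders, not VST's implementation.

```python
# Rough sketch of a speech-in/speech-out tutoring turn using IBM Watson's
# Speech to Text and Text to Speech services via the ibm-watson Python SDK.
# Keys and answer_question() are placeholders for illustration only.
from ibm_watson import SpeechToTextV1, TextToSpeechV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

stt = SpeechToTextV1(authenticator=IAMAuthenticator("YOUR_STT_API_KEY"))
tts = TextToSpeechV1(authenticator=IAMAuthenticator("YOUR_TTS_API_KEY"))

def answer_question(question: str) -> str:
    # Placeholder for the tutor's meaning engine and knowledge library.
    return f"Here is what I can tell you about that: {question}"

def tutoring_turn(audio_path: str) -> bytes:
    """One turn of dialog: hear the student, work out an answer, speak it."""
    with open(audio_path, "rb") as audio:
        heard = stt.recognize(audio=audio, content_type="audio/wav").get_result()
    question = heard["results"][0]["alternatives"][0]["transcript"]
    reply = answer_question(question)
    spoken = tts.synthesize(reply, accept="audio/wav",
                            voice="en-US_AllisonV3Voice").get_result()
    return spoken.content  # WAV bytes to play back to the student
```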

A PowerPoint slide showing how VST’s meaning engine plugs into Watson products to translate unstructured data into structured data. According to Doyle, about 80 percent of the data that businesses increasingly depend on analyzing is unstructured.

Teaching Ali to Teach

To implement the enterpriseMind platform for use in education, VST won’t be modeling the way databases are engineered. Instead, the team will be modeling how teachers teach and students learn. To do so, EIS will allow Doyle and his knowledge engineers to study 10-minute teaching sessions conducted by a variety of expert teachers selected by Lynne Zielinski, education manager for EIS and one of the most decorated space educators in the world.

After studying multiple teachers teaching the same topic to a number of different students, Doyle believes that VST will have an effective model of what knowledge is being communicated and what procedural tasks are being executed by the teacher and by the student. Then, Ali will actually be put out into the world, where she will teach students, all while learning and correcting her programming in the process.


Elkowitz described the process as follows: “We take what the teachers say and do and build process models of success so that, as other questions come up (questions perhaps that haven’t even been answered), Ali will follow this process model that says, ‘Ok, the question’s been answered. Here's the words. Here’s the question. Here are some things I need to know. Here's where I need to go to find the answer. I've found the answer. Now, let me convert that to speech and report back to the student.’”
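Read as code, the process model Elkowitz describes amounts to a fixed sequence of steps, with the answer-finding handled by the knowledge side of the system. The sketch below is a hypothetical rendering of that flow; every function and the tiny knowledge store are placeholders.

```python
# Hypothetical rendering of the process model: understand the question, work
# out what is needed, find the answer, report back. Placeholder logic only.
KNOWLEDGE = {"orbit": "An orbit is the curved path an object follows around a planet or star."}

def parse_question(text: str) -> str:
    # Crude stand-in for the meaning engine: find the key concept word.
    for word in text.lower().split():
        if word.strip("?.,") in KNOWLEDGE:
            return word.strip("?.,")
    return "unknown"

def retrieve_answer(concept: str) -> str:
    # "Here's where I need to go to find the answer."
    return KNOWLEDGE.get(concept, "I will need to look that up and get back to you.")

def report_to_student(answer: str) -> str:
    # Placeholder for the convert-to-speech step.
    return f"[spoken] {answer}"

question = "What is an orbit?"          # Here's the question
concept = parse_question(question)      # Here are some things I need to know
answer = retrieve_answer(concept)       # I've found the answer
print(report_to_student(answer))        # Now report back to the student
```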

Additionally, Ali will rely on other machine-learning algorithms that will process data related to a user's face or voice to be able to recognize them for subsequent teaching sessions. Not only that, but, as Ali interacts with a user, she will also be able to pick up on the student’s learning style and ultimately cater her approach to meet the student's needs.

“Think what it would mean for developmentally disabled kids,” Alice Hoffman, project manager for EIS, added. “This tutor never tires of repeating the lessons in different ways until the child gets it. For handicapped kids, blind kids and deaf kids who need stimulation in a different way all the time throughout their life, Ali will be able to teach to these needs.”

A Better AI Mind for a Better Human Mind

VST will begin training Ali to tutor in the next few weeks, as EIS Deputy Education Manager Francis Dellutri performs one-on-one tutoring sessions with five students. The sessions will subsequently be transcribed into text, which VST’s knowledge engineers will use to develop process models for training Ali.

To see how Ali is processing data, VST has created an added bit of programming that actually spits out knowledge maps. These provide visual insight into the relationships that Ali is forming within and between words, sentences and stories. If one of VST's knowledge engineers encounters a glaring error regarding how Ali is dealing with the information she encounters, they can look at the map, understand what went wrong and then rewrite her code.
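A knowledge map of this kind can be thought of as a labeled graph over words, sentences and stories. The following sketch, using the networkx library purely for illustration, shows the sort of output an engineer might inspect; the nodes and relations are invented, and this is not VST's actual tool.

```python
# Minimal sketch of a "knowledge map": a directed graph of relationships
# between a story and the concepts within it, dumped as text so an engineer
# can inspect how things were linked. Uses networkx; example data invented.
import networkx as nx

graph = nx.DiGraph()
graph.add_edge("1993 WTC bombing (story)", "suspect", relation="has_role")
graph.add_edge("1993 WTC bombing (story)", "victims", relation="has_role")
graph.add_edge("suspect", "bombing", relation="accused_of")

# Print the map; with matplotlib, nx.draw() could render it visually instead.
for source, target, data in graph.edges(data=True):
    print(f"{source} --{data['relation']}--> {target}")
```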

Really, these visualizations are a glimpse into Ali’s brain. If EIS can obtain the necessary funding to complete Ali, Doyle would also like to map Ali’s brain against human brains using functional MRI (fMRI). Jack Gallant, a psychologist at the University of California, Berkeley, has already used fMRI to demonstrate that specific regions of the human brain show increased blood flow when a person listens to words.


Doyle aims to expand on this work by performing fMRIs on teachers while teaching and students while learning, with the goal of mapping blood flow associated with not just words, but sentences and stories. Ideally, this data could be compared to how Ali connects words, sentences and stories and provide evidence that her simulation of a human brain behaves like the real thing.

While this will provide empirical data for the study of AI, funding the EIS program will also mean the creation of a personal tutor for the more than 100 million students on the planet who currently lack access to any education. If universal Internet access really is in place by 2020 through groups like OneWeb and ONE, these students will be able to log onto a computer at a library or use a parent’s smartphone to learn with Ali.

The AI will additionally benefit students and teachers in established classrooms. As Doyle said, “We envision Ali as a tutor that will free teachers up to deal with problems so that students can engage in their own learning and really take it in within the dimensions of the course curriculum and time any direction they want to. They may want to go more deeply into one subject than another. As long as there’s time, they can do it. If Ali is on a tablet, students can take the program home and do what I used to do when I was a kid with a book, which is take it home from school and read it under the covers with a flashlight.”

EIS has already raised $27.5 million in in-kind donations. If EIS can obtain the $32 million more needed to launch its NewSpace education program, not only will the organization be able to train Ali, it will also deploy Ali as part of the 3D-printed NSS Enterprise, which will carry over 100 active and passive experiments from students K through postgrad into low Earth orbit. Ali will act as the voice of the spacecraft so that international student teams can ask her questions regarding their experiments. She will even be responsible for activating some of the projects aboard the NSS Enterprise.

Those who believe in free education and advanced educational tools like Ali can help bring her to life. Small donors can give $20 to EIS to become “virtual crew members” and have their names flown aboard the spacecraft. Major donors, meanwhile, have the opportunity to choose Ali's visage and her voice and to have their brand featured on Ali's access portal. Donors large and small will be able to say they had a part in developing the first true artificial intelligence.