
Computer scientists develop software that can look a few minutes into the future

An article in Alpha Galileo reported a development straight out of the Tom Cruise sci-fi movie Minority Report: German computer scientists have created an artificial intelligence that can predict what a person is going to do a few minutes in advance.

The software program is trained by watching videos that depict the step-by-step process of an action. Once the machine has learned the sequence of events, it can accurately guess the next action that the performer will take.

This predictive ability has been likened to the perfect butler of British movies, who knows what his master wants before a single word is spoken. The system was designed by Dr. Jürgen Gall, a professor at the University of Bonn in Germany, who said he and his fellow researchers wanted to predict the timing and duration of activities that will happen minutes or even hours in the future.

A kitchen robot with predictive ability would know the right time to pass the ingredients to the human chef and when to start pre-heating the oven. It would also remind the chef of any step he may have forgotten.

Meanwhile, a Roomba or similar robot vacuum cleaner with the same predictive capability would know better than to enter the kitchen during that time. Instead, it would stay in the living room and vacuum the carpet.

Predictive AI trained on hours of salad-making videos

It is easy for humans to predict the actions that other humans will take. Computers, on the other hand, have difficulty predicting what humans will do next.

The self-learning software developed by Dr. Gall and his research team is the first of its kind. It can accurately guess the timing and duration of activities several minutes in the future.

The AI was trained on four hours' worth of videos showing chefs preparing different types of salads. Each six-minute recording featured around 20 different actions, and the videos were annotated with the exact time each action started and how long it took to complete.

By watching 40 videos, the AI was able to learn the sequence of actions during each task. It was also able to remember how long each action and task took.
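The learning step described here — working out which action tends to follow which, and how long each action takes on average — can be sketched with a toy frequency model. The action labels, durations, and the simple counting approach below are illustrative assumptions for the sake of the example, not the Bonn team's actual method.

```python
from collections import defaultdict

# Toy annotated videos: each is a list of (action, duration_seconds) pairs.
# The labels and numbers are made up; the real system learned from four
# hours of annotated salad-making footage.
videos = [
    [("cut_tomato", 30), ("peel_cucumber", 45), ("mix", 20)],
    [("cut_tomato", 25), ("peel_cucumber", 50), ("add_dressing", 15), ("mix", 20)],
    [("peel_cucumber", 40), ("cut_tomato", 35), ("mix", 25)],
]

# Count which action follows which, and record every observed duration.
transitions = defaultdict(lambda: defaultdict(int))
durations = defaultdict(list)
for video in videos:
    for (action, _), (next_action, _) in zip(video, video[1:]):
        transitions[action][next_action] += 1
    for action, dur in video:
        durations[action].append(dur)

def predict_next(action):
    """Most frequently observed successor of `action`, with its mean duration."""
    followers = transitions[action]
    if not followers:
        return None
    best = max(followers, key=followers.get)
    mean_dur = sum(durations[best]) / len(durations[best])
    return best, mean_dur

print(predict_next("cut_tomato"))  # → ('peel_cucumber', 45.0)
```

Chaining `predict_next` on its own output would roll the forecast further into the future — which is also where such a simple model accumulates error fastest.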

The AI learned all of this despite differences in each chef's approach to food preparation. It was not confused by deviations from the standard procedure required by a particular recipe.

AI can accurately predict events three minutes into the future

“Then we tested how successful the learning process was,” said Gall. “For this we confronted the software with videos that it had not seen before.”

For the test, the AI was informed about the contents of the first part of a new set of videos. Based on that information, it had to predict what would happen in the later parts of the recording.

“Accuracy was over 40 percent for short forecast periods, but then dropped the more the algorithm had to look into the future,” said Dr. Gall.

The cut-off time was three minutes. For activities more than three minutes in the future, the computer correctly guessed both the activity and its timing in only 15 percent of cases.
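The horizon-dependent accuracy Gall describes can be illustrated with a toy evaluation: compare a predicted timeline of per-second action labels against the ground truth, bucketed by how far into the future each second lies. The timelines, labels, and bucket size below are invented for illustration; they are not the paper's evaluation protocol.

```python
def accuracy_by_horizon(truth, prediction, bucket=60):
    """Fraction of correct per-second labels in each `bucket`-second horizon."""
    results = []
    for start in range(0, len(truth), bucket):
        t = truth[start:start + bucket]
        p = prediction[start:start + bucket]
        correct = sum(a == b for a, b in zip(t, p))
        results.append(correct / len(t))
    return results

# Invented ground-truth and predicted timelines (one label per second).
# The prediction drifts further from the truth the later it gets, so
# accuracy in later buckets is lower.
truth = ["cut"] * 60 + ["peel"] * 60 + ["mix"] * 60
prediction = ["cut"] * 60 + ["peel"] * 50 + ["serve"] * 70

print(accuracy_by_horizon(truth, prediction))  # → [1.0, 0.8333..., 0.0]
```

Bucketing by horizon like this makes the article's pattern concrete: near-perfect accuracy in the first minute, then a steady drop as the forecast reaches further ahead.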

The algorithm has a long way to go before it can reliably predict human behavior. When the AI had to interpret the first part of a video on its own, without being told what was happening, it made more mistakes in its predictions. Gall said noise in the data confuses the software.