The advancement of edge computing, along with increasingly powerful chips, may make it possible for artificial intelligence (AI) to operate without wide-area networks (WANs).
Researchers working on a project at the University of Waterloo say they can make AI adapt as computational power and memory are taken away. If they succeed, neural networks could function free of the internet and the cloud, with the advantages of better privacy, lower data-transmission costs, portability and the use of AI applications in geographically remote areas.
The scientists say they can teach AI to get by without lots of resources.
The group does this by copying nature: it places the neural network in a virtual environment and "then progressively and repeatedly deprive[s] it of resources." The AI evolves and adapts in response, the team members say in a news article on the school's website.
The engine essentially learns to work around its lack of resources; deep-learning AI typically demands a great deal of power and processing capability.
“The deep-learning AI responds by adapting and changing itself to keep functioning,” the researchers say.
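The deprive-and-adapt loop the researchers describe resembles iterative pruning with re-training. The toy sketch below is my own hypothetical illustration of that idea, not the Waterloo team's code: it trains a small logistic-regression "network" on synthetic data, then repeatedly removes the smallest-magnitude weights and lets the model re-train so it can keep functioning with what survives.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-class data: 20 features, only the first 4 are informative.
X = rng.normal(size=(400, 20))
true_w = np.zeros(20)
true_w[:4] = [2.0, -1.5, 1.0, -2.5]
y = (X @ true_w > 0).astype(float)

def train(w, steps, lr=0.1, mask=None):
    """Plain logistic-regression updates; a mask keeps pruned weights at zero."""
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w)))
        grad = X.T @ (p - y) / len(y)
        w = w - lr * grad
        if mask is not None:
            w = w * mask          # pruned weights stay removed
    return w

def accuracy(w):
    return float(((X @ w > 0) == (y > 0.5)).mean())

w = train(np.zeros(20), steps=300)
mask = np.ones(20)
for round_ in range(4):
    # "Deprive" the model: prune the 25% smallest surviving weights.
    alive = np.flatnonzero(mask)
    k = max(1, len(alive) // 4)
    drop = alive[np.argsort(np.abs(w[alive]))[:k]]
    mask[drop] = 0.0
    # Let the model re-adapt with only its surviving weights.
    w = train(w * mask, steps=150, mask=mask)
    print(f"round {round_}: {int(mask.sum())}/20 weights survive, "
          f"train accuracy {accuracy(w):.2f}")
```

Each round removes a quarter of the surviving weights, yet the re-training step lets the model retain most of its accuracy with the parameters that remain — the spirit of adapting to survive the deprivation.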
Making AI smaller
Whenever computational power or memory is removed from the school’s experimental AI, it becomes smaller and is thus “able to survive in these environments,” says Mohammad Javad Shafiee, a research professor at Waterloo and the system’s co-creator.
Fitting the deep-learning engine onto a chip for use in robots, smartphones or drones, where both connectivity and weight can be issues, is one possible use for the technology, the researchers say.
“When put on a chip and embedded in a smartphone, such compact AI could run its speech-activated virtual assistant and other intelligent features,” the news article continues.
The University of Waterloo’s stand-alone AI isn’t the first edge-ified AI that we’ve seen, though. Unrelated to the Waterloo project, Intel earlier this year launched its Movidius Neural Compute Stick.
That ground-breaking, no-cloud-required, plug-and-play neural compute device (retailing at under $100) is geared towards prototyping and then deploying neural vision networks at the edge with no internet needed. It’s no larger than a computer memory stick.
Gaining momentum from that launch, Movidius’s technology is also being used in Google’s upcoming Raspberry Pi-based hobbyist AIY Vision Kit, a do-it-yourself neural vision processor for the Pi camera that costs less than $50. It, too, is portable, requiring only the Pi computer, the camera and the Movidius-powered VisionBonnet Raspberry Pi add-on board. Again, no network is needed. The Google TensorFlow-based software can recognize common objects, faces and animals. Movidius’s vision processing can also now be found in security cameras, drones and industrial machines.
In the case of the University of Waterloo’s AI project, the researchers say they have achieved a 200-fold reduction in the size of the overall deep-learning AI software for object recognition.
Add to that the absence of a need for a network, and “this could be an enabler in many fields where people are struggling to get deep-learning AI in an operational form,” the Waterloo scientists say.
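To get a feel for how aggressive pruning translates into that kind of storage saving, here is a back-of-the-envelope sketch (my own illustration, not the Waterloo pipeline): a dense layer of a million float32 weights is shrunk by keeping only the largest 0.5% of them, stored as value/index pairs.

```python
import numpy as np

rng = np.random.default_rng(1)
dense = rng.normal(size=(1000, 1000)).astype(np.float32)  # ~4 MB layer

k = dense.size // 200                    # keep the top 0.5% of weights
keep = np.argpartition(np.abs(dense).ravel(), -k)[-k:]
sparse_values = dense.ravel()[keep]                 # surviving weights
sparse_indices = keep.astype(np.int32)              # their positions

dense_bytes = dense.nbytes
sparse_bytes = sparse_values.nbytes + sparse_indices.nbytes
print(f"dense: {dense_bytes} bytes, sparse: {sparse_bytes} bytes, "
      f"ratio: {dense_bytes / sparse_bytes:.0f}x")
```

Even counting the index overhead, keeping 1 weight in 200 yields roughly a 100-fold smaller footprint for this layer; reductions on the order of the 200-fold figure the researchers report would need sparsity of that order combined with other compression of the surviving values.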