An academic researcher’s talk on Monday at the Fog World Congress in San Francisco demonstrated both the limits of distributed computing structures and their critical importance to future IoT and augmented reality (AR) implementations.
Dr. Maria Gorlatova’s recent work has centered on fog and edge architecture – specifically, how particular ways of architecting those systems affect latency and response time. She is studying the differences between systems that run on- and off-campus and that have different points of execution, which is the academic way of saying “where the computational work is done.”
The difference between the cloud, a highly centralized architecture, and fog computing is immense. Fog is the industry’s current term of art for systems that retain the abstracted nature of the cloud but do their actual work much closer to the endpoint than the cloud’s faraway data centers. Both fog and its close cousin, edge computing, are useful alternatives to cloud architecture.
“Fundamentally, our new devices that are generating high-bandwidth traffic and high-volume, high-velocity data just cannot afford to transfer all of the data to a centralized hub for processing,” Gorlatova said.
Some of the trade-offs, she said, are already fairly well known. Many tasks that aren’t terribly demanding from a compute or network perspective are best accomplished at the edge; for more complex tasks, however, the edge’s latency advantage is outweighed by the cloud’s more potent computing capabilities.
“When the task is small, the response time is dominated by the communication time, and the communication time is much smaller for edge systems,” she said. “Once you talk about larger tasks, however, there are more resources in the cloud, so computing time becomes more of a component in response time and the cloud connection will be faster than the edge.”
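The trade-off she describes can be sketched as a toy cost model: total response time is transfer time plus compute time, and which side wins flips with task size. The bandwidth and throughput numbers below are illustrative assumptions, not figures from the talk.

```python
# Toy model of the edge-vs-cloud trade-off: response time is transfer
# time plus compute time. Small tasks are dominated by communication
# (edge wins); large tasks are dominated by computation (cloud wins).
# All numbers are illustrative assumptions, not figures from the talk.

def response_time(data_mb, work_units, bandwidth_mbps, units_per_s):
    """Return end-to-end response time in seconds."""
    transfer = data_mb * 8 / bandwidth_mbps   # network transfer time
    compute = work_units / units_per_s        # processing time
    return transfer + compute

# Assumed profiles: the edge link is fast but the node is slow;
# the cloud link is slower but the data center is far more powerful.
EDGE = dict(bandwidth_mbps=100, units_per_s=10)
CLOUD = dict(bandwidth_mbps=10, units_per_s=1000)

for data_mb, work in [(1, 2), (1, 500)]:      # a small task, a large task
    edge_t = response_time(data_mb, work, **EDGE)
    cloud_t = response_time(data_mb, work, **CLOUD)
    print(f"{work:>4} work units: edge {edge_t:.2f}s, cloud {cloud_t:.2f}s")
```

With these assumed numbers the edge answers the small task sooner, while the cloud answers the large task sooner, which is exactly the crossover Gorlatova describes.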
“We also noted that connections to the cloud are much faster in on-campus conditions than they are in nearby residential areas, and this is well-known – connections from campuses to the cloud are optimized.”
It’s an important point for academic researchers, she noted. Testing systems in areas that might not have a university laboratory’s optimized network connections yields results that are much more applicable to the real-world challenges faced by businesses.
The complexity of these systems makes them hard to study, according to Dr. Gorlatova. Each deployment differs enough from the next that, without many data points, it is difficult to generalize about an architecture’s effect on response time.
Secure, responsive augmented reality
Some of the lessons from that research can seem self-evident, but they have wide-ranging implications. Gorlatova’s example was the security problem posed by bad actors influencing augmented reality systems – for example, creating huge, obtrusive holograms that block a user’s view of the real world, which poses potentially serious safety issues.
Augmented reality can be educational and useful to businesses, and it is set to become a mainstream technology, Gorlatova said, as soon as the headsets become smaller and more usable and the apps become slightly more sophisticated.
“This is exactly where fog would come in, as fog can very surely address all these issues,” she said.
Solutions to the vision-blocking problem, which was first described a year ago, center on fixed policy recommendations, which have to be implemented manually by human beings. By applying machine learning to the problem, however, an AR system could be taught to recognize when holograms are obstructing a user’s view and simply move them out of the way or make them transparent.
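To make the fixed-policy idea concrete, here is a hypothetical sketch of an obstruction check: if a hologram's screen-space box covers too much of the user's central view, flag it for mitigation. The function names, coordinates, and threshold are all assumptions for illustration; this is the kind of hand-written rule that Gorlatova's work would replace with a learned model, not her actual system.

```python
# Hypothetical fixed-policy obstruction check for AR holograms.
# Rectangles are (x_min, y_min, x_max, y_max) in normalized screen
# coordinates. This illustrates the hand-written-rule approach the
# article describes; it is not Gorlatova's system.

def overlap_fraction(holo, view):
    """Fraction of the view rectangle covered by the hologram."""
    w = min(holo[2], view[2]) - max(holo[0], view[0])
    h = min(holo[3], view[3]) - max(holo[1], view[1])
    if w <= 0 or h <= 0:
        return 0.0                 # no overlap at all
    view_area = (view[2] - view[0]) * (view[3] - view[1])
    return (w * h) / view_area

def mitigate(holo, view, threshold=0.3):
    """Return an action for a hologram relative to the central view."""
    if overlap_fraction(holo, view) > threshold:
        return "make_transparent"  # or relocate it out of the view cone
    return "keep"

central_view = (0.3, 0.3, 0.7, 0.7)   # assumed central region of the display
print(mitigate((0.0, 0.0, 1.0, 1.0), central_view))  # covers the view
print(mitigate((0.8, 0.8, 0.9, 0.9), central_view))  # off to the side
```

An ML-based system would replace the fixed `threshold` rule with a model trained to judge obstruction from context, which is the intelligence the article says current AR systems lack.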
“Overall, this level of intelligence in AR systems is above and beyond what current AR systems are capable of,” she said. “And we are actively exploring several ways to address it.”
“Fog offers a natural chokepoint for reducing the resources consumed on mobile nodes in multi-user settings, as well as a natural point for making those experiences more intelligent.” She and her team are currently working on a fog-based pilot deployment for secure, responsive AR on Duke’s campus, and they hope to have a system in place early next year.