Wireless infrastructures are enabling real-world AI

Wireless infrastructures are bringing artificial intelligence applications to the physical world. The impacts of the ongoing ultra-densification of wireless communications were foreseen nearly a century ago by Nikola Tesla. In an interview with Collier’s Weekly on January 30, 1926, he expressed a vision: “When wireless is perfectly applied, the whole Earth will be converted into a huge brain, which, in fact, it is, all things being particles of a real and rhythmic whole.” This prediction astonishingly mirrors today’s advancements in radio technologies and the integration of distributed artificial intelligence with the physical world, a research challenge that we investigate within the 6G Flagship.

Recent developments in artificial intelligence have been propelled by the vast amount of data, including images and textual content, available from the internet. Integrating AI with physical reality could, in the long run, enable robots to perform regular household chores, from loading washing machines to making beds. We anticipate seeing innovations in this category by the early 2030s to assist us in everyday life.

When higher frequency bands are used for communications, radios can serve as sensors in addition to transmitting data. They work without illumination and can supplement, and occasionally even replace, cameras. The sensing modalities range from radar operation to channel analysis. With antenna arrays, radio energy can be directed precisely, enabling effective sensing strategies. Computer vision and radio communications face similar hurdles, suggesting that developing solutions for both in tandem could lead to their seamless integration.
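
As a rough illustration of how an antenna array directs radio energy, the sketch below computes the array factor of a uniform linear array steered toward a chosen angle. The element count, spacing, and angles are illustrative assumptions, not parameters of any system described here.

```python
import numpy as np

# Array factor of a uniform linear array (ULA) steered to a target angle.
# All parameters below are illustrative assumptions.
n_elements = 16          # assumed number of antenna elements
spacing = 0.5            # element spacing in wavelengths (lambda/2)
steer_deg = 30.0         # assumed steering direction

k = 2 * np.pi            # wavenumber in units of 1/wavelength
positions = np.arange(n_elements) * spacing

# Phase weights that align the element signals toward the steering angle.
weights = np.exp(-1j * k * positions * np.sin(np.deg2rad(steer_deg)))

angles = np.deg2rad(np.linspace(-90, 90, 721))
# Array factor: magnitude of the coherent sum over all look directions.
af = np.array([np.abs(np.sum(weights * np.exp(1j * k * positions * np.sin(a))))
               for a in angles]) / n_elements

peak = np.rad2deg(angles[np.argmax(af)])
print(f"Beam peak at {peak:.1f} deg (steered to {steer_deg} deg)")
```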

Intelligent, ubiquitous wireless-vision systems in our environment could improve traffic safety, for example, by making it possible to see cross-traffic around corners. The same radio-based sensing technology, built on communication radios, could be enhanced to detect medical emergencies such as ventricular fibrillation.

Our research aims to fulfil Nikola Tesla’s vision and to go beyond it. The bottlenecks we address include sensing, machine learning, and energy dissipation. Faithful 3D modelling provides a digital twin, a virtual replica of the real environment. In the virtual domain, artificial intelligence can, via trial and error, simulate combined sensing and communication and learn action strategies that can then be safely transferred to reality.

We have built a multi-modal sensing laboratory as an experimentation environment that bridges real and virtual representations. Efforts are underway to add humans to the simulations and experiments. Deep reinforcement learning is a promising method for handling the immense state space of the real world.
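
As a minimal sketch of such trial-and-error learning in a simulated environment, the toy example below trains a tabular Q-learning agent on a small grid standing in for a digital twin. In practice, the table would be replaced by a deep network to cope with the real world's state space; the grid, rewards, and hyperparameters here are illustrative assumptions.

```python
import numpy as np

# Toy stand-in for learning in a digital twin: a tabular Q-learning agent
# learns, by trial and error, to reach a goal cell on a 5x5 grid.
rng = np.random.default_rng(0)
size, goal = 5, (4, 4)
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
q = np.zeros((size, size, len(actions)))       # Q-value table

alpha, gamma, eps = 0.5, 0.95, 0.1             # assumed hyperparameters

def step(state, a):
    """Simulated environment: move on the grid, small cost per step."""
    r, c = state
    dr, dc = actions[a]
    nxt = (min(max(r + dr, 0), size - 1), min(max(c + dc, 0), size - 1))
    reward = 1.0 if nxt == goal else -0.01
    return nxt, reward, nxt == goal

for episode in range(500):
    state, done = (0, 0), False
    while not done:
        # Epsilon-greedy exploration: mostly exploit, sometimes explore.
        a = int(rng.integers(len(actions))) if rng.random() < eps \
            else int(np.argmax(q[state]))
        nxt, reward, done = step(state, a)
        # Q-learning update toward the bootstrapped target.
        target = reward + gamma * np.max(q[nxt]) * (not done)
        q[state][a] += alpha * (target - q[state][a])
        state = nxt

print("Greedy first move:", ["up", "down", "left", "right"][int(np.argmax(q[0, 0]))])
```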

Cooper’s law, which states that the capacity of wireless communications doubles every 30 months, has held true since Marconi’s first transmissions across the Atlantic Ocean. Still, most of the gains have come from reduced cell sizes and shorter transmission distances. This development will scale current networks into ultra-dense scenarios where millions of devices must work autonomously, self-sufficiently, and in coordination.
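
A quick back-of-envelope check shows the scale this implies; the dates below are assumptions chosen for illustration, with 1901 as the year of Marconi's transatlantic transmission.

```python
# Cooper's law: one capacity doubling every 30 months.
start, now = 1901, 2025                  # assumed start and end years
doublings = (now - start) * 12 / 30      # months elapsed / months per doubling
print(f"{doublings:.0f} doublings -> ~10^{doublings * 0.301:.0f}x capacity")
# Roughly 50 doublings, i.e. about a quadrillion-fold (10^15) increase.
```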

The challenges of scaling distributed intelligence include collecting representative real-world measurement data with radios and other sensors, and achieving energy efficiency across the vast number of nodes that must be self-sufficient.

The energy consumption of current AI methods is dominated by the general matrix multiplication (GEMM) operations at the core of deep-learning algorithms. Currently, we’re in discussions about showcasing the energy efficiency of our error-resilient, low-voltage matrix multiplication method to a supercomputer operator. While the solution was initially developed for MIMO baseband processors, it also applies to machine learning.
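
To illustrate why GEMM dominates, the sketch below counts the multiply-accumulate operations in a small fully connected network with assumed, illustrative layer sizes; essentially all of the arithmetic lands in the matrix multiplications.

```python
# Operation count for a small fully connected network.
# Layer sizes and batch size are illustrative assumptions.
layers = [(784, 512), (512, 512), (512, 10)]   # (inputs, outputs) per layer
batch = 64

# GEMM work: one multiply-accumulate per (input, output) pair per sample.
gemm_macs = sum(batch * n_in * n_out for n_in, n_out in layers)
# Elementwise work (bias adds + activations) scales only with outputs.
elementwise_ops = sum(2 * batch * n_out for _, n_out in layers)

total = gemm_macs + elementwise_ops
print(f"GEMM share of operations: {100 * gemm_macs / total:.1f}%")
# Prints ~99.7%: the matrix multiplications dominate the arithmetic.
```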

We lead the convergence of spatial AI and wireless communications with a visionary approach that unites multimodal sensing and 3D modelling, underpinned by robust partnerships and state-of-the-art experimental infrastructure. This fusion of technology and collaboration allows us to create innovative solutions for real-world applications beyond what is envisioned today.

About the Writers

Associate Professor Miguel Bordallo López works at the University of Oulu’s 6G Flagship. His work focuses on the convergence of computer vision and telecommunications, developing multimodal sensing and distributed intelligence methods enabled by 6G technology.

Olli Silvén is a former Professor and Lead of the Center for Machine Vision and Signal Analysis (CMVS) at the University of Oulu.