Telefónica and MATSUKO have announced the release of a spatial computing experience for holographic meetings, created in collaboration with NVIDIA, that will allow creators to connect and collaborate seamlessly as holograms in real time. People can attend holographic meetings as if they were there in person and share their 3D creations using only their smartphone camera, opening up a new, immersive mode of communication.
The Rise of Spatial Computing
Spatial computing, poised to take center stage in 2024, represents a transformative leap in how we interact with technology and perceive our surroundings. At its core, spatial computing integrates virtual and augmented reality technologies with advanced sensors and algorithms to create immersive digital experiences that seamlessly blend with the physical world.
A key driver of the rise of spatial computing is its ability to contextualize digital information within real-world environments, enabling users to interact with digital content in a more intuitive and natural manner. Instead of being confined to traditional interfaces such as screens and keyboards, users can engage with that content through gestures, voice commands, and even spatial awareness.
MATSUKO and Telefónica achieved a notable breakthrough by enabling full ‘real’ presence experiences during 3D meetings. Building on its patented single-camera holographic communication technology, MATSUKO is now bringing its product to Apple Vision Pro users, and it is currently available for pre-order.
The experience, which will be showcased at Mobile World Congress 2024 in Barcelona, will enable users to create their own holograms with only their smartphone cameras, as well as stream high-resolution 3D objects and scenes via 5G and Edge technology.
Leveraging 5G, AI and Edge Technology
Thanks to the ultra-low latency and high bandwidth of 5G networks, combined with the processing power of Edge computing, users can capture real-world objects and scenes with their smartphone cameras and instantly transform them into high-fidelity holographic representations. Advanced AI algorithms map and render three-dimensional objects in real time, drawing on the computational resources available at the network edge.
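As an illustration, the round trip from a phone camera to an edge reconstruction service might look roughly like the sketch below. None of this is MATSUKO's actual code: the endpoint URL, the request format, and the returned glTF asset are assumptions made purely for the example.

```python
# Minimal sketch (not MATSUKO's pipeline): capture a frame on the device,
# hand it to a hypothetical edge reconstruction service over HTTP, and save
# the returned 3D asset. Endpoint and response format are assumptions.
import cv2        # pip install opencv-python
import requests

EDGE_RECONSTRUCT_URL = "https://edge.example.com/v1/reconstruct"  # hypothetical endpoint

def capture_frame(camera_index: int = 0) -> bytes:
    """Grab a single frame from the device camera and JPEG-encode it."""
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("Could not read from camera")
    ok, jpeg = cv2.imencode(".jpg", frame)
    if not ok:
        raise RuntimeError("JPEG encoding failed")
    return jpeg.tobytes()

def reconstruct_on_edge(jpeg_bytes: bytes) -> bytes:
    """Send the frame to the edge service and return the reconstructed 3D asset (e.g. glTF)."""
    resp = requests.post(
        EDGE_RECONSTRUCT_URL,
        files={"frame": ("frame.jpg", jpeg_bytes, "image/jpeg")},
        timeout=5,  # tight latency budget expected on a 5G + edge path
    )
    resp.raise_for_status()
    return resp.content

if __name__ == "__main__":
    asset = reconstruct_on_edge(capture_frame())
    with open("hologram.glb", "wb") as f:
        f.write(asset)
```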
Furthermore, the combination of 5G and Edge computing enables seamless streaming of high-resolution 3D objects and scenes to users’ devices, allowing for immersive viewing experiences without the need for bulky hardware or specialized equipment. Whether it’s streaming live holographic performances, virtual tours of remote locations, or interactive 3D models, the possibilities are virtually limitless.
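On the streaming side, a client could fetch a large scene progressively rather than in one blocking download, so rendering can begin before the transfer completes. The sketch below assumes a hypothetical asset URL and a plain HTTP transport; the actual protocol used by the platform is not public.

```python
# Rough sketch of progressive streaming of a high-resolution 3D scene.
# The URL and chunk size are placeholders chosen for the example.
import requests

SCENE_URL = "https://edge.example.com/v1/scenes/demo.glb"  # hypothetical asset URL
CHUNK_SIZE = 256 * 1024  # 256 KiB chunks keep the first bytes renderable quickly

def stream_scene(url: str, out_path: str) -> None:
    """Download a 3D scene in chunks so rendering can start before the transfer finishes."""
    with requests.get(url, stream=True, timeout=10) as resp:
        resp.raise_for_status()
        with open(out_path, "wb") as f:
            for chunk in resp.iter_content(chunk_size=CHUNK_SIZE):
                f.write(chunk)
                # A real viewer would hand each chunk to an incremental glTF parser here.

if __name__ == "__main__":
    stream_scene(SCENE_URL, "scene.glb")
```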
Moreover, the integration of AI algorithms at the edge enables intelligent processing of holographic content, allowing real-time adjustments and enhancements that optimize the viewing experience based on factors such as device capabilities, network conditions, and user preferences.
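One simple way to picture this kind of edge-side adaptation is a quality-tier selector that weighs device capability, measured bandwidth, and user preference. The tiers, thresholds, and field names below are invented for illustration and do not reflect the platform's internals.

```python
# Illustrative only: pick a hologram quality tier from device capability,
# measured bandwidth, and user preference. Tiers and thresholds are assumed.
from dataclasses import dataclass

QUALITY_TIERS = {          # vertices per frame, texture size, bandwidth floor (assumed values)
    "low":    {"vertices": 20_000,  "texture_px": 512,  "min_mbps": 5},
    "medium": {"vertices": 80_000,  "texture_px": 1024, "min_mbps": 25},
    "high":   {"vertices": 250_000, "texture_px": 2048, "min_mbps": 80},
}

@dataclass
class ClientContext:
    max_vertices: int        # what the headset or phone GPU can comfortably render
    bandwidth_mbps: float    # measured downlink on the current 5G connection
    preferred_tier: str      # user preference, e.g. "high" to favor fidelity

def select_tier(ctx: ClientContext) -> str:
    """Return the best tier the device and network can sustain, capped by preference."""
    order = ["low", "medium", "high"]
    best = "low"
    for tier in order:
        spec = QUALITY_TIERS[tier]
        if spec["vertices"] <= ctx.max_vertices and spec["min_mbps"] <= ctx.bandwidth_mbps:
            best = tier
    # Never exceed what the user asked for.
    return order[min(order.index(best), order.index(ctx.preferred_tier))]

print(select_tier(ClientContext(max_vertices=100_000, bandwidth_mbps=60, preferred_tier="high")))
# -> "medium": the network rules out "high"; the user preference allows anything up to it.
```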
The technology will use the NVIDIA Maxine AI developer platform to generate realistic eye contact and facial expressions, thereby increasing the sense of presence and immersion. Users will be able to interact holographically with their colleagues and 3D projects, just as if they were in the same physical space, without the delays, misunderstandings, or limitations of 2D communication.
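Conceptually, an eye-contact or expression enhancement step slots into the per-frame capture loop before the frame reaches the 3D reconstruction and streaming stages. The sketch below shows only that wiring: redirect_gaze and send_to_hologram_pipeline are placeholders introduced for the example, not calls from the Maxine SDK.

```python
# Schematic sketch of where a per-frame enhancement step would sit in the
# capture loop. redirect_gaze() stands in for a vendor gaze/expression model;
# it is NOT the Maxine API, whose actual SDK calls are not shown here.
import cv2
import numpy as np

def redirect_gaze(frame: np.ndarray) -> np.ndarray:
    """Placeholder: a real implementation would run a gaze-redirection model here."""
    return frame  # identity pass-through for the sketch

def send_to_hologram_pipeline(frame: np.ndarray) -> None:
    """Placeholder for handing the enhanced frame to the holographic encoder/uplink."""
    pass

def capture_loop(camera_index: int = 0, max_frames: int = 300) -> None:
    cap = cv2.VideoCapture(camera_index)
    try:
        for _ in range(max_frames):
            ok, frame = cap.read()
            if not ok:
                break
            enhanced = redirect_gaze(frame)      # eye contact / expression enhancement
            send_to_hologram_pipeline(enhanced)  # then on to 3D reconstruction and streaming
    finally:
        cap.release()

if __name__ == "__main__":
    capture_loop()
```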