Volograms launches Volu app for easy AR and VR content creation from your phone

Artificial intelligence (AI) and volumetric video firm Volograms has launched the public version of its 3D content creation app, Volu. Using deep learning and computer vision technologies, its AI-powered mobile content creation platform enables smartphone users to easily create, share and play with immersive, dynamic augmented reality (AR) and virtual reality (VR) content.

Volu is an extension of Volograms’ mission to make AR/VR content creation more accessible. According to Statista, AR users topped 90 million in the US alone in 2021, and 2.4 billion mobile AR users are expected worldwide by 2023. Despite this, the company notes that content creation tools for AR and VR have to date been too inaccessible, expensive, complicated or rudimentary to earn mass appeal.

“Augmented and virtual reality will change our communication patterns and our everyday lives, similar to the advent of the internet, social media and smartphones,” said Volograms CEO and Co-founder, Rafael Pagés. “Eventually, it will be ubiquitous. However, just like every other technology leap, first it needs to become more accessible. Putting the power of dynamic 3D content creation into every hand, pocket or purse carrying a smartphone with our Volu app is the first step. We are enabling user-generated content for AR by turning standard smartphone cameras into AR-ready cameras.”

Based on feedback from thousands of Volu beta users around the globe, Volograms fine-tuned key capabilities within the new version of the app before general release. This included enhancing reliability, speeding up 3D reconstruction, incorporating a self-capture timer, and adding new creative effects and algorithms to improve the quality of critical details such as facial features. In addition, the company is working to add more advanced sharing functionality for easier co-creation, and to bring the app to Android devices.

App features include single-view volumetric capture, which allows for 3D reconstruction from a single, mobile camera viewpoint. Automatic foreground segmentation eliminates the need for a green screen and allows the app to be used in uncontrolled environments, even outdoors. Markerless motion capture provides 3D skeleton estimation, capturing human movement without any additional equipment or sensors.

The app also offers compatibility with advanced sensors, including LiDAR, to provide depth-based perspective correction and generate more accurate results; comprehensive cloud-based processing with keyframe-based sequence encoding and compression; and integration with machine learning tools to move processing on-device and, eventually, enable real-time streaming over 5G.
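Keyframe-based sequence encoding, mentioned above, is a common way to compress dynamic 3D data: full frames are stored periodically, and in-between frames store only the change from their predecessor. The following is a minimal illustrative sketch of that general idea (not Volograms' actual codec), assuming each frame is a NumPy array of mesh vertex positions:

```python
import numpy as np

def encode(frames, keyframe_interval=5):
    """Encode a sequence of vertex arrays as keyframes plus deltas.

    Every `keyframe_interval`-th frame is stored in full; all other
    frames store only the difference from the previous frame, which
    stays small (and compresses well) when motion is smooth.
    """
    encoded = []
    for i, frame in enumerate(frames):
        if i % keyframe_interval == 0:
            encoded.append(("key", frame.copy()))
        else:
            encoded.append(("delta", frame - frames[i - 1]))
    return encoded

def decode(encoded):
    """Reconstruct the original frame sequence from keyframes and deltas."""
    frames = []
    for kind, data in encoded:
        if kind == "key":
            frames.append(data.copy())
        else:
            frames.append(frames[-1] + data)
    return frames
```

Because each keyframe resets the reconstruction, a playback client can also seek to any keyframe without decoding the whole sequence, which matters for streaming.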
