The podcasting technology firm Backtracks is adding new features to its technology stack, most notably the ability for listeners to interact with podcast and audio content through head movements or gestures.
“We are beyond excited for what our newly released head movement and gesture detection capabilities mean for the future of podcasting and audio consumption,” said CEO Jonathan Gill. “Publishers and app creators can now create dynamic, interactive and totally immersive audio storytelling experiences based on human activity and gestures like nods, and the accompanying applications and opportunities of this technology are truly endless.”
Backtracks’ head motion and gesture detection technology works by capturing data from AirPods Pro and Apple device sensors and combining it with audio analytics processed by Backtracks’ audio and podcast analytics web services. Its activity detection uses sensors in phones and watches to capture analytics and data. But in a nod to privacy concerns, Backtracks says it is all done in a privacy-first manner, without using any visual, camera, or personally identifiable user data.
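Backtracks has not published its detection internals, but on Apple devices AirPods Pro head attitude (pitch, roll, yaw) is exposed through CoreMotion’s CMHeadphoneMotionManager, which is the kind of sensor stream such a system could build on. A platform-neutral sketch of how a vertical nod might be recognized from a stream of pitch samples follows; the thresholds and the function name are illustrative assumptions, not Backtracks’ actual parameters:

```python
def detect_nod(pitch_samples, dip=-0.25, rise=-0.05):
    """Return True if a pitch trace (radians) shows a down-then-up
    head movement: the chin dips below `dip`, then recovers above
    `rise`. Thresholds are illustrative, not Backtracks' values."""
    dipped = False
    for pitch in pitch_samples:
        if not dipped and pitch < dip:
            dipped = True            # head tilted down far enough
        elif dipped and pitch > rise:
            return True              # head came back up: a nod
    return False

# Synthetic trace: level, dip down, return to level.
print(detect_nod([0.0, -0.1, -0.3, -0.35, -0.2, 0.0]))  # True
print(detect_nod([0.0, -0.1, -0.1, 0.0]))  # False: never dipped far enough
```

A production detector would also bound the gesture’s duration and filter sensor noise, but the dip-then-recover state machine is the core idea.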
Backtracks sees endless potential uses for the technology. For instance, a publisher will be able to detect listeners’ vertical and horizontal nods, as well as whether a listener is walking, running, biking, or in a moving or stopped automobile, and customize audio content accordingly.
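The article does not say how a publisher would act on a detected activity; one simple shape it could take is a lookup from activity class to a content variant. The mapping and variant names below are hypothetical, though the activity classes mirror those named in the article:

```python
# Hypothetical mapping from detected activity to a content variant.
CONTENT_BY_ACTIVITY = {
    "walking": "standard-mix",
    "running": "high-energy-mix",
    "biking": "high-energy-mix",
    "automotive": "hands-free-mix",
}

def pick_content(activity: str) -> str:
    """Fall back to the standard mix for unrecognized activities."""
    return CONTENT_BY_ACTIVITY.get(activity, "standard-mix")

print(pick_content("running"))     # high-energy-mix
print(pick_content("stationary"))  # standard-mix
```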
For example, in a “choose your own adventure” listening format, users can select their audio experience almost instantaneously with a nod, without having to touch their device or use voice recognition controls. The technology can also be used to create more immersive listening experiences with responsive, spatially aware audio mixes: if a user turns their head to the left, sounds can become more amplified in the user’s left ear, or a change in the user’s movement or direction can trigger certain content to be played, including head gesture-based advertising formats in which users nod to take an action.
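The left-ear amplification effect could be implemented as a stereo pan driven by head yaw; in Apple’s AVAudioPlayerNode, for example, the `pan` property runs from -1.0 (full left) to 1.0 (full right). A minimal sketch of that mapping, where the sign convention and the 90-degree saturation angle are illustrative assumptions rather than documented Backtracks behavior:

```python
import math

def pan_for_yaw(yaw_radians: float, max_yaw: float = math.pi / 2) -> float:
    """Map head yaw to a stereo pan in [-1.0, 1.0].
    Convention assumed here: positive yaw = head turned left,
    pan -1.0 = full left (as in AVAudioPlayerNode.pan).
    A 90-degree turn saturates the pan; that reference angle
    is an illustrative choice, not a documented value."""
    pan = -yaw_radians / max_yaw
    return max(-1.0, min(1.0, pan))

print(pan_for_yaw(0.0))          # 0.0  (facing forward: centered)
print(pan_for_yaw(math.pi / 4))  # -0.5 (head turned left: sound shifts left)
print(pan_for_yaw(math.pi))      # -1.0 (clamped at full left)
```

In practice the pan value would be updated continuously from the headphone motion stream rather than computed once.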
Backtracks said its technology also has non-gesture applications: built-in detection of Apple CarPlay and Android Auto enables the mid-stream alteration of audio based on a combination of audio and activity data.
“I think it’s fair to say that we’re entering the next era of podcasting,” said Gill. He compared the feature to Netflix’s Bandersnatch, which let viewers interact with content in real time to curate their own individualized streaming experience.
Backtracks’ head motion, gesture recognition, and activity detection technology is part of the recently released Backtracks Native SDKs for Apple and Android. The functionality works across the Apple and Android platforms, including laptops, phones, tablets, watches, and smart TVs.
The Austin-based podcasting company recently announced it has raised $1.6 million in previously undisclosed investor-initiated funding. In recent weeks it also debuted new features, like a “floating player” that allows users of its white label technology to navigate to other sites and have an uninterrupted listening session even as they multitask.
Read Podcast News Daily’s Q&A with Backtracks CEO Jonathan Gill HERE.