The New Artificial Intelligence Of Car Audio Might Improve More Than Just Tunes

Hollywood has perennially portrayed Artificial Intelligence (AI) as the operating layer of dystopian robots who replace unsuspecting humans and create the escalating, central conflict. In a best-case reference, you might imagine a young Haley Joel Osment playing David, the self-aware, artificial kid in Spielberg’s polar-caps-thawed-and-flooded-coastal-cities world (sound familiar?) of AI: Artificial Intelligence who (spoiler alert) only kills himself. Or maybe you recall Robin Williams’s voice as Bicentennial Man, once again a self-aware robot attempting to thrive who (once again, spoiler alert) ends up being his only victim. And, of course, there’s the nearly cliché reference to Terminator and its post-apocalyptic world with machines attempting to destroy humans and, well, (not-so-spoiler alert) lots of victims over a couple of decades. In none of these scenarios, however, do humans coexist with an improved life, let alone enhanced entertainment and safety.

That, however, is the new reality. Artificial Intelligence algorithms can be built into audio designs and continuously refined via over-the-air updates to improve the driving experience. And in direct contradiction to those Hollywood examples, such AI might actually improve a human’s likelihood of survival.

Just For Pleasure

Until recently, all User Interface (UI) development, including audio, has required complex programming by expert coders over the standard thirty-six months of a vehicle program. Sheet metal styling and electronic boxes are specified, sourced and developed in parallel, only to have individual elements calibrated late in development. Branded sounds. Acoustic signatures. All separate initiatives within the same anemic system design that has cost manufacturers billions.

But Artificial Intelligence has allowed a far more flexible and efficient way of approaching audio experience design. “What we’re seeing is the convergence of trends,” states Josh Morris, DSP Concepts’ Machine Learning Engineering Manager. “Audio is becoming a more dominant feature within automotive, but at the same time you’re seeing modern processors become stronger with more memory and capabilities.”

Therein, using a systems-focused development platform, Artificial Intelligence and these stronger processors provide drivers and passengers with a new level of adaptive, real-time responsiveness. “Instead of the historical need to write reams of code for every conceivable scenario, AI guides system responsiveness based on a learned awareness of environmental conditions and events,” states Steve Ernst, DSP Concepts’ Head of Automotive Business Development.

The most obvious way to use such a learning system is “de-noising” the vehicle so that premium audio can be tailored and improved despite ambient changes such as a swap to winter tires. But LG Electronics has developed algorithms running on DSP Concepts’ Audio Weaver platform that enhance a movie’s dialogue during rear-seat entertainment, accentuating the voices against in-movie explosions and thereby allowing the passenger to better hear the critical content.
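To make the idea concrete, here is a minimal sketch of the underlying signal-processing concept: boost the speech band of a soundtrack relative to wideband effects so dialogue stays intelligible. To be clear, this is not LG’s or DSP Concepts’ actual algorithm; the filter order, speech band and boost values are hypothetical.

```python
# Illustrative only: emphasize the dialogue-heavy speech band of a mono mix.
import numpy as np
from scipy.signal import butter, sosfilt

def enhance_dialogue(mix, fs=48_000, speech_band=(300.0, 3_400.0), boost_db=6.0):
    """Boost the speech band of `mix` (a float array) by `boost_db` decibels."""
    sos = butter(4, speech_band, btype="bandpass", fs=fs, output="sos")
    speech = sosfilt(sos, mix)              # isolate the dialogue-heavy band
    gain = 10.0 ** (boost_db / 20.0)        # convert the dB boost to linear gain
    enhanced = mix + (gain - 1.0) * speech  # add the boosted band back into the mix
    peak = np.max(np.abs(enhanced))
    return enhanced / peak if peak > 1.0 else enhanced  # guard against clipping

# One second of "dialogue" (a 1 kHz tone) buried in wideband "explosion" noise.
fs = 48_000
t = np.arange(fs) / fs
mix = 0.2 * np.sin(2 * np.pi * 1_000 * t) + 0.5 * np.random.randn(fs)
out = enhance_dialogue(mix, fs)
```

A production system would do this adaptively, per scene and per seat; the point is simply that dialogue can be lifted out of the effects bed rather than raising the overall volume.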

Another non-obvious aspect is how branded audio sounds are orchestrated in the midst of other noises. Does this specific vehicle require the escalating boot-up sequence to play while other sounds like the radio and chimes are automatically turned down? Each experience can be adjusted.
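As a toy illustration of that orchestration, consider a priority-based “ducking” table that decides which sounds get turned down when a higher-priority sound plays. The source names, priorities and gain values below are invented for the example, not drawn from any production system.

```python
# Hypothetical priority-based ducking: the highest-priority active source plays
# at full level and attenuates everything else by its ducking amount.
DUCK_RULES = {
    "boot_sequence": {"priority": 3, "duck_others_db": -12.0},
    "chime":         {"priority": 2, "duck_others_db": -6.0},
    "radio":         {"priority": 1, "duck_others_db": 0.0},
}

def mixer_gains(active_sources):
    """Return a per-source gain (in dB) for the currently active sources."""
    top = max(active_sources, key=lambda s: DUCK_RULES[s]["priority"])
    duck = DUCK_RULES[top]["duck_others_db"]
    return {s: (0.0 if s == top else duck) for s in active_sources}

print(mixer_gains(["boot_sequence", "radio", "chime"]))
# -> {'boot_sequence': 0.0, 'radio': -12.0, 'chime': -12.0}
```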

More Likely To Thrive

As the world races into both electric vehicles and autonomous driving, the frequency and needs of audible warnings will likely change drastically. For instance, an autonomous taxi’s safety engineer cannot assume the passengers are anywhere near a visual display when a timely alert is required. And how audible is that alert for the nearly 25 million Americans with disabilities for whom autonomous vehicles should open new mobility possibilities? “Audio now isn’t just for listening to your favorite song,” states Ernst. “With autonomous driving, there are all sorts of alerts that are required to keep the driver engaged or to alert the non-engaged driver about things going on around them.”

“And what makes it more challenging,” injects Adam Levenson, DSP Concepts’ Head of Marketing, “are all of the things being handled simultaneously within the car: telephony, immersive or spatial sound, engine noise, road noise, acoustic vehicle alert systems, voice systems, etc. We like to say the most complex audio product is the car.”

For instance, imagine the scenario where a driver has enabled autonomous drive mode on the highway, has turned up his tunes and is blissfully ignorant of an approaching emergency vehicle. At what accuracy (and distance) of siren detection using the vehicle’s microphone(s) does the car alert its quasi-distracted driver? How must that alert be presented to overcome ambient noise and command sufficient attention without needlessly startling the driver? All of this can be tuned via pre-developed models, upfront training with different sirens and subsequent cloud-based tuning. “This is where the overall orchestration becomes really important,” explains Morris. “We can take the output of the [AI’s detection] model and direct that to different places in the car. Maybe you turn the audio down, trigger some audible warning signal and flash something on the dashboard for the driver to pay attention.”
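A hedged sketch of that orchestration logic might look like the following, where the detection model, confidence threshold and action hooks are all hypothetical placeholders rather than anyone’s actual implementation.

```python
# Speculative sketch: route a siren-detection model's output to in-cabin actions
# once its confidence clears a threshold tuned offline and refined via the cloud.
from dataclasses import dataclass

@dataclass
class SirenDetection:
    confidence: float   # model's probability that a siren is present
    bearing_deg: float  # estimated direction of arrival from the mic array

ALERT_THRESHOLD = 0.85  # hypothetical value, tuned against recorded sirens

def orchestrate(detection):
    """Map one detection frame to the actions the car should take."""
    actions = []
    if detection.confidence >= ALERT_THRESHOLD:
        actions.append("duck_media_volume")     # turn the tunes down
        actions.append("play_audible_warning")  # trigger the alert tone
        actions.append(f"flash_dashboard:{detection.bearing_deg:.0f}deg")
    return actions

print(orchestrate(SirenDetection(confidence=0.92, bearing_deg=135.0)))
# -> ['duck_media_volume', 'play_audible_warning', 'flash_dashboard:135deg']
```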

The same holds true for external alerts. For instance, a quiet electric vehicle may have tuned alarms for pedestrians. New calibrations can be created offline and delivered to vehicles as over-the-air software updates as the innovation evolves.

Innovation everywhere. And Artificial Intelligence feeding the utopian experience rather than creating Hollywood’s dystopian world.

Author’s Prediction

Here’s my prediction of the week (and it’s only Tuesday, folks): the next evolution of audio will include a full, instantaneous feedback loop that incorporates the users’ subtle, real-time delight. Yes, much of the current design likely improves the experience, but an ongoing calibration of User-Centered Design (UCD) might be additionally enhanced based upon the passengers’ expressions, body language and comments, thereby individually tuning the satisfaction in real time. All of the enablers are there: cameras, AI, processors and an adaptive platform.
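If you’ll forgive a bit of speculative pseudo-engineering, such a loop might amount to nothing more exotic than the following: score the cabin’s delight from the sensors, nudge the tuning, keep what works. Every function and value here is, of course, invented to illustrate the prediction.

```python
# Purely speculative: a hill-climbing loop that adjusts an EQ profile toward
# whatever settings a (hypothetical) camera/AI "delight" model rates highest.
import random

def delight_score(eq_profile):
    """Stand-in for a model rating passengers' expressions and body language."""
    return random.random()  # a real system would infer this from cabin sensors

def tune_step(eq_profile, best_score):
    """Perturb one EQ band; keep the change only if passengers seem happier."""
    band = random.choice(list(eq_profile))
    candidate = dict(eq_profile, **{band: eq_profile[band] + random.uniform(-1, 1)})
    score = delight_score(candidate)
    return (candidate, score) if score > best_score else (eq_profile, best_score)

profile, best = {"bass_db": 0.0, "mid_db": 0.0, "treble_db": 0.0}, 0.0
for _ in range(20):  # the predicted system would run this continuously
    profile, best = tune_step(profile, best)
```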

Yes, we’ve previously heard of adaptive mood lighting and remote detection of boredom, stress, etc. to improve safety, but nothing that enhances the combined experience based upon real-time learning algorithms fed by all user-pointed sensors.

Maybe I’m extrapolating too much. But just like Robin Williams’s character I’ve spanned two centuries … so maybe I’m also just sensitive to what humans might want.

Source: https://www.forbes.com/sites/stevetengler/2022/09/13/the-new-artificial-intelligence-of-car-audio-might-improve-more-than-just-tunes/