In a new post published on its Machine Learning Journal portal, Apple explains in detail, and at times quite technically, how Siri on HomePod has been designed to work even in poor acoustic conditions. How does Apple's smart speaker behave when another source is playing music at high volume, or when the vacuum cleaner is running?

Unlike Siri on the iPhone, which works with the microphone held close to the user's mouth, Siri on HomePod must work well even in challenging acoustic environments. Users must be able to activate Apple's voice assistant from many positions, near or far. A complete online system capable of overcoming all these environmental problems requires tight integration of several multichannel signal-processing technologies.

Siri on HomePod uses machine learning algorithms

To accomplish this, Apple says the team working on Siri designed a multichannel signal-processing system for HomePod that uses machine learning algorithms to remove echo and background noise, and to separate simultaneous audio sources in order to eliminate interference.

The system uses HomePod's six microphones and runs continuously on the Apple A8 chip, even when the device is in its lowest power state to save energy. The multichannel filter can therefore constantly adapt to changing noise conditions and to the movements of the user interacting with the HomePod.
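Apple has not published the filter itself, but a normalized LMS (NLMS) adaptive filter is a standard building block for this kind of continuously adapting cancellation. The sketch below is purely illustrative: the function name, tap count, and step size are assumptions, and a real multichannel system would run one such filter per microphone and playback channel.

```python
import numpy as np

def nlms_cancel(reference, mic, num_taps=64, step=0.5, eps=1e-8):
    """Adaptively estimate the echo of `reference` in `mic` and subtract it.

    reference : far-end signal (e.g. the music the speaker is playing)
    mic       : microphone signal containing echo plus near-end speech
    Returns the residual signal, i.e. `mic` with the estimated echo removed.
    """
    w = np.zeros(num_taps)                  # adaptive filter taps
    out = np.zeros(len(mic))
    for n in range(num_taps, len(mic)):
        x = reference[n - num_taps + 1:n + 1][::-1]  # most recent samples first
        y = w @ x                            # predicted echo sample
        e = mic[n] - y                       # residual after cancellation
        w += step * e * x / (x @ x + eps)    # normalized LMS tap update
        out[n] = e
    return out
```

Because the taps are updated on every sample, the filter keeps tracking the acoustic path as conditions in the room change, which is the behavior the article describes.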

The three main enemies of HomePod, as of other smart speakers, are echo, reverberation and noise. These are the solutions Apple adopted to counteract the three phenomena:

  • Echo cancellation: since the speakers sit close to HomePod's microphones, music playback can be significantly louder than the "Hey Siri" voice command, especially when the user is not in the immediate vicinity of the device. To combat this echo, Siri on HomePod implements a multichannel echo-cancellation algorithm.
  • Reverberation removal: when the user says "Hey Siri" from farther away, reverberation tails build up inside the room, which can reduce the quality and intelligibility of the voice command. To counteract this acoustic phenomenon, Siri on HomePod continuously monitors the room's characteristics and removes late reverberation while preserving the direct-sound and early-reflection components in the microphone signals.
  • Noise reduction: a voice command spoken at a distance can be contaminated by noise from household appliances, outside sounds, and so on. To combat this, HomePod uses state-of-the-art speech-enhancement methods that create a fixed filter for each utterance.
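The idea of a "fixed filter for each utterance" can be sketched with a standard mask-based spectral gain: estimate the noise spectrum once, derive one gain per frequency bin, and apply that same gain to every frame of the utterance. This is a minimal illustration, not Apple's method; the function names and the assumption that the first frames contain only noise are hypothetical.

```python
import numpy as np

def per_utterance_mask(frames, noise_frames=5, floor=0.1):
    """Compute one fixed spectral gain from an utterance's STFT magnitudes.

    frames : 2-D array (num_frames, num_bins) of magnitude spectra
    The first `noise_frames` frames are assumed to contain noise only.
    Returns a single gain per frequency bin for the whole utterance.
    """
    noise_psd = np.mean(frames[:noise_frames] ** 2, axis=0)   # noise estimate
    signal_psd = np.mean(frames ** 2, axis=0)                 # overall power
    # Wiener-style gain, clamped to a spectral floor to limit distortion
    return np.clip(1.0 - noise_psd / np.maximum(signal_psd, 1e-12), floor, 1.0)

def enhance(frames, gain):
    # The same fixed filter is applied to every frame of the utterance.
    return frames * gain
```

Bins dominated by steady noise (a vacuum cleaner, rain) are attenuated toward the floor, while bins carrying the voice command pass through largely unchanged.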

Apple says it tested HomePod's multichannel signal-processing system in a variety of acoustic conditions, including music and podcast playback at different volumes, continuous background noise such as rain or conversation, and noise from home appliances such as vacuum cleaners, hair dryers, and microwaves. During these tests, Apple frequently changed HomePod's location to cover a range of use cases.

The post concludes with a summary of Siri's performance measurements on HomePod, with graphs showing that Apple's multichannel signal-processing system resulted in greater accuracy and fewer errors. To learn more, you can read the complete article, in its original language, on Apple's Machine Learning Journal.
