DIY audio sensor

I wish to detect events that have a characteristic audio signal and display these as a binary sensor (detected/not_detected) in HA. My hope is that a simple microphone could then be used to detect a wide range of events, for example doors slamming shut or when the tumble dryer is on.
I have purchased a cheap (5 euro) audio sensor that outputs an analog signal and hooked it up to an Arduino, and can visually detect events, as shown below. Obviously the bandwidth isn’t going to be fantastic, but I can sum the signal within a second to produce an indicator of sound level. I would like to know if anyone has attempted something similar, or can recommend a package (Python or otherwise) to detect/characterise audio events with this kind of hardware?
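For what it’s worth, a minimal sketch of that one-second summing idea, assuming the Arduino simply prints each raw analog reading on its own line over serial (the port name, baud rate and baseline below are placeholders to adjust):

```python
import time
import serial  # pip install pyserial

PORT = "/dev/ttyUSB0"   # hypothetical port; adjust to wherever your Arduino shows up
BAUD = 115200
BASELINE = 512          # mid-scale of a 10-bit ADC; measure yours during silence

with serial.Serial(PORT, BAUD, timeout=0.1) as ser:
    while True:
        level = 0
        window_end = time.time() + 1.0
        while time.time() < window_end:
            raw = ser.readline().strip()
            if raw.isdigit():
                # sum the deviation from the resting level as a crude loudness indicator
                level += abs(int(raw) - BASELINE)
        print("sound level over the last second:", level)
```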
Cheers


I once attempted to do something similar, but at least in my case I wasn’t really pleased with the results.
I started doing it in JavaScript so I could embed it into the tablets I have around the house. But the microphones in the tablets weren’t good enough. Besides that, it’s actually quite hard to identify specific sounds. Instead of using the waveform I went with the spectrogram, because for me it captures the pitch of a sound in a way that seems easier to process: http://www.smartjava.org/content/exploring-html5-web-audio-visualizing-sound

My plan was to make some sort of frequency fingerprint (essentially an array with 0 for silence and 1 if a given threshold was exceeded, from 0 Hz up to whatever the device is capable of) and match it against the current input. But as it turns out, you have to use quite a high “resolution” to get the detail needed to identify a sound, and for that the tablets I have are underpowered. So I guess my JavaScript route was destined to fail.
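For anyone curious, a rough sketch of that fingerprint idea in Python rather than JavaScript, assuming a fixed sample rate and a hand-picked magnitude threshold (both placeholders):

```python
import numpy as np

RATE = 44100              # assumed sample rate
THRESHOLD = 50.0          # magnitude above which a frequency bin counts as "active"

def fingerprint(samples: np.ndarray) -> np.ndarray:
    """Turn one block of audio into a 0/1 array over frequency bins."""
    spectrum = np.abs(np.fft.rfft(samples))
    return (spectrum > THRESHOLD).astype(np.uint8)

def matches(current: np.ndarray, reference: np.ndarray, max_mismatch: float = 0.05) -> bool:
    """Allow a small fraction of bins to disagree with the stored fingerprint."""
    return np.mean(current != reference) <= max_mismatch

# reference = fingerprint(recorded_whistle)        # captured once, offline
# if matches(fingerprint(live_block), reference):  # ...then publish via MQTT
```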
What I originally wanted to achieve: instead of vocally expressing what I want to happen (“Alexa, do this” etc.), I wanted to just whistle or make some clicking patterns which would then publish something via MQTT.

The Idea came from here: https://www.youtube.com/watch?v=glZnkpIDWSE

Thanks for the link, that might be my new favourite home-automation video (sorry @bruhautomation) :slight_smile:
He appears to be using a Pi with a regular microphone, but clearly the whistle is required to give signals that are both loud and have a well-defined pitch.

Perhaps a simpler approach would be to just detect patterns of sound, as he does, without worrying about pitch. For example, when I tap on my wooden desk, the signal detected by the microphone (flush against the desk) is very clear. Perhaps a triple tap on the desk could trigger an automation.
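A hedged sketch of what that could look like with PyAudio and a microphone on the default input device - the RMS threshold and two-second window are guesses to tune against your own desk:

```python
import time
import numpy as np
import pyaudio

RATE, CHUNK = 44100, 1024
TAP_THRESHOLD = 3000        # RMS level that counts as a tap (int16 samples)
WINDOW = 2.0                # seconds within which three taps must land

pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                 input=True, frames_per_buffer=CHUNK)

taps = []
while True:
    block = np.frombuffer(stream.read(CHUNK, exception_on_overflow=False), dtype=np.int16)
    rms = np.sqrt(np.mean(block.astype(np.float64) ** 2))
    now = time.time()
    # debounce: ignore loud blocks that follow a tap too closely
    if rms > TAP_THRESHOLD and (not taps or now - taps[-1] > 0.1):
        taps.append(now)
    taps = [t for t in taps if now - t < WINDOW]
    if len(taps) >= 3:
        print("triple tap detected")   # trigger the automation here
        taps.clear()
```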

Returning to the original project pitch: for characterising particular sounds I may need a dedicated sound card + high-bandwidth microphone.

Unless the signal you are after is only characterizable by very subtle differences in baseline noise, I think you’ll find that you won’t need a very expensive microphone at all. I’ve used the PyAudioAnalysis library for a project which detects specific sounds made during the coffee roasting process and reports the confidence of “hearing” those sounds in a coffee roaster control application.

PyAudioAnalysis is, as its name suggests, a Python library, but not knowing Python, my project consists entirely of DOS batch files. I’m certainly not suggesting you should go that way, rather that it’s possible to use even the stupidest of development environments to make something work here. The trick is to record a large enough corpus of samples of your target sound along with environmental sounds, split them up into categories such as “target sound” and “not-target sound” (or as many categories as you want), and then train one or more models on your existing corpus.

The process is surprisingly straightforward, and I suspect it’d be even more so on a Raspberry Pi, as there are ALSA-based tools that allow for real-time audio stream processing which I wasn’t able to use under Windows.
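To make that concrete, here is a minimal sketch of the corpus-and-train workflow with pyAudioAnalysis - the folder layout and model name are placeholders, and the call names follow the library’s current README, so check them against your installed version:

```python
from pyAudioAnalysis import audioTrainTest as aT

# each folder holds WAV clips of one category
aT.extract_features_and_train(
    ["corpus/target_sound", "corpus/not_target_sound"],
    1.0, 1.0,                              # mid-term window / step (seconds)
    aT.shortTermWindow, aT.shortTermStep,  # short-term analysis defaults
    "svm", "sound_model", False)

# later, classify a new recording and inspect the per-class confidence
class_id, probabilities, class_names = aT.file_classification(
    "samples/unknown.wav", "sound_model", "svm")
print(dict(zip(class_names, probabilities)))
```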


I have an implementation of this running now.

I know this thread is quite old, but much googling didn’t turn up an implementation that accomplished what I wanted. I’m new to Home Assistant, and this seemed as good a place as any to chime in.

I have an old doorbell that is part of an intercom system from M&S (Linear). Like most intercoms, it’s completely proprietary, there is zero documentation about it, the manufacturer no longer makes it, and may in fact be defunct. But it’s actually useful, and I don’t really want to replace it because I’m cheap and hate sheetrock work. So, without any hope of patching in directly, I decided to “listen” for the doorbell.

Many searches led me to this thread. Thanks @danielperna84 for the Sufficiently Advanced hint. Following that video led me to hack together a Python script that listens for my doorbell and communicates with Home Assistant via MQTT. Code is on GitHub if anyone is interested. It’s rough and still needs cleaning up, but it does work, and should be modifiable for simple sounds. I’m going to try to listen for smoke alarms next…

Thanks go to Allen Pan (Sufficiently Advanced), Benjamin Chodroff (whose work Allen’s was based on), and darkerego for the code. Thanks to you all for this thread and the inspiration and direction.



I was looking to do the same thing and stumbled on your post. I checked your code and am trying to use it to recognise the “street” intercom in my apartment, which is part of a whole-building system so I cannot replace it with a smart one, in order to get notifications.
I am having some trouble tuning it to the ringtone of the intercom; can you give me some pointers on how to tune it?

I “tuned” mine using the information from Benjamin Chodroff here:
https://www.benchodroff.com/2017/02/18/using-a-raspberry-pi-with-a-microphone-to-hear-an-audio-alarm-using-fft-in-python/ and Allen Pan’s modification here: https://www.raspberrypi.org/blog/zelda-home-automation/.

Basically, I recorded the doorbell with Audacity, found the frequencies I needed, then started playing around with durations to try to get it to match. There is some maths to the duration configuration - I got close and then tweaked by trial and error. (I suggest you do that part without anyone else home - just sayin’.)
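In case it helps with the tuning, here is a rough sketch of that frequency-plus-duration matching (not the actual script from the repo): the target frequency, tolerance and block count are placeholders you would read off Audacity and then tweak.

```python
import numpy as np
import pyaudio

RATE, CHUNK = 44100, 4096
TARGET_HZ, TOLERANCE_HZ = 2093.0, 30.0   # hypothetical doorbell tone
MIN_BLOCKS = 5                            # consecutive matching blocks ~ duration

pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                 input=True, frames_per_buffer=CHUNK)

hits = 0
while True:
    block = np.frombuffer(stream.read(CHUNK, exception_on_overflow=False), dtype=np.int16)
    # windowed FFT, then find the dominant frequency of this block
    spectrum = np.abs(np.fft.rfft(block * np.hanning(len(block))))
    peak_hz = np.argmax(spectrum) * RATE / len(block)
    hits = hits + 1 if abs(peak_hz - TARGET_HZ) < TOLERANCE_HZ else 0
    if hits >= MIN_BLOCKS:
        print("doorbell tone held long enough")  # publish to MQTT here
        hits = 0
```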

What problems are you having? Happy to try to help if I can…

Thanks for getting back to me!
I was having difficulty tuning to the frequencies of the intercom (there are mainly two), but for some reason, even though they were detected in debug mode, the script never reported the tone as played.
In the end I kind of solved it: there is one specific frequency that stands out in my intercom, which is pretty loud and repeats in a pattern.
So I reverted to Benjamin Chodroff’s script and tuned it to that specific frequency, timing and pattern, snatched the MQTT client part of the script, and got something that so far looks reliable, although I could not perform too many tests as the intercom is pretty loud and I did not want to disturb the neighbours. Also, in these pandemic times, not many visitors!
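For anyone else stitching this together, the MQTT piece can be as small as the sketch below (the broker address and topic are placeholders, and it assumes the paho-mqtt 1.x constructor), with a Home Assistant MQTT binary sensor subscribed to the same topic:

```python
import paho.mqtt.client as mqtt

client = mqtt.Client()                        # paho-mqtt 1.x style constructor
client.connect("homeassistant.local", 1883)   # hypothetical broker address
client.loop_start()

def report(detected: bool):
    # a Home Assistant MQTT binary sensor can listen on this topic
    client.publish("home/intercom/ring", "ON" if detected else "OFF",
                   qos=1, retain=True)
```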

Thanks again for the offer to help though!

Hi,
could you write exactly how you did it?

Hello all, @vrazumihin I am also interested to know the different Linux commands to apply to follow the kbowman process; can someone who has succeeded describe it a little more?
Thank you in advance.

Thanks for this!
I forked your repo and was quickly able to detect the beep pattern from my rice cooker.

I’m not finished yet (I suck at Python), but with a little bit of tweaking it’ll work.
