I’m also interested in this with WearOS.
As to how a phone handles sleep, I completely agree, it’s 90% bogus. How can your phone know you are asleep if you aren’t a millennial? Some of us put our phones down for hours at a time.
A watch has sensors and firmware dedicated to sensing sleep state, not just motion/accel data.
So far all I have found are third-party loggers which take the sensor data and publish it to multiple fitness-tracking cloud apps. At the other end of the spectrum there are the WearOS APIs: transfer the data to the phone and then publish it via another Android app over MQTT, maybe…
Some of the data off the sensors really is “raw” sensor data. It’s possible, for example, that you won’t get a heart rate; what you’ll get is a stream of values from the light detector, which needs at least some signal processing to convert into a heart rate. The values from the SpO2 sensor might be even harder to process. To summarise: I believe a lot of the raw signal processing for the sensors lives in the proprietary code of the fitness trackers. As to whether there is an open source library for the signal processing that needs to be done on the watch, I don’t know.
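To give a feel for what that processing involves, here is a hypothetical sketch: turning a raw PPG (light-sensor) stream into a heart rate by counting upward crossings of the signal mean. Real firmware does far more filtering and artifact rejection; the method, sample rate, and names here are my assumptions, not the watch’s actual algorithm.

```java
public class PpgSketch {
    /**
     * samples: raw PPG values from the light detector (assumed units);
     * sampleRateHz: how often the sensor was read.
     */
    public static int estimateHeartRateBpm(double[] samples, double sampleRateHz) {
        if (samples.length == 0) return 0;

        // use the mean of the window as a crude beat threshold
        double mean = 0;
        for (double v : samples) mean += v;
        mean /= samples.length;

        // count each upward crossing of the mean as one beat
        int beats = 0;
        boolean above = false;
        for (double v : samples) {
            if (v > mean && !above) { beats++; above = true; }
            else if (v <= mean) above = false;
        }

        double seconds = samples.length / sampleRateHz;
        return (int) Math.round(beats / seconds * 60.0);
    }
}
```

Even this toy version needs a sensible sample rate and a clean signal; motion artifacts are what make the proprietary versions non-trivial.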
The idea would be for the periodic scans to run through the biometric sensors, using a library (if we can find one) to convert raw sensor data into heart rate, motion, sleep state, stress level, etc. Then use an Android data transfer API to send that data as JSON to the phone, pick it up there (by subscribing to that data channel), and publish it over MQTT.
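The JSON hop in the middle might look something like this. The field names and payload shape are my own invention, not a WearOS API: the watch side would serialise a reading and hand the bytes to the data transfer layer, and the phone-side subscriber would publish the same string over MQTT.

```java
public class BiometricsPayload {
    /**
     * Build the JSON string for one periodic reading. Hand-rolled here to
     * keep the sketch dependency-free; a real app would use a JSON library.
     */
    public static String toJson(int heartRateBpm, String sleepState, long timestampMs) {
        return "{\"heartRateBpm\":" + heartRateBpm
             + ",\"sleepState\":\"" + sleepState + "\""
             + ",\"timestampMs\":" + timestampMs + "}";
    }
}
```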
The periodicity and efficiency of this will determine the battery impact.
For instance, a desirable event might be “hasStartedSleeping” or “hasAwoken”. A half-hourly update would then probably be sufficient for many automation tasks. If you want a more instant response, like turning the lights off, I think you will struggle.
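Deriving those edge events from periodic state samples is simple. The event names are the ones above; the “asleep”/“awake” state strings are an assumption about what the biometrics library would report.

```java
import java.util.ArrayList;
import java.util.List;

public class SleepEvents {
    /** Turn a sequence of periodic sleep-state samples into edge events. */
    public static List<String> deriveEvents(List<String> states) {
        List<String> events = new ArrayList<>();
        for (int i = 1; i < states.size(); i++) {
            boolean was = states.get(i - 1).equals("asleep");
            boolean now = states.get(i).equals("asleep");
            if (!was && now) events.add("hasStartedSleeping");
            if (was && !now) events.add("hasAwoken");
        }
        return events;
    }
}
```

With half-hourly sampling, an event fires at most 30 minutes after the state actually changed, which is the latency trade-off mentioned above.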
Once you have spent the battery on the watch doing the initial signal processing, what you do on the phone will also matter. Android (and Apple) have fairly aggressive power-saving systems, and expecting the WiFi to be on, or even for the device to wake up for your app, is… optimistic. Apps may have to schedule their transfers and simply sleep, waiting for the WiFi to become available.
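That “schedule and wait” pattern boils down to buffering readings and flushing only when connectivity is actually available. On Android this role is normally played by WorkManager with a network constraint; the `isOnline` supplier below is an assumption standing in for that check, and the class is a sketch of the pattern rather than a real implementation.

```java
import java.util.ArrayDeque;
import java.util.function.BooleanSupplier;
import java.util.function.Consumer;

public class DeferredPublisher {
    private final BooleanSupplier isOnline;   // stands in for a connectivity check
    private final Consumer<String> publish;   // stands in for the MQTT publish
    private final ArrayDeque<String> queue = new ArrayDeque<>();

    public DeferredPublisher(BooleanSupplier isOnline, Consumer<String> publish) {
        this.isOnline = isOnline;
        this.publish = publish;
    }

    /** Buffer a reading; nothing is sent yet. */
    public void submit(String payload) { queue.add(payload); }

    /** Called periodically (e.g. from a scheduled job); no-op while offline. */
    public void tick() {
        if (!isOnline.getAsBoolean()) return;
        while (!queue.isEmpty()) publish.accept(queue.poll());
    }
}
```

The point is that readings survive the offline gaps and arrive in order once the radio comes back, at the cost of latency.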
In short, doing it from the ground up would be pretty difficult. What we need to do is find off-the-shelf components for as much of the chain of communication as possible, definitely starting with getting pre-processed values for biometrics as opposed to raw sensor data. Another component would be a generic data transfer or event API; maybe Tasker / WearTasker could be used or modified for that. Similarly, Tasker could be employed on the phone for responding to state changes and events based on the data received from the watch.
One interesting class of application will be my next research focus: sensor aggregators. Some fitness enthusiasts have more than one sensor, not just a watch: chest-strap heart rate sensors, blood pressure monitors, and so on. Apps which aim to aggregate this data into a single display will have code and functionality we could use.