I think it would be interesting to make Assist behave differently depending on who is asking (from the mobile app, for example). The advantage would be that it could call services (or run other actions) with parameters specific to each user in the house!
For example: Play music on Spotify with Spotcast using the account of the user making the request.
It is something most of us would love, but it is a resource-heavy feature.
The voice print could be checked on the initial wake word, but then nearly all of the low-power devices currently used for wake-word detection would no longer be up to the task.
It could also be run on the speech after the wake word, but that would slow the response down considerably, or you would need some heavy cloud computing to get a reasonable response time.
Just handling the current local voice assistant pipeline can easily max out a high-end GPU.
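To make the second approach concrete, here is a rough sketch of identifying the speaker from the post-wake-word utterance with a pretrained speaker-embedding model (Resemblyzer here, but any embedding model would do). The enrollment files, user names, and threshold are made-up placeholders:

```python
# Rough sketch only: identify the speaker of the post-wake-word utterance
# with a pretrained speaker-embedding model. Assumes the Resemblyzer
# library (pip install resemblyzer); the enrollment files, user names and
# threshold below are hypothetical.
from pathlib import Path

import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

encoder = VoiceEncoder()  # CPU by default; a GPU mainly buys latency

# Enroll each user once from a few seconds of their recorded speech.
enrolled = {
    name: encoder.embed_utterance(preprocess_wav(Path(f"{name}_enroll.wav")))
    for name in ("alice", "bob")
}

def identify(utterance_wav: str, threshold: float = 0.75):
    """Return the best-matching enrolled user for one utterance, or None."""
    emb = encoder.embed_utterance(preprocess_wav(Path(utterance_wav)))
    # Resemblyzer embeddings are L2-normalised, so a dot product gives
    # the cosine similarity between the utterance and each voice print.
    scores = {name: float(np.dot(emb, ref)) for name, ref in enrolled.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None
```

The embedding step runs once per command, which is exactly the extra per-request load described above.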
For the moment, it’s true that this functionality is a little impractical on local hardware… But it could be implemented initially through the mobile application, and that wouldn’t require any extra computational resources, since user recognition would be based on the user’s app session (their HA account).
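As a rough sketch of that session-based idea: once the HA user behind a companion-app request is known, a small mapping is all that is needed to pass the right account to spotcast via Home Assistant's documented REST endpoint `POST /api/services/<domain>/<service>`. The user ids, token, and account names below are placeholders, and the `account` field assumes spotcast's multi-account support:

```python
# Rough sketch only: dispatch a music request to the Spotify account of
# the HA user who made it. User ids, token and account names are
# hypothetical placeholders.
import requests

HA_URL = "http://homeassistant.local:8123"
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"

# HA user id -> spotcast account name, as configured in spotcast.
USER_ACCOUNTS = {
    "a1b2c3d4e5f6": "alice",
    "0f9e8d7c6b5a": "bob",
}

def play_for_user(user_id: str, uri: str, device_name: str) -> None:
    """Start playback on the Spotify account of the requesting HA user."""
    account = USER_ACCOUNTS.get(user_id)
    if account is None:
        raise ValueError(f"no Spotify account mapped for HA user {user_id}")
    resp = requests.post(
        f"{HA_URL}/api/services/spotcast/start",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"account": account, "uri": uri, "device_name": device_name},
        timeout=10,
    )
    resp.raise_for_status()
```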
I agree that user recognition for voice assist on personal devices would be nice and probably fairly easy to implement.
It just will not help the many people who also have fixed-location voice assist devices spread around the house, since those cannot be handled in the same way.
I have thought about using different wake words for different people as another option, but again that needs more processing power.
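As a sketch of what that could look like with openWakeWord, which can score several wake-word models on each audio frame (the per-user model files and the mapping here are hypothetical):

```python
# Rough sketch only: the "one wake word per person" idea with openWakeWord.
# Assumes each user has trained their own wake-word model; file names and
# the mapping are made up.
import numpy as np
from openwakeword.model import Model

WAKEWORD_TO_USER = {
    "hey_alice": "alice",  # keys match the model file names (minus extension)
    "hey_bob": "bob",
}

model = Model(wakeword_models=["hey_alice.tflite", "hey_bob.tflite"])

def detect_user(frame: np.ndarray, threshold: float = 0.5):
    """Feed one 80 ms, 16 kHz int16 frame; return the matched user, if any."""
    scores = model.predict(frame)  # {model name: activation score}
    best = max(scores, key=scores.get)
    return WAKEWORD_TO_USER.get(best) if scores[best] >= threshold else None
```

The cost scales with the number of per-user models scored on every frame, which is where the extra power goes.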
With the pace at which things are evolving, though, this should be possible within a year or two.