Demo: Genie, privacy-preserving virtual assistant by Stanford

Last week we hosted the State of the Open Home and it included a demo of Genie.

Genie is an open, privacy-preserving virtual assistant by Stanford OVAL. During the impressive demo they showed its latest capabilities. The demos ran on a Baidu speaker with custom firmware and on a Pi Zero. In both cases it connects to the Genie server, running as an official Home Assistant add-on, to do its magic.

Genie is the successor to the Almond project. With the help of various grants and sponsors, Stanford is working on making Genie ready for general use.

If you want to learn more, check out the Genie website and the getting started guide on how to make your own. To get in touch with other Genie users and their dev team, check out their Discord or community forums.

If you end up using Genie at home, don’t forget to share the love and share your demos and tutorials.


This is a companion discussion topic for the original entry at https://www.home-assistant.io/blog/2021/12/21/stanford-genie/

Nice work. We are waiting for support for more spoken languages in HA.

In both cases it connects to the Genie server running as an official Home Assistant add-on to do its magic.

to do its magic.

This is not true. They say in the video that, due to the processing required, both the text-to-speech/speech-to-text and their neural model run on a cloud-based service. Please don’t mislead.

What video? If you mean the state of the open home, can you give a timestamp?


The blog post this thread is sourced from: Demo: Genie, privacy-preserving virtual assistant by Stanford - Home Assistant

Source video: State of the Open Home 2021 - YouTube, at timestamp 2:55:40

DrZzs

Is all this processed locally? Like, what part of it is local? Is there any part that is on a server somewhere else?

Stanford guy on left

So, the entire demo was, the Genie server is running on that Home Assistant Blue.
There are two pieces of it that do run in the cloud, and that’s a resources thing. That’s the text-to-speech and speech-to-text.

… further talk

and then the second piece is the neural model,
the semantic parser we were describing; that also requires some significant resources to be deployed

… further talk

Anybody that doesn’t want to run our code off our cloud, they could get rid of it and use their own instance.
You would just need some significant hardware for it, like a workstation or a server, something with GPU processing available and a decent amount of memory.

As you can see (refer to the video for the original source), this is not possible on any Pi, and I assume not for most HA users. I run my HA on a decent(ish) server but don’t have GPUs or excessive memory. To suggest it can run locally, or to leave out this cloud element (which they further explain runs on Stanford servers and Microsoft cloud), is misleading at best given it’s billed as “privacy-preserving”. If my voice is going to a remote system to become text, that text is sent to another remote system to be analysed, and the resulting text goes out again to become voice, then the important aspects I’m using it for are being shared externally and are not private. That’s like Google/Amazon saying everything runs on device, except the voice and its processing, which happen in the cloud.

I think this is a great project and would love to see it optimised, and I would consider a GPU for it. But please (to the author) be upfront about these things.


Hey, thanks for that. It’s hard to sit through a long video to pick out the relevant stuff.

Is there supposed to be an official add-on for Genie? I am using a supported supervised setup on amd64 with the latest (2021.12.5) core, but Almond still shows up in the add-on store, and there’s no sign of Genie. I am not sure how often the Supervisor checks GitHub (or wherever it looks) for new/changed add-ons, so it may simply be a timing issue.

This is awesome! I installed Genie - Edge and started playing with it right away!! Fantastic stuff, tho I felt like I should point out (even tho I know it’s Edge) that Genie is rubbish at math. I am extremely impressed overall tho and can’t wait to see the progression of Genie!

[screenshot of Genie’s math response]


Where/how did you get the add-on? I see only Almond in the store.

These are the instructions I followed to get Genie - Edge. The documentation mentions that the non-Edge version is available as an official add-on, but I didn’t see it, so I went with Edge.

Thanks. With Almond I get “Your current location is unknown” in response to “what is 20 + 2” lol


“What is 20 + 2… on Earth?”

I get Dad Jokes, but I did with almond too. Turns lights on and off.

Just because you don’t bother to run a proper server in your home, don’t assume everyone is like you. I run HA in Docker on an Unraid server that has 20TB of storage, 32GB of RAM, dual Nvidia 760s (old, but perfect for running Windows and Linux VMs), etc. There are PLENTY of HA users that could easily run the backend for this.

In fact, I have no idea why anyone would think to run HA on a Pi except for pure testing purposes prior to doing it more properly. No matter how user friendly HA is overall, it’s not (and hopefully never will be) purely plug and play for someone with zero technical skills.

Are you ok?

Please take the effort to read my comment fully before responding to it. You’ll see that I don’t run on a Pi.

Yes, you insinuated that they were not up front about whether running fully locally is optional. And yet they plainly state what would be needed to do so. You then complain about it needing any kind of decent hardware, which advanced capabilities such as this clearly require. So I’m fine. But you, Sir, seem to just like to whine for no reason.

Please stop. This is hostile and toxic behaviour that isn’t appropriate for this community forum.

I said.

This is not possible on any Pi, and I assume not for most HA users. I run my HA on a decent(ish) server but don’t have GPUs or excessive memory.

Then you jump to attacking me with.

Just because you don’t bother to run a proper server in your home, don’t assume everyone is like you.

I’m sorry that I don’t have the luxury of such a quality server as the one you’re boasting about. I’m sorry that there is a large Pi user base; even the HA Amber uses a Pi compute module, which you’ve made very clear you think is stupid.

There is no need to attack me with “you don’t bother to”, as if I’m too lazy or too stupid. I never asked for your input on what platform to run, so if it’s not going to be friendly or constructive, then please cease.

In fact, I have no idea why anyone would think to run HA on a Pi except for pure testing purposes prior to doing it more properly.

Good luck to every HA Amber user out here.

As I do not wish to argue with a toxic stranger I am going to block you. I hope things get better for you.


This post was flagged by the community and is temporarily hidden.

Just because you run a “proper” server in your home, don’t assume everyone is like you.

I personally run my HA install off a Pi 3B.
Nabu Casa themselves feel that SBCs are a good platform for HA, given they have two products that use them (the ODROID-N2+ in the Blue and the Pi CM4 in the Amber/Yellow).
And according to https://analytics.home-assistant.io/, over 50% of HA installs run on a Pi.

So plenty of people think to run HA on a Pi, and thus there are plenty of users that couldn’t easily run the backend for this.

And as someone who gifted HA last Christmas, hopefully HA will become plug and play for someone with zero technical skills (it’ll save me some time! :laughing:)
