You asked about other people's uses for having AI. I've given you mine - my honest feeling is that it improves my user experience in a number of ways: it makes interacting with voice assistants easier and more error-tolerant (it also understands common mis-hearings, homophones, etc.).
If you don't like it, you don't have to use it - that's the joy of Home Assistant!
Is the AI going to turn on the fan, or turn down the heating? Or both?
In its current state, probably both. If I use a declarative statement like "it's too hot", the implication is that I am happy for the AI to make decisions on my behalf.
Privacy: Background: we have a Llama3-8b running privately via the gpt4all API. Whisper (medium/German) translates satisfactorily and quickly. At the moment the hardware is still modest: 2x Xeon (20 cores) and 48 GB RAM. But the local LLMs are definitely capable of holding a conversation within a reasonable scope. We are now also upgrading to GPU support.
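Since the question below is whether an RTX 2070 8GB can hold an 8B model, a rough back-of-envelope estimate helps. This is a sketch, not a measurement: the 0.5 bytes/weight figure assumes a Q4_0-style quantization, and the fixed overhead allowance for KV cache, activations and the CUDA runtime is a guess.

```python
def model_vram_gb(params_billion, bytes_per_weight, overhead_gb=1.5):
    """Estimate VRAM needed for inference: raw weights plus a rough
    allowance for KV cache, activations and runtime (overhead_gb is a guess)."""
    weights_gb = params_billion * bytes_per_weight  # 1e9 params * bytes -> GB
    return weights_gb + overhead_gb

# Llama3-8B at ~0.5 bytes/weight (4-bit): about 4 GB of weights.
print(model_vram_gb(8, 0.5))  # ~5.5 GB -> should fit an 8 GB RTX 2070
print(model_vram_gb(8, 2.0))  # ~17.5 GB -> fp16 clearly does not fit
```

By this estimate a 4-bit quantized 8B model should fit in 8 GB with some headroom, while anything near fp16 will not; long contexts eat into that headroom because the KV cache grows with context length.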
The LLM is to be moved into VRAM. The tasks of our assistant are growing immensely. Asynchronous APIs such as Telethon in particular take up a relatively large amount of computing power, as does the chat UI under Tkinter. On Telegram our assistant responds, much to the delight of some; there the chat is supported in both text and voice mode. To save computing power, I initially opted for MBROLA voices (via espeak-ng).

Weather and news APIs now reveal the other side of the coin: a strictly local LLM is isolated, with all that entails! But just like you, our privacy is important to us. We shape its memory by adding old chat histories (extracted, because the LLM cannot digest them without limits); the amount of private detail that accumulates over time is enormous. We also don't want the morning photos taken for authentication by face_recognition to end up in some cloud.

The hardware without a GPU cost us €300, not $3000! To achieve enough tokens/sec, however, another €1000 would have to be invested. But we'll first try an RTX 2070 8GB (will that be enough for an 8b model? We'll see).

ps: this is a German-to-English translation!
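On extracting old chat histories because the LLM cannot digest them without limits: a minimal sketch of one way to do it (a hypothetical helper, not the poster's actual code) is to keep only the newest messages that fit a fixed context budget.

```python
def trim_history(messages, budget_chars=4000):
    """Walk backwards from the newest message and keep as many
    whole messages as fit within budget_chars characters."""
    kept, used = [], 0
    for msg in reversed(messages):
        if used + len(msg) > budget_chars:
            break
        kept.append(msg)
        used += len(msg)
    return list(reversed(kept))

# The oldest message is dropped once the newer ones use up the budget.
history = ["a" * 3000, "b" * 3000, "c" * 1000]
print(len(trim_history(history)))  # keeps only the two newest messages
```

A token-based budget (counted with the model's own tokenizer) would be more accurate; a character count is just the simplest stand-in.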