Mistral AI API - Custom Integration

Hello everybody,

I just created my first custom integration. I had never developed an integration for Home Assistant before, so things might be rough, but it works and I think people could benefit from it.

So, this is the Mistral AI API for Home Assistant.

Mistral AI is a (currently) free European (French) AI service. It features several fine-tuned models for text generation, image generation, code generation and more.

The integration allows you to send prompts either to Mistral AI itself or to one of your agents (which you define on their site).

All you need is to sign up with them and get an API key (all free).
The integration then offers a new service called mistral_ai_api.send_prompt

You can input the prompt, define the model you wish to use (or decide to use an agent via an agent-id) and define a custom identifier.
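As an illustration, a call routed to one of your agents could look roughly like this. This is a sketch; I'm assuming the field is called agent_id, so check the integration's service description for the exact name:

```yaml
action: mistral_ai_api.send_prompt
metadata: {}
data:
  identifier: agent_test   # custom identifier, echoed back with the response event
  agent_id: ag-xxxxxxxx    # hypothetical field name; the agent ID from your Mistral account
  prompt: Summarize the state of my home in one sentence.
```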

Once the response is available, an event will be fired which includes the response and the identifier of that prompt.

I used this to feed the AI my sensors for temperature and humidity, along with the temperature and humidity outside, and then got recommendations on whether I should ventilate.

However, as I didn't want to repeat this automation all the time, I let the AI generate a template from my data, which I then pasted into HA as a template sensor that now tells me whether I should ventilate.
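As an illustration, such a generated template sensor could look roughly like this. This is a hypothetical sketch using the sensor names from my prompt; the actual template the AI produced for me was more elaborate:

```yaml
template:
  - binary_sensor:
      - name: "Ventilation recommended"
        state: >-
          {# recommend ventilating when the outside air is drier than inside #}
          {{ state_attr('weather.forecast_home', 'humidity') | float(100)
             < states('sensor.durchschn_feuchtigkeit_in_der_wohnung') | float(0) }}
```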

There's lots you COULD do with this. If you decide to use the coding model, you could let the AI create Jinja + Markdown to be used directly within the dashboard.
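For example, a dashboard Markdown card filled with AI-generated Jinja might look like this (a hypothetical sketch I wrote by hand, not actual AI output):

```yaml
type: markdown
content: >-
  ## Indoor climate

  {% if states('sensor.durchschn_feuchtigkeit_in_der_wohnung') | float(0) > 60 %}
  **Humidity is high** ({{ states('sensor.durchschn_feuchtigkeit_in_der_wohnung') }}%) - consider ventilating.
  {% else %}
  Humidity looks fine.
  {% endif %}
```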

I am currently trying to get this available on HACS, but since this is my first time doing all of this, things might take a while.

If you have any suggestions, I'd be happy to hear them.

Here is an example script:

action: mistral_ai_api.send_prompt
metadata: {}
data:
  identifier: Rückgabe
  model: mistral-large-latest
  prompt: >-
    I'm thinking about whether I should ventilate my home. The outside
    temperature is {{ state_attr('weather.forecast_home', 'temperature') }} degrees
    and the outside humidity is {{ state_attr('weather.forecast_home', 'humidity') }}%.
    The average temperature of all my rooms is
    {{ states('sensor.durchschn_temperatur_in_der_wohnung') }} degrees and the
    average humidity of all my rooms is
    {{ states('sensor.durchschn_feuchtigkeit_in_der_wohnung') }}%. Should I
    ventilate? Explain your reasoning.

And here is the answer:

To determine whether you should ventilate your home, it’s important to consider the effects of temperature and humidity on indoor air quality and comfort. Here’s a breakdown of your situation:

  • Outside Conditions:
    • Temperature: 2.3°C
    • Humidity: 89%
  • Inside Conditions:
    • Temperature: 18.5°C
    • Humidity: 57%

Key Considerations:

  1. Temperature Difference:
  • The outside temperature is significantly lower than the inside temperature. Ventilating by opening windows could lead to a drop in indoor temperature, making the environment colder.
  2. Humidity Levels:
  • The outside humidity is much higher than the inside humidity. Introducing this high humidity into your home could increase the indoor humidity levels, which might lead to discomfort and potential issues like condensation and mold growth.
  3. Indoor Air Quality:
  • Ventilation helps to reduce indoor pollutants, odors, and carbon dioxide levels, improving overall air quality. However, it should be balanced with maintaining comfortable temperature and humidity levels.

Recommendations:

Given the significant difference in temperature and humidity, it might not be the best time to ventilate your home by simply opening windows. Here are some alternative suggestions:

  1. Mechanical Ventilation:
  • If your home has a mechanical ventilation system (e.g., a heat recovery ventilator), use it. These systems can exchange indoor and outdoor air while maintaining temperature and humidity levels.
  2. Short Bursts of Ventilation:
  • Open windows for short periods (e.g., 5-10 minutes) to allow for some air exchange without drastically affecting the indoor temperature and humidity.
  3. Monitor Indoor Air Quality:
  • Use indoor air quality monitors to check levels of pollutants, carbon dioxide, and humidity. This can help you decide when ventilation is necessary.
  4. Use Dehumidifiers:
  • If you decide to ventilate and notice an increase in indoor humidity, consider using a dehumidifier to maintain comfortable levels.

Conclusion:

Based on the current conditions, it might be better to avoid prolonged ventilation by opening windows. Instead, consider using mechanical ventilation or short bursts of ventilation to maintain indoor air quality without compromising comfort.

The answer right now is fetched from an event and then added as a persistent notification in Home Assistant. Here is how I do this:

alias: "[Mistral AI] React on mistral ai response"
description: ""
triggers:
  - trigger: event
    event_type: mistral_ai_response
conditions: []
actions:
  - action: persistent_notification.create
    metadata: {}
    data:
      message: "{{trigger.event.data.response}}"
mode: single

It's a great job!

Do you think it would be possible either to add an option to the official openai_conversation integration to switch hosts and use Mistral's AI models directly for Assist, or to make this integration compatible with the Assist API if it isn't already?

I have no idea how exactly communication for Assist is done… so, I don't know. I am really new to developing HA stuff.

Awesome - I had just started to experiment with Mistral and was wondering if I could use my local voice assistant with it when I bumped into your integration.

There is guidance on the LLM API integration into Assist - but I must say this is completely new to me as well…

Home Assistant API for Large Language Models | Home Assistant Developer Docs

That looks and sounds really cool. But for now that's far too advanced for me. I still struggle with the basics of developing for Home Assistant…

No problem, I also have some basics in development, but it's a huge amount of work to make an integration for HA and I don't think I would be able to do it myself. Thanks for your work!

I have looked at the OpenAI and Anthropic conversation integrations, and it looks like the second one is an adaptation of the first, so maybe we could try to do the same with Mistral, since it uses the same kind of API?

The problem here is that even if I adapt it well, I don't think I would be able to maintain it - I've never done this kind of thing before - so would anyone like to help?

I tried to set up something today with the help of Mistral :wink: - making progress, but it's a long way to go. I can query the agent and set it up with config flow. The next step would be to have it analyse the setup of Anthropic and adjust it for Mistral. Let's see.


Hey there,
I've seen you forked the repo. Seems like you are getting it ready for multiple config entries. I tried to get that to work, but I couldn't figure out how… (as I said, it's my first custom integration). May I copy some of your code?

I plan to find a way to make the conversation context aware. This will require sending all the past communication back and forth. My plan for now is to save the conversation to a text file and add a new parameter named something like "conversation_id". All communication will then be saved to that file and also be sent to Mistral, so it gets all of the past communication to work with.

For now though, in my last push I added a new service action that allows retrieving the last communication - just an alternative to the event.

I would not copy my code :frowning: it was generated with Copilot but fails miserably :wink:


We could maybe work together on the same repo (probably BlaXun's one, with different branches?). I would also like to contribute if I can.

I have my own (written entirely by Mistral) working - I will publish it on GitHub for inspiration tomorrow. It throws one error in the log, but I can submit a prompt and get the response sent to a notify entity.

I had to guide it quite a bit, but all in all: mission accomplished. I wanted to see if I could use AI to program an integration.

I do think it's quite slow to respond, much slower than Gemini 1.5, which I use as a conversation agent. So I wonder whether Mistral is able to act as a conversation agent given the speed. I'm solely using the agent I set up; perhaps there are faster models? Or the chat? What is your experience so far?

I am using it without any issues. Though, as I said, right now the integration I wrote is only ONE answer to ONE prompt… no real conversation (that's what I am trying to implement as I write this). I am not doing anything complex with the AI right now. I did feed it my sensor data to get formulas on when to ventilate. I also fed it plant sensor data and then let the AI generate a Jinja template from that.

I am very pleased with the model. Though, I haven't used many other models recently… I can't really compare. Loving this though… thinking about getting an NVIDIA Jetson Orin Nano Super to run an LLM locally.

Nearly there! I was able to tell the AI that I have 5 cakes in front of me (that was prompt #1). The AI answered and told me it would memorize it. Then I sent a second prompt and asked how many cakes I see (prompt #2). The AI answered: 5 cakes. So, it's now context aware. Still WIP, but it's working.

The way I do this is by allowing the user to assign a conversation-id to the service call. As long as you use the same conversation-id, the AI will work with the previous knowledge from that conversation.
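Based on that, a call in such a conversation could look roughly like this (a sketch; I'm assuming the parameter ends up being named conversation_id):

```yaml
action: mistral_ai_api.send_prompt
metadata: {}
data:
  identifier: cakes_1
  model: mistral-large-latest
  conversation_id: cake_talk   # reuse this ID in later calls to keep the context
  prompt: I have 5 cakes in front of me.
```

A second call with the same conversation_id then gets the earlier messages sent along with it.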

This is a complete conversation I just had with Mistral AI, all from within the custom integration:

{
  "messages": [
    {
      "role": "user",
      "content": "I have 5 cakes infront of me. 3 are pink, 1 is yellow, the other is green."
    },
    {
      "role": "assistant",
      "content": "That's a lovely set of cakes you have! To summarize, you have:\n\n- 3 pink cakes\n- 1 yellow cake\n- 1 green cake\n\nIn total, there are 5 cakes."
    },
    {
      "role": "user",
      "content": "How many cake do I see?"
    },
    {
      "role": "assistant",
      "content": "Based on the information you provided, you see a total of 5 cakes. Here's the breakdown:\n\n- 3 pink cakes\n- 1 yellow cake\n- 1 green cake\n\nSo, the total number of cakes you see is 5."
    },
    {
      "role": "user",
      "content": "One of the cakes is huge! Its a lot larger than the others… the others are small. How many of my cakes are small?"
    },
    {
      "role": "assistant",
      "content": "Given that one of the cakes is significantly larger than the others, and you have a total of 5 cakes, it means that the remaining cakes are small.\n\nSo, if one cake is large, then the number of small cakes is:\n\n5 cakes (total) - 1 large cake = 4 small cakes\n\nTherefore, you have 4 small cakes."
    },
    {
      "role": "user",
      "content": "Thats correkt. How many cakes do I have again? Also, can you tell me the colors again? Was there anything special about the cakes?"
    },
    {
      "role": "assistant",
      "content": "You have a total of 5 cakes. Here are the colors again:\n\n- 3 pink cakes\n- 1 yellow cake\n- 1 green cake\n\nSomething special about the cakes that you mentioned is that one of them is significantly larger than the others. So, among your 5 cakes, there are 4 small cakes and 1 large cake."
    },
    {
      "role": "user",
      "content": "Awesome. Can you give me the details about my cakes in JSON format?"
    },
    {
      "role": "assistant",
      "content": "Certainly! Here are the details about your cakes in JSON format:\n\njson\n{\n  \"total_cakes\": 5,\n  \"cakes\": [\n    {\"color\": \"pink\", \"size\": \"small\"},\n    {\"color\": \"pink\", \"size\": \"small\"},\n    {\"color\": \"pink\", \"size\": \"small\"},\n    {\"color\": \"yellow\", \"size\": \"small\"},\n    {\"color\": \"green\", \"size\": \"large\"}\n  ]\n}\n\nThis JSON object includes the total number of cakes and an array of individual cake objects, each with its color and size."
    }
  ]
}

I'll try to push this tomorrow. I have to refactor a little. Also, I don't know the limits of Home Assistant's REST API… so maybe at some point a conversation could become too long for the system?

I pushed the latest changes. I added a parameter named "conversation_id". As long as you set this and reuse it for further prompts, the previous prompts and the AI's answers will be provided to the AI, which allows it to refer back to earlier parts of the conversation. Please be aware that the conversation_id will be used as the name of the file in which the conversation is stored… so please only use allowed characters… I'll have to see how I can validate the given conversation_id…

Enjoy!

Hint:

Combine this with Nabu Casa (or local) TTS to get the reply read out loud:

action: tts.speak
metadata: {}
data:
  cache: true
  media_player_entity_id: media_player.audiocast_buro_linkplay_4
  message: "{{state_attr('sensor.mistral_ai_api','last_response')}}"
target:
  entity_id: tts.home_assistant_cloud

I think I understood how function calling can be done. With this, the AI should be able to request information and execute actions. I will implement it ASAP, but it will probably take some time due to holidays and responsibilities.


You made some great progress. Did you use Mistral in any way to aid you? I had it review the developer docs of Home Assistant and its own API. I was surprised by the result: within half an hour I had a working Python script, within two hours a working integration. Not as far along as yours, but I didn't have to do any coding - just prompting.