Thank you for your project. Right now I'm trying to get used to it in combination with my local LLM server. Do you have some more YAML examples you could provide besides the weather one? Thank you so much
@maglat
Thank you so much for your interest in the project and for giving it a try with your local LLM server! I really appreciate your feedback.
You’re absolutely right — more YAML examples would be helpful, and I’m planning to add them as soon as I can. Unfortunately, time is a bit tight at the moment, but I’ll make sure to include additional examples in the near future to make it easier for everyone to get started.
In the meantime, if you have any specific use cases or questions, feel free to share them here, and I’ll do my best to assist!
Thanks again for your patience and support!
Exciting Update: v2.1.1 Release with DeepSeek Integration!
I’m pleased to announce the release of v2.1.1, which includes significant improvements and a new provider integration that I believe you will appreciate.
What’s New?

1. Major Token Handling Improvements

   I’ve completely revamped how tokens are handled across all LLM providers. This means:
   - More reliable and predictable behavior
   - Better support for models with large context windows
   - Simplified configuration for users

2. Welcome DeepSeek!

   I’m excited to introduce DeepSeek as our newest supported provider. Here’s why this is important:
   - It offers high-performance, cost-effective models
   - It brings two specialized models to our ecosystem:
     - deepseek-chat: ideal for natural, context-aware conversations
     - deepseek-reasoner: suited to complex problem-solving tasks
   - It adds another reliable option for your AI-powered automations

3. Technical Enhancements

   Behind the scenes, I’ve:
   - Simplified the codebase for better maintainability
   - Improved cross-provider compatibility
   - Made token limit handling more robust
Why This Matters for Your Smart Home
These updates mean:
- More reliable AI interactions in your automations
- Greater flexibility in choosing the right model for your needs
- Better performance across all supported providers
Special Thanks
I want to extend my gratitude to @estiens for the valuable feedback that helped drive these improvements!
Please share your thoughts in the comments below. Your feedback is essential for making HA Text AI even better.
Happy automating!
Is it possible to make function calls with the response of the LLM? I would create an automation that triggers at certain moments and tell the LLM to take actions on all devices I have exposed to it. I know the risk that the LLM could make strange decisions and maybe handle the exposed devices in a wrong way, but I would really like to test this.
@maglat Yes, of course. You need to pass the status of the devices in the initial request, ask the AI to respond in JSON format describing the required states/actions, receive the response, extract the JSON, and then act on it using simple rules or templates. Below is an example: a thought process, not 100% working code. You can also ask ChatGPT to adapt it to your tasks.
# Example LLM Interaction for Switch Control
action:
  # 1. Prepare device states
  - variables:
      devices_json: |
        {
          "devices": [
            {% for switch in states.switch %}
            {
              "entity_id": "{{ switch.entity_id }}",
              "state": "{{ switch.state }}",
              "last_changed": "{{ switch.last_changed }}"
            }{% if not loop.last %},{% endif %}
            {% endfor %}
          ]
        }
  # 2. Send query to LLM with structured prompt
  - action: ha_text_ai.ask_question
    data:
      instance: sensor.ha_text_ai_anthropic_claude_3_5_haiku
      temperature: 0.5
      context_messages: 1
      max_tokens: 4000
      question: |
        Analyze these device states and return optimal switch actions.
        Current time: {{ now().strftime("%Y-%m-%d %H:%M") }}
        User location: {{ states("device_tracker.user_phone") }}
        Device States:
        {{ devices_json }}
        Return JSON format:
        {
          "commands": [
            {
              "entity_id": "switch.example",
              "action": "turn_on/turn_off",
              "reason": "Explanation in English"
            }
          ],
          "confidence": 0-100,
          "error": null
        }
  # 3. Extract JSON response
  - variables:
      response: >-
        {%- set raw = state_attr('sensor.ha_text_ai_anthropic_claude_3_5_haiku', 'response') -%}
        {%- if raw is not none -%}{{ raw }}{%- endif -%}
      json_start: "{{ response.find('{') }}"
      json_end: "{{ response.rfind('}') + 1 if '}' in response else 0 }}"
      json_data: "{{ response[json_start:json_end] | from_json(default={}) }}"
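The example above ends after extracting json_data. As a sketch of the follow-up step described earlier (acting on the JSON with simple rules), here is one way to loop over the returned commands. This is untested and assumes json_data has exactly the shape requested in the prompt:

```yaml
  # 4. Act on the extracted commands (sketch, not verified end-to-end)
  - repeat:
      for_each: "{{ json_data.commands | default([]) }}"
      sequence:
        - if:
            # Only act on switch entities with a recognized action
            - condition: template
              value_template: >-
                {{ repeat.item.entity_id.startswith('switch.')
                   and repeat.item.action in ['turn_on', 'turn_off'] }}
          then:
            - action: "switch.{{ repeat.item.action }}"
              target:
                entity_id: "{{ repeat.item.entity_id }}"
```

The validation template is the same idea as the "Input Validation" tip below: never call a service with an entity ID or action name taken verbatim from model output without checking it first.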
Explanation:

1. Device Status Collection (step 1 in the example above)
   - This section builds a structured JSON object containing all switch entities.
   - It includes the entity ID, current state, and last-changed timestamp.
   - Jinja templating is used for dynamic data collection, making it flexible and efficient.

2. LLM Query (step 2)
   - This part uses the ha_text_ai integration with Claude 3.5 Haiku.
   - It combines natural language instructions with a clear JSON schema.
   - Contextual data such as the current time and user location is included to enhance the analysis.
   - The response format is explicitly defined to ensure clarity.

3. Response Processing (step 3)
   - This section extracts the raw response from the sensor attribute.
   - It locates the boundaries of the JSON string with find() and rfind().
   - The JSON string is then converted into a usable object with from_json.
Example LLM Response:

{
  "commands": [
    {
      "entity_id": "switch.living_room_lights",
      "action": "turn_off",
      "reason": "The light has been on for more than 2 hours without any movement in the room."
    }
  ],
  "confidence": 85,
  "error": null
}
Usage Tips:

- Input Validation: Always validate inputs before executing commands to avoid errors:

      if: "{{ 'switch.' in item.entity_id }}"

- Error Logging: Implement error logging to track issues (note: check for a non-null error, since the schema always includes the key):

      condition: "{{ json_data.error is not none }}"
      action: notify.error_notification

- Testing: Start by testing with low-stakes devices to minimize risk.
- Confidence Thresholds: Use confidence thresholds to decide when to automate actions:

      condition: "{{ json_data.confidence | default(0) > 75 }}"
Here’s a quick and very simple working example of an automation, where we give the AI the lighting level in the room and the state of the light switch. We receive a response on whether to turn the light on or not, along with a justification and a confidence score. If the AI is confident in its answer, we turn the light on or off accordingly.
AI Light Control | AI-powered light control based on illuminance sensor (working example):
alias: AI Light Control
description: AI-powered light control based on illuminance sensor
triggers:
  - minutes: /5
    trigger: time_pattern
  - entity_id: sensor.kitchen_illuminance
    below: 5
    for:
      seconds: 10
    trigger: numeric_state
  - entity_id: sensor.kitchen_illuminance
    above: 50
    for:
      seconds: 10
    trigger: numeric_state
actions:
  - action: ha_text_ai.ask_question
    metadata: {}
    data:
      temperature: 0.5
      instance: sensor.ha_text_ai_anthropic_claude_3_5_haiku
      context_messages: 1
      max_tokens: 4000
      question: >
        Should I turn the light on or off?
        Current illuminance data: {{ states('sensor.kitchen_illuminance') }} LUX
        Current light switch state: {{ states('switch.kitchen_left') }}
        Return JSON format: {
          "commands": [
            {
              "entity_id": "switch.kitchen_left",
              "action": "turn_on/turn_off",
              "reason": "Explanation in English"
            }
          ],
          "confidence": 0-100,
          "error": null
        }
  - wait_template: >-
      {{ state_attr('sensor.ha_text_ai_anthropic_claude_3_5_haiku', 'response')
      is not none }}
    timeout: 30
    continue_on_timeout: false
  - variables:
      response: >-
        {%- set raw = state_attr('sensor.ha_text_ai_anthropic_claude_3_5_haiku', 'response') -%}
        {%- if raw is not none -%}{{ raw }}{%- endif -%}
      json_start: "{{ response.find('{') }}"
      json_end: "{{ response.rfind('}') + 1 }}"
      json_str: >-
        {{ response[json_start:json_end] if json_start != -1 and json_end != 0
        else '{}' }}
      json_data: "{{ json_str | from_json if json_str else {} }}"
  - choose:
      - conditions:
          - condition: template
            value_template: "{{ json_data.confidence | default(0) >= 80 }}"
          - condition: template
            value_template: "{{ json_data.commands[0].action in ['turn_on', 'turn_off'] }}"
        sequence:
          - target:
              entity_id: switch.kitchen_left
            data: {}
            action: switch.{{ json_data.commands[0].action }}
mode: single
max_exceeded: silent
LLM response for reference (traceback):

response: |-
  {
    "commands": [
      {
        "entity_id": "switch.kitchen_left",
        "action": "turn_on",
        "reason": "Illuminance level of 1 LUX indicates very low light conditions, suggesting the need to turn on the light for better visibility and comfort"
      }
    ],
    "confidence": 90,
    "error": null
  }
json_start: 0
json_end: 309
json_str: |-
  {
    "commands": [
      {
        "entity_id": "switch.kitchen_left",
        "action": "turn_on",
        "reason": "Illuminance level of 1 LUX indicates very low light conditions, suggesting the need to turn on the light for better visibility and comfort"
      }
    ],
    "confidence": 90,
    "error": null
  }
json_data:
  commands:
    - entity_id: switch.kitchen_left
      action: turn_on
      reason: >-
        Illuminance level of 1 LUX indicates very low light conditions,
        suggesting the need to turn on the light for better visibility and
        comfort
  confidence: 90
  error: null
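One possible extension of the working example above: notify yourself when the automation decides not to act, i.e. when the model reports an error or low confidence. This is an untested sketch that would be appended to the automation's actions; notify.mobile_app_my_phone is a placeholder for your own notify service:

```yaml
  # Optional: alert when the AI result is not acted upon (sketch)
  - choose:
      - conditions:
          - condition: template
            value_template: >-
              {{ json_data.get('error') is not none
                 or json_data.confidence | default(0) < 80 }}
        sequence:
          # "notify.mobile_app_my_phone" is a placeholder notify service
          - action: notify.mobile_app_my_phone
            data:
              title: AI Light Control skipped
              message: >-
                Confidence {{ json_data.confidence | default(0) }}%,
                error: {{ json_data.get('error') }}
```

This keeps the 80% threshold from the choose block in the example, so a skipped run and a notification can never both fire for the same response.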
I hope these examples make it clearer how to solve home-management tasks and AI-driven scenarios with my integration.
This is so great! You opened my eyes with this example! Many thanks!
May I suggest that you open “Discussions” and a Wiki on your GitHub? This would help people contribute things; not everybody is comfortable opening GitHub pull requests to contribute.
This way, all the examples could also live in a structured wiki.