I present to you my new project, xAI Conversation, far more sophisticated than the previous one. It consists of two parts:
a. an Assist conversation agent with distinctive features, such as conversation memory on the xAI server (only the last message is sent, rather than the whole conversation as in the Home Assistant standard), conversation continuity across the same user's devices (switching from PC to tablet to smartphone), and the ability to execute multiple actions in a single request;
b. an integrated code editor optimized for Home Assistant, delivered as a dashboard card, for assistance with creating and modifying code, based on xAI's "Grok Code Fast 1" model.
This release significantly improves the reliability and performance of the integration, with a special focus on optimization for resource-constrained devices (Raspberry Pi, Proxmox VMs) and on critical bug fixes.
What’s New
Performance Improvements
- Separate API vs. Local Timing: you can now see how much time the xAI API takes versus local processing.
  - Helps identify whether slowness comes from the API or from your system.
  - Visible in the INFO-level `chat_start` and `chat_end` logs via the `api_time` and `local_process_time` fields.
- Faster Operations: save operations (tokens, memory) no longer block conversations.
  - Everything is saved in the background while you continue talking to the assistant.
  - Less waiting time between voice commands.
- Reduced System Load: optimizations for Raspberry Pi and other low-power devices.
  - Reduced CPU usage when it is not needed.
  - Better log handling (no resources wasted when logging is disabled).
This release introduces significant changes to the core architecture to improve scalability, maintainability, and feature robustness.
Gateway Refactoring: XAIGateway is now centralized and entity-agnostic; it manages the configuration for all services.
Tools Refactoring: added local custom tools to trigger automations, start scripts, and handle binary/text/number input, plus the xAI Agent Tools web_search, x_search, and code_execution.
Services Refactoring: new TokenStats class for better separation of concerns across the Manager → Storage → Presentation layers.
Billing Tracking: Implemented xAI tool consumption tracking within TokenStats and created the XAIServerToolUsageSensor sensor.
New Model Notifications: persistent notifications when xAI releases new models; the new model is then directly available in the services config flow.
I/O Optimization: Minimized disk I/O operations.
Config Flow & UI: Updated live search options, added a boolean option “Show citations in chat” when using xAI tools, and added other technical parameters to the xAI Token Sensors config flow.
New xai_conversation.ask service: allows stateless LLM queries with raw input data and system instructions, returning the response directly in a variable. It stems from my need to have bulletins from various services, such as the weather, rendered into natural language by the LLM.
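As an illustration, a Home Assistant script could call the service and forward the result to a notification. Only the service name `xai_conversation.ask` and the fact that the response lands in a variable come from these notes; the field names `prompt` and `instructions` and the `response` key are assumptions, so check the integration's service documentation for the exact schema.

```yaml
# Hypothetical sketch: the data field names (prompt, instructions) and the
# shape of the returned variable are assumptions, not the documented schema.
script:
  weather_bulletin:
    sequence:
      - action: xai_conversation.ask
        data:
          instructions: "Rewrite this forecast as a short natural-language bulletin."
          prompt: "{{ state_attr('weather.home', 'forecast') }}"
        # Standard Home Assistant mechanism for capturing an action's result
        response_variable: bulletin
      - action: notify.mobile_app_phone
        data:
          message: "{{ bulletin.response }}"
```

`response_variable` is the standard Home Assistant way for a script step to capture data returned by an action, which is what makes a stateless one-shot query like this usable in automations.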
A special tribute to jekalmin for creating one of the most appreciated AI integrations for Home Assistant.
Extended Tools Support
We are excited to introduce full support for Extended Tools! You can now define custom tools using the YAML format from the Extended OpenAI Conversation integration.
- Global Configuration: define your tools once in the integration settings.
- Per-Agent Control: enable Extended Tools for specific agents while keeping others standard.
- Full Compatibility: supports scripts, templates, and advanced logic directly from Home Assistant.
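For reference, a minimal tool in the Extended OpenAI Conversation YAML format might look like the sketch below. The tool itself (`turn_on_light`) is purely illustrative; consult the Extended OpenAI Conversation documentation for the authoritative schema.

```yaml
# Illustrative tool definition in the Extended OpenAI Conversation style:
# a "spec" describing the tool to the LLM, and a "function" with the
# Home Assistant script to run. The tool name and script body are made up.
- spec:
    name: turn_on_light
    description: Turn on the lights in a given area
    parameters:
      type: object
      properties:
        area:
          type: string
          description: The area to light up, e.g. "kitchen"
      required:
        - area
  function:
    type: script
    sequence:
      - action: light.turn_on
        target:
          area_id: "{{ area }}"
```

The `spec` block is what the model sees when deciding whether to call the tool, while the `function` block stays entirely local, which is why scripts and templates work without any server-side changes.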
Performance & Optimization
This release brings significant under-the-hood improvements to make the integration faster and more responsive:
- Faster Execution: optimized how tools are called and managed.
- Streamlined Core: refactored internal code for better stability and lower resource usage.
Improvements
- Better Localization: fixed issues with translation files for a smoother multilingual experience.
- Reliability: updated internal dependencies to ensure rock-solid compatibility with Home Assistant.