xAI Conversation for Home Assistant


November 21, 2025 Update :sparkles:

:rocket: xAI Conversation v1.1.0 just dropped!

  • All latest Grok models instantly available in config flow
  • Auto-detect sensor for new Grok models (the moment xAI releases them)
  • New Grok-powered Image Analysis (vision) service
  • Pricing sensors for every model (input / output / cached per 1M tokens)

Full changelog → Releases · pajeronda/xai_conversation · GitHub
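The per-model pricing sensors (input / output / cached per 1M tokens) can be combined with the token sensors in a template sensor to estimate spend. A hypothetical sketch; the entity IDs below are placeholders, not the integration's actual sensor names:

```yaml
# Estimate input-token cost from a token counter and a price sensor.
# sensor.xai_input_tokens and sensor.xai_input_price are illustrative IDs.
template:
  - sensor:
      - name: "Grok estimated input cost"
        unit_of_measurement: "USD"
        state: >
          {% set tokens = states('sensor.xai_input_tokens') | float(0) %}
          {% set price = states('sensor.xai_input_price') | float(0) %}
          {{ (tokens / 1000000 * price) | round(4) }}
```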


October 19, 2025 Update :sparkles:

I present to you my new project, xAI Conversation, a much more elaborate successor to my previous one. It consists of two parts:

  1. an assistant for Assist conversations with distinctive features, such as conversation memory kept on the xAI server (only the last message is sent, not the whole conversation as with the Home Assistant standard), conversation continuity across the same user's devices (switching between PC, tablet, and smartphone), and execution of multiple actions in a single request;
  2. an integrated code editor optimized for Home Assistant, delivered as a dashboard card for assistance, creation, and modification of code, based on xAI's "Grok Code Fast 1" model.

You can find everything in these repositories:

  1. https://github.com/pajeronda/xai_conversation
  2. https://github.com/pajeronda/grok-code-fast-card/

And much more. I hope you can benefit from it! :wave:

Assist

Grok Code Fast

Sensors


:dart: Overview

This release significantly improves the reliability and performance of the integration, with a special focus on optimization for resource-constrained devices (Raspberry Pi, Proxmox VMs) and on critical bug fixes.


:sparkles: What’s New

:rocket: Performance Improvements

  • Separate API vs Local Timing: You can now see how much time the xAI API takes versus local processing
    • Helps identify whether slowness comes from the API or from your system
    • Visible in the INFO-level chat_start and chat_end logs as api_time and local_process_time
  • Faster Operations: Save operations (tokens, memory) no longer block conversations
    • Everything is saved in the background while you keep talking to the assistant
    • Less waiting time between voice commands
  • Reduced System Load: Optimizations for Raspberry Pi and other low-power devices
    • Reduced CPU usage when not needed
    • Better log handling (no resources wasted when logging is disabled)

v2.0.0

December 9, 2025 Update :sparkles:

:christmas_tree: Christmas Release Changelog

This release introduces significant changes to the core architecture to improve scalability, maintainability, and feature robustness.

  1. Gateway Refactoring: XAIGateway is now centralized and entity-agnostic. It manages the configuration for all services.
  2. Tools Refactoring: Added local custom tools to trigger automations, start scripts, handle binary/text/number input, and added xAI Agent Tools: web_search, x_search, and code_execution.
  3. Services Refactoring: New TokenStats class for better separation of concerns between Manager → Storage → Presentation layers.
  4. Billing Tracking: Implemented xAI tool consumption tracking within TokenStats and created the XAIServerToolUsageSensor sensor.
  5. New Model Notifications: Persistent notifications when xAI releases new models; the new model becomes directly available in the services config flow.
  6. I/O Optimization: Minimized disk I/O operations.
  7. Config Flow & UI: Updated live search options, added a boolean option “Show citations in chat” when using xAI tools, and added other technical parameters to the xAI Token Sensors config flow.

v2.1.0

December 13, 2025 Update :sparkles:

New xai_conversation.ask

The xai_conversation.ask service allows stateless LLM queries with raw input data and system instructions, returning the response directly in a variable. It stems from my need to have bulletins from various services, such as the weather, rewritten into natural language by the LLM.

```yaml
service: xai_conversation.ask
data:
  max_tokens: 800
  temperature: 1
  instructions: "{{ instructions }}"
  input_data: "{{ data_to_send }}"
response_variable: output_ai
```
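The snippet above can be embedded in a script to turn a raw bulletin into a spoken or pushed summary. A minimal sketch; the weather entity, the notify service, and the `output_ai.response` key are assumptions to verify against the integration's docs:

```yaml
# Hypothetical script: summarize a weather bulletin via xai_conversation.ask
# and push the result. Entity IDs and the response key are illustrative.
script:
  weather_briefing:
    sequence:
      - service: xai_conversation.ask
        data:
          max_tokens: 800
          temperature: 1
          instructions: "Summarize this weather bulletin in two sentences."
          input_data: "{{ state_attr('weather.home', 'forecast') }}"
        response_variable: output_ai
      - service: notify.mobile_app_phone
        data:
          message: "{{ output_ai.response }}"
```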

Full changelog: Release v2.1.0 · pajeronda/xai_conversation · GitHub

Release 2.2.0 :rocket:

A special tribute to jekalmin for creating one of the most appreciated AI integrations on Home Assistant.

:hammer_and_wrench: Extended Tools Support

We are excited to introduce full support for Extended Tools! You can now define custom tools using the YAML format from the Extended OpenAI Conversation integration.

  • Global Configuration: Define your tools once in the integration settings.
  • Per-Agent Control: Enable Extended Tools for specific agents while keeping others standard.
  • Full Compatibility: Supports scripts, templates, and advanced logic directly from Home Assistant.
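For reference, here is one plausible tool definition in the Extended OpenAI Conversation YAML format that this feature adopts. The tool name and script are illustrative; verify the exact accepted keys against the repository:

```yaml
# One custom tool in the Extended OpenAI Conversation format:
# a `spec` (OpenAI function schema) plus a `function` (what HA executes).
- spec:
    name: add_shopping_item
    description: Add an item to the shopping list
    parameters:
      type: object
      properties:
        item:
          type: string
          description: The item to add
      required:
        - item
  function:
    type: script
    sequence:
      - service: shopping_list.add_item
        data:
          item: "{{ item }}"
```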

:zap: Performance & Optimization

This release brings significant under-the-hood improvements to make the integration faster and more responsive:

  • Faster Execution: Optimized how tools are called and managed.
  • Streamlined Core: Refactored internal code for better stability and lower resource usage.

:broom: Improvements

  • Better Localization: Fixed issues with translation files for a smoother multilingual experience.
  • Reliability: Updated internal dependencies to ensure rock-solid compatibility with Home Assistant.

Full Changelog: Release v2.2.0 · pajeronda/xai_conversation · GitHub