Goal: This guide details how to create a dynamic weather card in Home Assistant. The card’s background image is generated by Google’s Gemini AI based on the current weather conditions and time of day, updating automatically. Clicking the card shows the standard weather entity details.
(This project builds on the idea from Home Assistant user ‘kloudy’, Weather Dashboard with ChatGPT & DALL-E, and was developed collaboratively with Google’s Gemini AI, navigating several technical challenges along the way.)
Example output…
Technologies Used:
- Home Assistant (Developed on Core 2025.3.x)
- HACS (Home Assistant Community Store)
- PyScript Custom Integration (Installed via HACS)
- button-card Custom Lovelace Card (Installed via HACS)
- Google AI Gemini API (Generative Language API)
- Home Assistant weather integration
- Home Assistant sun integration (built-in)
- Home Assistant input_text helper
Disclaimers & Considerations:
- PyScript Environment: The standard HA python_script integration proved too restrictive. PyScript was used but presented challenges with secure secret access and handling blocking file I/O (open, yaml.safe_load) reliably via task.executor. The final file saving method uses standard io.open directly, which worked in testing but carries a small theoretical risk of blocking HA if disk I/O is extremely slow.
- API Key Security: Due to issues with the !secret tag validation in the Automation UI and problems accessing secrets programmatically from PyScript, the final working solution passes the Google API Key directly as plain text within the automation’s YAML configuration. This is less secure. Implement only if you understand and accept the risk within your network environment. AppDaemon is recommended as a more secure alternative.
- Gemini Model & Safety Filters: This guide uses the gemini-2.0-flash-exp-image-generation model via the :streamGenerateContent endpoint. Testing revealed this specific model has very strict safety filters via the REST API, often refusing seemingly harmless prompts. Your results may vary. You might need to experiment heavily with prompts or find a different, stable image generation model name in Google’s current documentation and adapt the script.
Cost: Using Google AI APIs can incur costs. Check Google Cloud pricing and ensure billing is enabled for your project if required.
Prerequisites Checklist
- Home Assistant: A running instance.
- HACS: Installed and working.
- PyScript Integration: Installed via HACS and added via HA Integrations. Restart HA after install. Check PyScript config if requests or yaml imports fail later (Allow All Imports might be needed).
- button-card: Installed via HACS (Frontend section). Refresh browser after install.
- Weather Entity: A working weather integration providing an entity (e.g., weather.your_location). Note its entity_id. This guide uses weather.forecast_home.
- Sun Integration: Enabled (usually via default_config:). Provides sun.sun.
- Google Cloud Project & API Key: Project created, “Generative Language API” Enabled, API Key generated (and restricted if possible).
- secrets.yaml Entry (Recommended): Store your API Key in /config/secrets.yaml for reference, even though it will be pasted into the automation.
# /config/secrets.yaml
gemini_api_key: YOUR_GOOGLE_API_KEY_HERE
- www Directory: The /config/www directory exists.
- Libraries for PyScript: Ensure requests and PyYAML are available (they usually are; check the PyScript docs if an import fails).
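If imports do fail and you configure PyScript via YAML rather than through the integration’s UI options, the “allow all imports” setting can be enabled in configuration.yaml. A minimal sketch, to be adapted to however you already configure PyScript:
# /config/configuration.yaml (only needed if you configure PyScript in YAML)
pyscript:
  allow_all_imports: true  # allows scripts to import packages such as requests and yaml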
Step 1: The PyScript Script (generate_gemini.py)
This script handles the core logic: dynamic prompt generation, API call, response parsing, image decoding, and saving.
- Create the directory /config/pyscript/ if needed.
- Create a file named generate_gemini.py inside /config/pyscript/.
- Paste the entire following code into generate_gemini.py. Customize the prompt components as desired.
# Filename: /config/pyscript/generate_gemini.py
# Version: v24.3 (Final Corrected - Uses sun.sun, api_key arg, io.open save)
# Required Imports
import requests
import base64
import os
import json
import io
# import yaml # Not needed
# --- Helper Function for Saving Binary File ---
def _save_binary_file(path, data_bytes):
"""Standard Python helper function for binary writing using io.open."""
try:
with io.open(path, "wb") as f: f.write(data_bytes)
return True # Success
except Exception as e: return e # Return exception
# --- Main PyScript Service Function ---
@service
def generate_gemini_image(condition=None, temperature=None, sun_state=None, sun_elevation=None, api_key=None):
"""
Generates an image using the Gemini API (v24.3) based on weather and sun position, then saves it locally.
Uses API Key passed as an argument.
Args:
condition (str): Standard HA weather condition.
temperature (float/int): Current temperature.
sun_state (str): State of sun.sun ('above_horizon'/'below_horizon').
sun_elevation (float): Sun's elevation in degrees.
api_key (str): Gemini API Key.
"""
log.info(f"PyScript: Received request (v24.3) condition='{condition}', temp='{temperature}', sun='{sun_state}'({sun_elevation}°)")
# --- Validate Input Arguments ---
if not condition: log.error("PyScript: Condition missing!"); return
if temperature is None: log.error("PyScript: Temperature missing!"); return
if not sun_state: log.error("PyScript: Sun state missing!"); return
if sun_elevation is None: log.error("PyScript: Sun elevation missing!"); return
if not api_key: log.error("PyScript: API Key missing!"); return
if not isinstance(api_key, str) or len(api_key) < 10: log.error(f"PyScript: Invalid API Key provided."); return
# --- Generate Dynamic Prompt using Weather and Sun ---
subject = "a very cute, happy and playful kitten"
base_style = "whimsical illustration style, sharp focus, high detail, crisp image"
weather_action_scene = ""
time_of_day_desc = ""
generated_prompt = ""
try:
condition_lower = condition.lower()
elevation = float(sun_elevation)
# Determine Time of Day
if sun_state == "below_horizon":
if condition_lower == "clear-night": time_of_day_desc = "under a clear starry night sky"
else: time_of_day_desc = "at night"
elif elevation < 6 and elevation >= -1: time_of_day_desc = "during sunrise or sunset, beautiful golden hour lighting"
elif elevation >= 60: time_of_day_desc = "under the bright midday sun"
else: time_of_day_desc = "during the day"
# Map Weather Condition to Scene/Action
if condition_lower == "sunny": weather_action_scene = "playing happily in the sunbeams, in a magical landscape"
elif condition_lower == "clear-night": weather_action_scene = "looking amazed at the stars"
        # Remaining standard HA weather conditions:
elif condition_lower == "cloudy": weather_action_scene = "sitting peacefully under overcast fluffy clouds"
elif condition_lower == "partlycloudy": weather_action_scene = "peeking playfully from behind a fluffy white cloud"
elif condition_lower == "rainy": weather_action_scene = "holding a tiny colorful umbrella during a light rain shower"
elif condition_lower == "pouring": weather_action_scene = "splashing happily in puddles during a heavy downpour (wearing tiny rain boots!)"
elif condition_lower == "snowy": weather_action_scene = "building a tiny snowman in a snowy winter scene"
elif condition_lower == "snowy-rainy": weather_action_scene = "looking confused at the mix of rain and snow falling"
elif condition_lower == "hail": weather_action_scene = "hiding quickly under a large mushroom during a hail storm"
elif condition_lower == "lightning": weather_action_scene = "watching distant lightning flashes from a safe, cozy spot"
elif condition_lower == "lightning-rainy": weather_action_scene = "watching safely from a cozy window during a thunderstorm"
elif condition_lower == "fog": weather_action_scene = "exploring cautiously through a thick magical fog"
elif condition_lower == "windy" or condition_lower == "windy-variant": weather_action_scene = "holding onto its tiny hat on a very windy day"
elif condition_lower == "exceptional": weather_action_scene = f"reacting surprised to exceptional weather conditions"
else: weather_action_scene = f"experiencing {condition} weather" # Fallback
# Combine prompt elements
generated_prompt = f"{subject} {weather_action_scene} {time_of_day_desc}. {base_style}. Square aspect ratio."
        generated_prompt = generated_prompt.replace("  ", " ").strip()  # collapse any double spaces
log.info(f"PyScript: Dynamically generated prompt: {generated_prompt}")
except Exception as e:
log.error(f"PyScript: Error during dynamic prompt generation: {e}")
generated_prompt = f"{subject} in {condition} weather. {base_style}. Square aspect ratio." # Fallback
# --- Prepare API Call Details ---
model_name = "gemini-2.0-flash-exp-image-generation"; action = "streamGenerateContent"
api_endpoint = f"https://generativelanguage.googleapis.com/v1beta/models/{model_name}:{action}?key={api_key}"
headers = {'Content-Type': 'application/json'}
payload = { "contents": [ {"role": "user", "parts": [ {"text": generated_prompt} ] } ], "generationConfig": { "responseModalities": ["IMAGE", "TEXT"] } }
# --- Call Gemini API & Parse Response ---
response_data = None
try:
log.info(f"PyScript: Sending request to Gemini API ({action})...")
response = task.executor(requests.post, api_endpoint, headers=headers, json=payload, timeout=120)
response.raise_for_status(); full_response_text = response.text
log.debug(f"PyScript DEBUG: Raw API response text: {full_response_text}")
try: # Parse array/object
response_data_list = json.loads(full_response_text)
if isinstance(response_data_list, list) and len(response_data_list) > 0: response_data = response_data_list[0]; log.info("PyScript: Parsed JSON array response.")
elif isinstance(response_data_list, dict): response_data = response_data_list; log.info("PyScript: Parsed JSON single object response.")
else: raise ValueError("Response is not a valid JSON array or object.")
except Exception as e: log.error(f"PyScript: Failed to parse JSON: {e}"); return
except Exception as e: log.error(f"PyScript: Error during API call: {e}", exc_info=True); return
# --- Extract, Decode Image Data ---
if response_data is None: log.error(f"PyScript: Invalid JSON data after parsing."); return
base64_image_data = None; image_bytes = None
try: # Extract Base64
if 'candidates' in response_data and len(response_data['candidates']) > 0:
candidate = response_data['candidates'][0]
if 'content' in candidate and 'parts' in candidate['content'] and len(candidate['content']['parts']) > 0:
for part in candidate['content']['parts']:
if 'inlineData' in part and isinstance(part['inlineData'], dict) and 'data' in part['inlineData']: base64_image_data = part['inlineData']['data']; log.info("PyScript: Base64 data extracted."); break
# Check if image was found or if API refused/errored
if base64_image_data: image_bytes = base64.b64decode(base64_image_data); log.info("PyScript: Base64 data decoded.")
else: # If no image data, log refusal/text and exit
log.warning("PyScript: No 'inlineData' found in response.")
try:
if 'promptFeedback' in response_data and 'blockReason' in response_data['promptFeedback']: log.error(f"PyScript: BLOCKED by safety filter: {response_data['promptFeedback']['blockReason']}")
elif 'error' in response_data: log.error(f"PyScript: API returned error: {response_data['error']}")
else: text_part = response_data['candidates'][0]['content']['parts'][0]['text']; log.warning(f"PyScript: API returned text: {text_part[:300]}...")
except Exception: log.warning(f"PyScript: Cannot parse text/error from response. Parsed data (partial): {str(response_data)[:1000]}")
return
except base64.binascii.Error as e: log.error(f"PyScript: Error decoding Base64: {e}"); return
except Exception as e: log.error(f"PyScript: Error during extraction/decoding: {e}", exc_info=True); return
# --- Save Image File ---
if image_bytes:
output_filename = "gemini_pyscript_image.png"; output_path = os.path.join("/config/www", output_filename)
log.info(f"PyScript: Saving image to {output_path} using io.open directly...")
try:
# Use standard Python io.open directly in the main thread
with io.open(output_path, "wb") as f:
f.write(image_bytes) # Write the image bytes
log.info(f"PyScript: Image saved successfully (using direct io.open).")
# Optionally notify Home Assistant UI
service.call('persistent_notification', 'create', title="Gemini Weather Image", message=f"Image for '{condition}' generated.")
except PermissionError as pe: log.error(f"PyScript: PERMISSION ERROR writing to {output_path}. Details: {pe}", exc_info=True)
except IOError as e: log.error(f"PyScript: I/O ERROR writing image file (io.open): {e}", exc_info=True)
except Exception as e: log.error(f"PyScript: Unexpected error writing file (io.open): {e}", exc_info=True)
else:
# This should not be reachable if the logic above is correct
log.error("PyScript: Logical error - image_bytes is None, cannot save file.")
- Save the file.
- Reload PyScript: Go to Developer Tools → Server Management → YAML configuration reloading → PyScript → Reload.
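As noted in the disclaimers, saving with io.open directly inside the service function worked in testing but is technically a blocking call. If you would rather offload the write to a worker thread, the otherwise unused _save_binary_file helper could be called through task.executor instead. This is only a sketch of that alternative, not the tested code path; in PyScript the helper would also need the @pyscript_compile decorator so that task.executor accepts it as a regular Python function.
# Alternative save section (sketch): offload the blocking write via task.executor
# Requires compiling the helper as native Python, e.g.:
#
#   @pyscript_compile
#   def _save_binary_file(path, data_bytes):
#       ...
#
result = task.executor(_save_binary_file, output_path, image_bytes)
if result is True:
    log.info("PyScript: Image saved successfully (via task.executor).")
else:
    log.error(f"PyScript: Error writing image file via task.executor: {result}")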
Step 2: Create the Cache-Busting Helper
This input_text helper ensures the Lovelace card updates reliably.
- Go to Settings → Devices & Services → Helpers tab.
- Click + CREATE HELPER.
- Choose Text.
- Name: Gemini Image Timestamp (or choose your own).
- Click CREATE.
- Note the Entity ID that was created (e.g., input_text.gemini_image_timestamp). You’ll need it below.
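Alternatively, if you prefer YAML over the UI, the same helper can be defined in configuration.yaml (followed by a restart or a reload of input_text entities):
# /config/configuration.yaml - YAML alternative to creating the helper in the UI
input_text:
  gemini_image_timestamp:
    name: Gemini Image Timestamp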
Step 3: Create the Automation
This automation triggers the PyScript service hourly and on weather changes.
- Go to Settings → Automations & Scenes → Automations tab.
- Click + CREATE AUTOMATION → Start with an empty automation.
- Click the 3-dots menu (top right) → Edit in YAML.
- Replace the entire content with the following YAML code.
- IMPORTANT - EDIT THESE VALUES:
- Replace weather.forecast_home (used 3 times) with your weather entity ID.
- Replace YOUR_GOOGLE_API_KEY_HERE with your actual API key string (copied from secrets.yaml or entered directly). Remember the security warning!
- Replace input_text.gemini_image_timestamp with the entity ID of the helper you created in Step 2.
# Automation YAML
alias: Generate Gemini Weather Image (Hourly + Change)
description: Updates the Gemini AI weather image via PyScript
# Queue runs that trigger while a previous run is still active; silently drop runs beyond the queue limit
mode: queued
max_exceeded: silent
trigger:
# Trigger 1: Every hour at 1 minute past the hour (adjust trigger time as needed)
- platform: time_pattern
hours: "/1"
minutes: 1
seconds: 0
# Trigger 2: When the weather condition state changes
- platform: state
entity_id: weather.forecast_home #<-- YOUR WEATHER ENTITY ID HERE
condition: [] # No conditions
action:
# Action 1: Call the PyScript service to generate/save image
- service: pyscript.generate_gemini_image
data:
# Pass current weather state
condition: "{{ states('weather.forecast_home') }}" #<-- YOUR WEATHER ENTITY ID HERE
# Pass current temperature attribute
temperature: "{{ state_attr('weather.forecast_home', 'temperature') | float(0) }}" #<-- YOUR WEATHER ENTITY ID HERE
# Pass sun state for time of day context
sun_state: "{{ states('sun.sun') }}"
# Pass sun elevation for time of day context
sun_elevation: "{{ state_attr('sun.sun', 'elevation') | float(0) }}"
# WARNING: API Key in plain text - Less Secure!
# !!! REPLACE WITH YOUR ACTUAL API KEY !!!
api_key: "YOUR_GOOGLE_API_KEY_HERE"
# Action 2: Update the helper text to trigger cache busting in Lovelace
# This runs AFTER the pyscript service call attempts
- service: input_text.set_value
target:
# !!! USE YOUR HELPER'S ENTITY ID !!!
entity_id: input_text.gemini_image_timestamp
data:
# Set value to the current timestamp as an integer
value: "{{ now().timestamp() | int }}"
# Action 3 (removed): forcing a camera entity update is no longer needed;
# the button-card custom_fields img tag plus the timestamp helper handles cache busting
- Click Save. Name the automation.
- Reload Automations: Go to Developer Tools → Server Management → YAML configuration reloading → Automations → Reload.
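To test the whole chain without waiting for a trigger, you can call the PyScript service manually from Developer Tools (Actions/Services tab, YAML mode). The values below are illustrative placeholders; replace the API key with your own:
# Developer Tools test call (example data only)
service: pyscript.generate_gemini_image
data:
  condition: sunny
  temperature: 21.5
  sun_state: above_horizon
  sun_elevation: 35.0
  api_key: YOUR_GOOGLE_API_KEY_HERE
Then watch the Home Assistant logs (Settings → System → Logs) for the PyScript log lines to confirm the image was generated and saved to /config/www.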
Step 4: Create the Lovelace Card
This uses custom:button-card from HACS to display the image dynamically (using the helper for cache busting) and handle the tap action correctly.
- Ensure button-card is installed via HACS (see Prerequisites).
- Go to your Lovelace dashboard, enter Edit Mode.
- Click + ADD CARD.
- Choose the “Manual” card type.
- Replace the content with the following YAML.
- IMPORTANT:
- Replace weather.forecast_home with your weather entity ID.
- Replace input_text.gemini_image_timestamp with your helper’s entity ID.
# Lovelace Card using custom:button-card
type: custom:button-card
# Entity used for the more-info tap action context
entity: weather.forecast_home #<-- YOUR WEATHER ENTITY ID
# Hide default button elements
show_name: false
show_icon: false
show_state: false
show_label: false
# Define the tap action for the whole card (opens more-info for the entity above)
tap_action:
action: more-info
# Style the card itself (remove padding, add border radius matching HA cards)
styles:
card:
- padding: "0px"
- border-radius: "var(--ha-card-border-radius, 12px)"
- overflow: "hidden" # Hide image corners if they don't fit the radius
# Use custom_fields to inject the HTML for the dynamic image
custom_fields:
# Define a field named 'img' (can be any name)
img: >
[[[
// Use button-card's JavaScript templating capability
// Get the state of the cache-busting helper entity
const cacheBuster = states['input_text.gemini_image_timestamp'].state; // <-- YOUR HELPER ENTITY ID
// Construct the img tag. Use Date.now() as a fallback cache value if helper state is briefly unavailable.
const cacheValue = cacheBuster || Date.now();
// The image path must match where PyScript saves the file (/config/www -> /local/)
// style ensures the image fills the card area
return `<img src="/local/gemini_pyscript_image.png?v=${cacheValue}" style="width: 100%; height: 100%; display: block; object-fit: cover;" alt="Weather Image">`;
]]]
- Click Save.
- Position the card and exit Edit Mode (Done).
Final Result & Next Steps
You should now have a fully functional, dynamic AI weather card! The automation triggers image generation based on time and weather changes, PyScript calls Gemini, the image is saved, the helper forces a cache update, and the button-card displays the latest image and handles clicks correctly.
Possible next steps:
- Fine-tune prompts: Adjust the subject, base_style, and weather_action_scene logic in the PyScript file for different artistic outcomes.
- Improve Error Handling: The script currently logs errors if Gemini refuses or fails. You could enhance the Lovelace card (perhaps using a conditional card or button-card state_filter) to show a default image or hide the card when gemini_pyscript_image.png hasn’t been updated recently or generation has failed; a minimal starting point is sketched after this list.
- Explore AppDaemon: If the API key security is a major concern, consider migrating this logic to AppDaemon for standard secret handling.
- Try Different Models: If the gemini-2.0-flash-exp… model’s safety filters are too restrictive, look for currently recommended stable image generation model names in the Google AI documentation and update the model_name and potentially the api_endpoint/payload in the script.
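For the error-handling idea above, one minimal (and admittedly crude) sketch is to wrap the button-card from Step 4 in a conditional card so it only renders once the timestamp helper has a value. It does not detect failed generations, but it is a starting point:
# Sketch: hide the card until the timestamp helper has been set
type: conditional
conditions:
  - condition: state
    entity: input_text.gemini_image_timestamp
    state_not: unknown
card:
  type: custom:button-card
  entity: weather.forecast_home
  # ... remaining button-card configuration from Step 4 ...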
Note on Temperature Argument:
You might notice that the automation passes the current temperature to the pyscript.generate_gemini_image service, and the script accepts it as an argument (temperature). However, in the final prompt generation logic provided (generated_prompt = f"{subject} {weather_action_scene} {time_of_day_desc}. {base_style}. Square aspect ratio."), the temperature variable is not directly used to modify the text sent to the Gemini API.
It was kept intentionally to allow for future flexibility. You could easily modify the Python if/elif block to incorporate temperature nuances into your prompts (e.g., adding phrases like “on a cold windy day”, “during a hot sunny afternoon”, “bundled up warmly in the snow”) without needing to change the automation’s service call again. For now, it provides context that isn’t visually rendered or used in the default prompt generation.
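As an illustration, a small optional addition to the prompt-building block could fold the temperature in. The thresholds and wording below are arbitrary examples (assuming °C) and not part of the tested script:
# Sketch: optional temperature flavour for the prompt (inside the try block)
temp_desc = ""
try:
    temp_value = float(temperature)
    if temp_value <= 0:
        temp_desc = ", bundled up warmly against the freezing cold"
    elif temp_value >= 30:
        temp_desc = ", looking for shade on a very hot day"
except (TypeError, ValueError):
    pass  # leave temp_desc empty if temperature is missing or not numeric
generated_prompt = f"{subject} {weather_action_scene} {time_of_day_desc}{temp_desc}. {base_style}. Square aspect ratio."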
Important Note on Gemini API Limits & Cost:
This tutorial utilizes the Google Generative Language API (specifically the generativelanguage.googleapis.com endpoint with the gemini-2.0-flash-exp-image-generation model during testing). The limits for generating images via this specific public API endpoint (e.g., requests per minute, images per day) using a standard API key are not clearly documented by Google at the time of writing, especially concerning free tiers or how usage relates to consumer subscriptions like Google One / Gemini Advanced.
While a Google One subscription provides access to Gemini Advanced features (usually through the chatbot interface), it does not necessarily guarantee unlimited or extensive free API usage for all models, particularly for computationally intensive tasks like image generation. I have such a subscription and was able to generate images via the API key, but the exact free quota and rate limits remain unclear; image generation also worked without any subscription. The official information I found suggests that limits may depend on usage frequency and intervals.
So users should be aware that:
- They might encounter undocumented rate limits (e.g., temporary errors after several requests) or quotas (e.g., images per day).
- Depending on the specific model used, API terms of service, and usage volume, costs might be incurred. This is more likely if using models explicitly designated as paid services, such as those typically accessed via the Vertex AI platform (like Imagen 3).
- It’s highly recommended to monitor your API usage and billing within your Google Cloud Console project associated with the API key.
- Consult the official Google Cloud pricing pages for the “Generative Language API” and “Vertex AI” for the most current information on potential costs and quotas.
Final Thoughts
This project was quite a journey, involving deep dives into PyScript’s execution environment, Gemini API behaviors (including its sometimes overzealous safety filters), and Lovelace caching intricacies. While the path wasn’t always straightforward, the final result – a dynamically updating AI-generated weather image – is quite rewarding.
I hope this detailed walkthrough sparks your interest and encourages you to try implementing this in your own Home Assistant setup! Your experience might differ based on your specific HA version, PyScript configuration, the Gemini models available to you, and especially the prompts you design. Experiment with the code, refine the prompts, and please share your results and any improvements with the Home Assistant community! Good luck!
Did I forget a screenshot?