Export entity names

Cool. I’ll try that. Before I do… if it goes FUBAR, is there a way to restart HASS without just pulling the plug on the Pi if I don’t have access to the UI? SSH seems to be disabled.

If you are running Home Assistant OS or Home Assistant Supervised, just install the Web Terminal and SSH add-on. Don’t forget to enable port 22.
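
For the original question: once SSH access is up, Home Assistant OS ships the ha CLI, so Core can be restarted without pulling the plug (this is standard on HAOS; on Supervised the CLI may need installing separately):

# From an SSH session on the Home Assistant host:
ha core restart    # restart Home Assistant Core only
# ha host reboot   # reboot the whole machine, if it comes to that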


Perfect! That’s done the trick, thank you!

I would also like a list of entities. Where do I place that information?

edit: I figured it out - it goes under notify: in configuration.yaml, as per the File notification docs (File - Home Assistant)

So I had it set as

notify:
  - platform: file
    name: entity_log
    filename: /config/www/entity_log.txt
    timestamp: false

This has stopped working for me, anyone else?

I’m getting this when trying to call the service…

Failed to call service notify/entity_log. required key not provided @ data['message']

I used this command in the developer tools and it worked with no error.
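
For context, that error just means the service was called without the required message field; a minimal call from Developer Tools > Services (assuming the notify platform named entity_log above) looks like:

service: notify.entity_log
data:
  message: test entry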


This script will work without the error.

print_entities_to_file:
  alias: Print Entities To File
  sequence:
  - service: notify.entity_log
    data:
      message: |
        {% for state in states %}
          - {{- state.entity_id -}}
        {% endfor %}
  mode: single
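
After reloading scripts (or restarting), it can be triggered from Developer Tools > Services, with the service name derived from the script key above:

service: script.print_entities_to_file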

That worked, thanks!

Hi,
I’m trying to export the info that you can find when clicking under “entities” in configuration.

In other words, a table file with
Name, Entity_id, Integration, Area, Status

Sadly, that web page is not exportable or selectable, not even printable.

Is that possible?

I cannot find how to retrieve the “integration” info, such as “Deconz”, “Tuya”, etc.

{% for state in states %}
{{ state.domain, state.entity_id, state.object_id, state.name }}
{%- endfor -%}
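
Most of those columns can be produced in a template; here is a sketch covering Name, Entity_id, Area, and Status using the area_name() helper (the Integration column is exactly the part that is missing, per the edit below):

{% for s in states %}
{{ s.name }},{{ s.entity_id }},{{ area_name(s.entity_id) or '' }},{{ s.state }}
{%- endfor %}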

Edit: after discussing on Discord, it seems this is simply not possible using templates, as the data is not available there.

I solved my issue as described in my next post below.


I’m almost scared to explain how I did this in 30 seconds (since I didn’t use code but brute-forced it somehow)… but

  • Open the file \config\.storage\core.entity_registry (I’m on rpi4 / Hassio supervised - did this via Samba)
  • Copy all the text and paste it into this random JSON-to-table converter

Voilà. A beautiful table with everything possible 🙂

Edit: This is the list of data fields available in this file:

entity_id
config_entry_id
device_id
area_id
unique_id
platform
name
icon
disabled_by
supported_features
device_class
unit_of_measurement
original_name
original_icon
capabilities/hvac_modes/0
capabilities/hvac_modes/1
capabilities/max_temp
capabilities/min_temp
capabilities/preset_modes/0
capabilities/preset_modes/1
capabilities/preset_modes/2
capabilities/preset_modes/3
capabilities/preset_modes/4
capabilities/source_list/0
capabilities/source_list/1
capabilities/min_mireds
capabilities/max_mireds
capabilities/effect_list/0
capabilities
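
For anyone who would rather not paste the registry into an online converter, the same table can be built locally with jq (a sketch, assuming jq is installed; the field names are the ones listed above):

jq -r '.data.entities[] | [.entity_id, .platform, .area_id, .device_id, .original_name] | @csv' core.entity_registry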


This is fantastic, just what I was looking for. Thanks so much!

This worked so well that I want to do the same for devices rather than entities. I tried changing it around like so:

In the config:

notify:
  - platform: file
    name: device_log
    filename: /config/www/device_log.txt
    timestamp: false

script:

print_devices_to_file:
  alias: '2.) Print Devices To File'
  sequence:
  - service: notify.device_log
    data_template:
      message: >
        {% for state in states %}
          - {{- state.device_id -}}
        {% endfor %}

but all I received was a list of dashes. I am assuming this section:

{% for state in states %}
  - {{- state.device_id -}}
{% endfor %}

is incorrect, but I’m not sure what to use instead? Can anyone assist?
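
For what it’s worth, template state objects have no device_id attribute, which is why only the dashes survive the loop. One possible fix, sketched with the device_id() and device_attr() template helpers available in current Home Assistant releases:

{% for state in states %}
  {% set dev = device_id(state.entity_id) %}
  {% if dev %}- {{ dev }} ({{ device_attr(dev, 'name') }}){% endif %}
{% endfor %}

Note this lists one entry per entity, so devices with several entities will appear several times.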

There is a much simpler solution.

Simply navigate to Dev tools >> Template and paste the template below:

        {% for state in states %}
          - {{- state.entity_id -}}
          ^ {{- state.name -}}
        {% endfor %}

The result will be shown on the right side. You can copy-paste it into Excel, use Text to Columns to split on the ^ symbol for ID and Name, and Find & Replace to remove the leading - on the entity. Boom - you will have a neat Excel sheet in less than a minute.
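
A small variant of the same idea skips the Text to Columns step: have the template emit comma-separated values directly, then save the output as a .csv (a sketch; names containing commas would still need quoting):

{% for state in states %}
{{ state.entity_id }},{{ state.name }}
{%- endfor %}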


Another option…

With SSH set up, it’s straightforward to access the entities from a terminal on your PC. At home I’m using Linux (Fedora) - any Linux would do. For example:

{kiat@fedora:~ ?} ssh ha "cat /config/.storage/core.entity_registry | awk '\$1 ~ /entity_id/ {print \$2}' | sed 's/\(\"\|,\)//g' | sort | egrep ^switch" | tail
switch.wled_island_sync_send
switch.wled_porch_nightlight
switch.wled_porch_reverse
switch.wled_porch_sync_receive
switch.wled_porch_sync_send
switch.wled_utility_nightlight
switch.wled_utility_reverse
switch.wled_utility_sync_receive
switch.wled_utility_sync_send
switch.zbbridge

and

{kiat@fedora:~ ?} ssh ha "cat /config/.storage/core.entity_registry | awk '\$1 ~ /entity_id/ {print \$2}' | sed 's/\(\"\|,\)//g' | sort | egrep ^binary_sensor" | tail
binary_sensor.watersensor_wc_water_leak
binary_sensor.wled_bench_firmware
binary_sensor.wled_hall_firmware
binary_sensor.wled_island_firmware
binary_sensor.wled_porch_firmware
binary_sensor.wled_utility_firmware
binary_sensor.yard_all_occupancy
binary_sensor.yard_person_occupancy
binary_sensor.zigbee2mqtt_running
binary_sensor.zigbee2mqtt_update_available

where I’m using “tail” to truncate the output, but any preferred view of the data is available by tweaking the “awk” and “sed” commands.
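
If jq is installed, the same extraction is less fragile than pattern-matching the pretty-printed JSON (a sketch against the registry layout used elsewhere in this thread, with the same ssh alias):

{kiat@fedora:~ ?} ssh ha "jq -r '.data.entities[].entity_id' /config/.storage/core.entity_registry" | sort | grep '^switch'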

Thanks!! One would think an export option would be available, but this solved my problem.


Hello @francisp,

Thanks a lot for your script. It is working well. Quick question: is there a way to erase the txt file before writing to it? At the moment the script appends data, and for my purposes an erase/write process would be better.

Thanks again
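
One possible workaround (a sketch, since the file notify platform itself only appends): define a shell_command that empties the file, and call it as the first step of the script.

shell_command:
  # assumes truncate exists in the container; ": > file" is a shell fallback
  clear_entity_log: "truncate -s 0 /config/www/entity_log.txt"

Then in the script’s sequence, call shell_command.clear_entity_log before notify.entity_log.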

You are a hero! Thanks!

The coolest way to get this overview. I just used File Editor to get the JSON. Before you can access it under the /config path, you must configure File Editor to not hide files and folders matching the pattern .storage:

(screenshot of the File Editor add-on configuration)


I did it even more easily 😛

I added HA as a network drive with Samba Share and pulled the JSON directly into Excel; now I have a table view in Excel and it’s always up to date.

Thanks for your tip on where to find it.

Edit:

To import this JSON data into Excel, follow these steps:

  1. Open Excel: Start by opening Microsoft Excel.
  2. Access Power Query:
  • Go to the “Data” tab.
  • Click on “Get Data” > “From File” > “From JSON”.
  • Navigate to where you’ve stored the JSON file and select it.
  3. Transform Data (if necessary):
  • Excel will use Power Query to interpret the JSON file. Once loaded, Power Query Editor will open.
  • You may see the JSON organized into columns and rows. If the data is nested (like the “entities” array), you’ll need to expand this column by clicking the expand button next to the column header.
  • Choose the columns you wish to include or perform any transformations needed (like filtering, sorting, changing data types).
  4. Load Data:
  • After setting up your data, click “Close & Load” to load the transformed data into an Excel worksheet.
  5. Automatic Refresh Setup:
  • To set automatic data refreshes, go back to the “Data” tab.
  • Click on “Queries & Connections”.
  • Right-click your query in the side pane and select “Properties”.
  • In the “Query Properties” dialog, enable “Refresh data when opening the file”.
  • Optionally, set a refresh interval under “Refresh every X minutes” to update data at regular intervals.

I’ve worked on this script (parsing the core.entity_registry):

#!/bin/bash

# Input JSON file
input_file="core.entity_registry.json"

# Output CSV files
output_entities="entities.csv"
output_deleted_entities="deleted_entities.csv"
output_entities_filtered="entities_filtered.csv"
output_deleted_entities_filtered="deleted_entities_filtered.csv"

# Check if jq is installed
if ! command -v jq &> /dev/null
then
    echo "jq could not be found. Please install jq to use this script."
    exit 1
fi

# Function to convert JSON array to CSV
convert_to_csv() {
    local json_array="$1"
    local output_file="$2"
    local filter="$3"

    if [ -z "$filter" ]; then
        # Extract headers from the first object in the JSON array
        headers=$(jq -r "$json_array[0] | keys_unsorted | @csv" "$input_file")

        # Extract the data for each object
        data=$(jq -r "$json_array[] | map(tostring) | @csv" "$input_file")
    else
        # Extract headers from the first object in the JSON array based on filter
        headers=$(jq -r "$json_array[0] | {device_id, entity_id, id, original_name, platform} | keys_unsorted | @csv" "$input_file")

        # Extract the filtered data for each object
        data=$(jq -r "$json_array[] | {device_id, entity_id, id, original_name, platform} | map(tostring) | @csv" "$input_file")
    fi

    # Write headers to the CSV file
    echo "$headers" > "$output_file"

    # Append data to the CSV file
    echo "$data" >> "$output_file"
}

# Convert "entities" to CSV
convert_to_csv '.data.entities' "$output_entities"

# Convert "deleted_entities" to CSV
convert_to_csv '.data.deleted_entities' "$output_deleted_entities"

# Convert "entities" to filtered CSV with selected columns
convert_to_csv '.data.entities' "$output_entities_filtered" "filtered"

# Convert "deleted_entities" to filtered CSV with selected columns
convert_to_csv '.data.deleted_entities' "$output_deleted_entities_filtered" "filtered"


# Base directory for the output
base_dir="hass"


# Create the base directory if it doesn't exist
mkdir -p "$base_dir"

# Get the list of unique platforms
platforms=$(jq -r '.data.entities[] | .platform' "$input_file" | sort | uniq)

# Loop over each platform
for platform in $platforms
do
    # Create a directory for the platform
    platform_dir="$base_dir/$platform"
    mkdir -p "$platform_dir"

    # Get the list of unique types based on entity_id (before the first dot)
    types=$(jq -r --arg platform "$platform" '.data.entities[] | select(.platform == $platform and (.entity_id | type == "string")) | .entity_id | split(".")[0]' "$input_file" | sort | uniq)

    # Loop over each type to create a CSV
    for type in $types
    do
        # Output CSV file for the current type
        output_file="$platform_dir/${type}.csv"

        # Headers for the filtered columns
        headers="device_id,entity_id,id,original_name,platform"

        # Extract data for the current type
        data=$(jq -r --arg platform "$platform" --arg type "$type" '.data.entities[] | select(.platform == $platform and (.entity_id | type == "string") and (.entity_id | startswith($type))) | {device_id, entity_id, id, original_name, platform} | map(tostring) | @csv' "$input_file")

        # Write headers to the CSV file
        echo "$headers" > "$output_file"

        # Append data to the CSV file
        echo "$data" >> "$output_file"

        echo "Generated $output_file"
    done
done

echo "CSV files have been organized by platform and type."
echo "Conversion completed. The CSV files have been saved."

At the end, you’ll have four CSVs plus a hass/ directory with entities grouped by platform, then by “type”.
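
A possible way to run it (a sketch; the “ha” SSH alias and the script filename are illustrative):

scp ha:/config/.storage/core.entity_registry core.entity_registry.json
bash export_registry.sh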