Export entity names

Thanks!! One would think an export option would be available, but this solved my problem.


Hello @francisp,

Thanks a lot for your script; it is working well. Quick question: is there a way to erase the txt file before writing to it? At the moment the script is appending data, and for my purposes an erase-then-write process would be better.

Thanks again
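
(In case it helps anyone with the same question: assuming the script writes with the shell’s append redirection, switching the first write to a truncating redirection should be enough. A minimal sketch, not the actual script:)

# '>' truncates (erases) the file before writing; '>>' appends to it.
# "entities.txt" is a placeholder name, not the file from the original script.
echo "first entity" > entities.txt
echo "next entity" >> entities.txt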

You are a hero! Thanks!

The coolest way to get this overview. I just used File Editor to get the JSON. Before you can access it in the .config path, you must configure File Editor not to hide files and folders matching the pattern .storage:

(Screenshot: File Editor add-on configuration, 2023-08-21)
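
If you have shell access instead, copying the registry out of .storage avoids touching the File Editor settings at all. A sketch, assuming a standard install with the config directory at /config:

# Copy the registry to a location File Editor shows by default.
cp /config/.storage/core.entity_registry /config/core.entity_registry.json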


I did it even easier 😛

I added HA as a network drive with Samba Share and pulled the JSON directly into Excel; now I have a table view in Excel and it’s always up to date.

Thanks for your tip on where to find it.

Edit:

To import this JSON data into Excel, follow these steps:

  1. Open Excel: Start by opening Microsoft Excel.
  2. Access Power Query:
  • Go to the “Data” tab.
  • Click on “Get Data” > “From File” > “From JSON”.
  • Navigate to where you’ve stored the JSON file and select it.
  3. Transform Data (if necessary):
  • Excel will use Power Query to interpret the JSON file. Once loaded, Power Query Editor will open.
  • You may see the JSON organized into columns and rows. If the data is nested (like the “entities” array), you’ll need to expand this column by clicking the expand button next to the column header (see the jq sketch after these steps for the file’s shape).
  • Choose the columns you wish to include or perform any transformations needed (like filtering, sorting, or changing data types).
  4. Load Data:
  • After setting up your data, click “Close & Load” to load the transformed data into an Excel worksheet.
  5. Automatic Refresh Setup:
  • To set up automatic data refreshes, go back to the “Data” tab.
  • Click on “Queries & Connections”.
  • Right-click your query in the side pane and select “Properties”.
  • In the “Query Properties” dialog, enable “Refresh data when opening the file”.
  • Optionally, set a refresh interval under “Refresh every X minutes” to update data at regular intervals.
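
For reference, you can peek at the file’s shape with jq before importing it. This assumes the registry was copied out as described earlier; the key names match the script further down:

# The top-level keys under .data — "entities" is the column to expand in Power Query.
jq '.data | keys' core.entity_registry.json

# Preview one entity object to see which columns you'll get after expanding.
jq '.data.entities[0]' core.entity_registry.json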

I’ve worked on this script (it parses core.entity_registry):

#!/bin/bash

# Input JSON file
input_file="core.entity_registry.json"

# Output CSV files
output_entities="entities.csv"
output_deleted_entities="deleted_entities.csv"
output_entities_filtered="entities_filtered.csv"
output_deleted_entities_filtered="deleted_entities_filtered.csv"

# Check if jq is installed
if ! command -v jq &> /dev/null
then
    echo "jq could not be found. Please install jq to use this script."
    exit 1
fi

# Function to convert JSON array to CSV
convert_to_csv() {
    local json_array="$1"
    local output_file="$2"
    local filter="$3"

    if [ -z "$filter" ]; then
        # Extract headers from the first object in the JSON array
        headers=$(jq -r "$json_array[0] | keys_unsorted | @csv" "$input_file")

        # Extract the data for each object
        data=$(jq -r "$json_array[] | map(tostring) | @csv" "$input_file")
    else
        # Extract headers from the first object in the JSON array based on filter
        headers=$(jq -r "$json_array[0] | {device_id, entity_id, id, original_name, platform} | keys_unsorted | @csv" "$input_file")

        # Extract the filtered data for each object
        data=$(jq -r "$json_array[] | {device_id, entity_id, id, original_name, platform} | map(tostring) | @csv" "$input_file")
    fi

    # Write headers to the CSV file
    echo "$headers" > "$output_file"

    # Append data to the CSV file
    echo "$data" >> "$output_file"
}

# Convert "entities" to CSV
convert_to_csv '.data.entities' "$output_entities"

# Convert "deleted_entities" to CSV
convert_to_csv '.data.deleted_entities' "$output_deleted_entities"

# Convert "entities" to filtered CSV with selected columns
convert_to_csv '.data.entities' "$output_entities_filtered" "filtered"

# Convert "deleted_entities" to filtered CSV with selected columns
convert_to_csv '.data.deleted_entities' "$output_deleted_entities_filtered" "filtered"


# Base directory for the output
base_dir="hass"

# Create the base directory if it doesn't exist
mkdir -p "$base_dir"

# Get the list of unique platforms
platforms=$(jq -r '.data.entities[] | .platform' "$input_file" | sort | uniq)

# Loop over each platform
for platform in $platforms
do
    # Create a directory for the platform
    platform_dir="$base_dir/$platform"
    mkdir -p "$platform_dir"

    # Get the list of unique types based on entity_id (before the first dot)
    types=$(jq -r --arg platform "$platform" '.data.entities[] | select(.platform == $platform and (.entity_id | type == "string")) | .entity_id | split(".")[0]' "$input_file" | sort | uniq)

    # Loop over each type to create a CSV
    for type in $types
    do
        # Output CSV file for the current type
        output_file="$platform_dir/${type}.csv"

        # Headers for the filtered columns
        headers="device_id,entity_id,id,original_name,platform"

        # Extract data for the current type (exact domain match, so a type like
        # "light" cannot accidentally match a longer domain that starts with it)
        data=$(jq -r --arg platform "$platform" --arg type "$type" '.data.entities[] | select(.platform == $platform and (.entity_id | type == "string") and ((.entity_id | split(".")[0]) == $type)) | {device_id, entity_id, id, original_name, platform} | map(tostring) | @csv' "$input_file")

        # Write headers to the CSV file
        echo "$headers" > "$output_file"

        # Append data to the CSV file
        echo "$data" >> "$output_file"

        echo "Generated $output_file"
    done
done

echo "CSV files have been organized by platform and type."
echo "Conversion completed. The CSV files have been saved."

At the end, you’ll have 4 CSVs and a hass directory with entities grouped by platform, then by “type” (the part of the entity_id before the first dot).
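
A usage sketch, assuming shell access; “export_entities.sh” is a placeholder name for the script above:

# Copy the registry out of .storage under the name the script expects, then run it.
cp /config/.storage/core.entity_registry core.entity_registry.json
bash export_entities.sh
ls entities.csv entities_filtered.csv deleted_entities.csv deleted_entities_filtered.csv hass/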


Brilliant! Thanks for sharing this; it’s much easier to track using Power Query, and it makes an easy reference for all my entities!

I took a less sophisticated approach: simply copy and paste into Excel, then Text-to-Columns, then delete the columns I didn’t need.

I did a copy and paste from Excel into a file in Pulsar (a text editor).
Now that Pulsar knows all of my entities, I can edit YAML much faster than by constantly navigating to the Studio Code Server add-on.

True, I need to maintain the list, but for now this was a good starting point for continuing work on a dashboard.