2023.7: Responding services

You can… just make a custom event template sensor and pass the data to the attributes.

What about for people like me who don’t use USB capture?

I use a screen grabber as Hyperion runs on my TV box.

Same principle applies if it works the same way as USB capture, i.e. the 3rd light.

Is there a way of turning the gauge off please or adjusting how it calculates?

No. If you want a different calculation you will have to create your own template sensor and gauge.
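
As a rough illustration of that route (the two source sensors and the self-sufficiency formula below are made-up placeholders, so substitute your own entities and calculation):

template:
  - sensor:
      - name: "Self sufficiency"
        unit_of_measurement: "%"
        state: >-
          {% set grid = states('sensor.grid_consumed_energy') | float(0) %}
          {% set solar = states('sensor.solar_produced_energy') | float(0) %}
          {{ ((100 * solar / (grid + solar)) | round(1)) if (grid + solar) > 0 else 0 }}

and a gauge card pointing at it:

type: gauge
entity: sensor.self_sufficiency
min: 0
max: 100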

Custom weather state descriptions, meteoalerts, local community feeds, etc., provided not by an integration but by REST sensors or scraping (local garbage collection).
The case I’m working on right now: asking OpenAI for descriptive information about the played artist/album/track, to be displayed in a custom media dashboard. BTW Taras, this is the same case we discussed here; I’m now trying to move it to a native HA solution, as @jjbankert’s ChatGPT integration is being discontinued.

You can still do this though, same principle in the blog post for the calendar event template sensor.
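
For context, that principle looks roughly like this: a trigger-based template sensor runs calendar.list_events as an action and keeps the response in its attributes. This is a sketch, not the exact blog-post code; calendar.example_calendar, the hourly trigger, and the 24-hour window are placeholders.

template:
  - trigger:
      - platform: time_pattern
        hours: /1
    action:
      - service: calendar.list_events
        target:
          entity_id: calendar.example_calendar
        data:
          duration:
            hours: 24
        response_variable: agenda
    sensor:
      - name: Example calendar events
        state: "{{ agenda.events | count }}"
        attributes:
          events: "{{ agenda.events }}"

The same pattern applies to conversation.process: call the service in an action, then template the response into the sensor's attributes.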

Sorry @petro, I do not get it… I tried, but I do not understand how to use the variable data from the service call to set an attribute… For the state it would work, but the text is way longer than 255 characters.

So here is the service call I make:

service: conversation.process
data:
  agent_id: 9794fb3fee4a1e2220e2f47a524fce93
  text: Tell me about Pink Floyd's album Animals in less than 100 words
response_variable: openai_test_response

and the response received:

response:
  speech:
    plain:
      speech: >-
        Pink Floyd's album Animals, released in 1977, is a concept album that
        critiques society through the lens of animal metaphors. Divided into
        three tracks, "Pigs on the Wing" bookends the album, while "Dogs," "Pigs
        (Three Different Ones)," and "Sheep" form the core. The album explores
        themes of power, greed, and conformity, with each animal representing
        different aspects of society. The music combines progressive rock with
        elements of blues and psychedelic rock, featuring intricate guitar work,
        atmospheric keyboards, and thought-provoking lyrics. Animals is regarded
        as one of Pink Floyd's most politically charged and musically ambitious
        albums.
      extra_data: null
  card: {}
  language: en-GB
  response_type: action_done
  data:
    targets: []
    success: []
    failed: []
conversation_id: 01H4TW6ZBB0ZF7P797EMM2DDXF

What comes next?

after,

service: conversation.process
data:
  agent_id: 9194fb3fee4a1e2220e2f47a524fce92
  text: Tell me about Pink Floyd's album Animals in less than 100 words
response_variable: openai_test_response

add a custom event

- event: openai_response
  event_data:
    response: "{{ openai_test_response.response.speech.plain.speech  }}"

Then make a template sensor

template:
- trigger:
  - platform: event
    event_type: openai_response
  sensor:
  - name: Response
    device_class: timestamp
    state: "{{ now() }}"
    attributes:
      response: "{{ trigger.event.data.response }}"
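
To actually show that attribute somewhere, a Markdown card works, for example (the sensor name follows the example above; adjust it to your own entity):

type: markdown
content: "{{ state_attr('sensor.response', 'response') }}"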

There are currently two service calls that produce a response_variable. How are you planning to use them for the applications you listed? I’m interested to learn how you will be employing this new functionality.


Regarding the example you posted, what initiates this service call?

service: conversation.process
data:
  agent_id: 9194fb3fee4a1e2220e2f47a524fce92
  text: Tell me about Pink Floyd's album Animals in less than 100 words
response_variable: openai_test_response

I’m loving this release! I’ve been playing around with ChatGPT responses today and it’s working great for me, as is the calendar.list_events service.

I have run into one limitation though: the calendar.list_events service won’t return a response if multiple calendar entities are passed to it.

Failed to call service calendar.list_events. Service call requested response data but matched more than one entity

I’d love to be able to query multiple calendars at once to get a combined agenda for my day; for example, my work calendar, personal calendar, and the one I share with my partner. Is this something on the roadmap for this service, or is it only possible to support one calendar at a time?

Just process them all with different variable names and combine the data after the last call.
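
A rough sketch of that approach, assuming hypothetical calendar.work and calendar.personal entities and a 24-hour window (extend with as many calendars as you need): each call gets its own response_variable and the lists are merged afterwards, here into a custom event that a trigger-based template sensor could consume.

- alias: Combined agenda
  trigger:
    - platform: time
      at: "06:00:00"
  action:
    - service: calendar.list_events
      target:
        entity_id: calendar.work
      data:
        duration:
          hours: 24
      response_variable: work_agenda
    - service: calendar.list_events
      target:
        entity_id: calendar.personal
      data:
        duration:
          hours: 24
      response_variable: personal_agenda
    # merge the two event lists once the last call has finished
    - event: combined_agenda
      event_data:
        events: "{{ work_agenda.events + personal_agenda.events }}"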

Same for me. Since the last update I have an issue with MariaDB. Is there any breaking change I missed?

Hm… one step closer. It seems that, as @petro suggested, adding a custom event section might be the solution… but the automation still does not register, throwing a conversation service call error. Here is the full code for this automation (it is a bit more complex, as it serves several different media_players at the same time, so I had to use some additional variables):

  - id: 'update_media_info'
    alias: Update Media Info
    initial_state: true
    variables:
      player: >-
        {{ trigger.entity_id }}
      artist: >-
        {% set string=trigger.entity_id %}
        {% set result=state_attr(string, 'media_artist') %}
        {{ result }}
      album: >-
        {% set string=trigger.entity_id %}
        {% set result=state_attr(string, 'media_album_name') %}
        {{ result }}
      output: >-
        {% set string=trigger.entity_id %}
        {% set string=string.split('.') %}
        {% set result="input_text." + string[1] + "_media_info" %}
        {{ result }}
    trigger:
    - platform: state
      entity_id:
        - media_player.audiocast
        - media_player.denon_heos_s750h
        - media_player.marantz_sacd30n
        - media_player.volumio_2
      attribute: media_album_name
    condition: []
    action:
      - service: conversation.process
        data:
          agent_id: 9194fb3fee4a1e2220e2f47a524fce92
          text: Tell me about {{ artist }}'s album {{ album }} in less than 100 words
        response_variable: chatgpt
        event: >-
          {{ trigger.entity_id }}_response
        event_data:
          response: "{{chatgpt.response.speech.plain.speech | trim | replace('\"','')}}"

The error I receive in the log seems to be related to the actual service call, so this part:

    action:
      - service: conversation.process
        data:
          agent_id: 9794fb3fee4a1e2220e2f47a524fce93
          text: Tell me about {{ artist }}'s album {{ album }} in less than 100 words
        response_variable: chatgpt

And the error itself:

2023-07-08 16:44:49.635 ERROR (MainThread) [homeassistant.components.automation] Automation with alias 'Update Media Info' could not be validated and has been disabled: extra keys not allowed @ data['action'][0]['data']. Got {'agent_id': '9194fb3fee4a1e2220e2f47a524fce92', 'text': "Tell me about {{ artist }}'s album {{ album }} in less than 100 words"}
extra keys not allowed @ data['action'][0]['response_variable']. Got 'chatgpt'
extra keys not allowed @ data['action'][0]['service']. Got 'conversation.process'

The configuration check does not indicate any errors; the error only appears when the automation is executed.

Then the sensor to consume the response looks like this (for one of the media players, volumio_2 in this case):

template:
  - trigger:
      - platform: event
        event_type: media_player.volumio_2_response
      - platform: state
        entity_id:
          - media_player.volumio_2
        attribute: media_title
    sensor:
      - name: volumio_2_album_description
        state: 'on'
        attributes:
          album_description: >
            {% if trigger.platform == 'event' %}
              {{ trigger.event.data.response }}
            {% else %}
              {{ this.attributes.album_description | default('') }}
            {% endif %}   
          album_title: "{{ state_attr('media_player.volumio_2', 'media_album_name') }}"
          artist_name: "{{ state_attr('media_player.volumio_2', 'media_artist') }}"
          song_title: "{{ state_attr('media_player.volumio_2', 'media_title') }}"
          album_art: "{{ state_attr('media_player.volumio_2', 'entity_picture') }}"

But it is yet to be tested, as it is not triggered due to the error in the automation…

You aren’t separating your conversation.process service call from the event call. Add a - in front of event.


That’s a pity; I have no idea how to do a gauge. Is there a card that does that?

Mind you, still wouldn’t be able to change the default energy dashboard.

Edit: never mind, I can see the gauge card.

OK, better now: the automation does not throw any errors and executes… but it seems event: cannot be templated… Here is the output from the traces for this automation:

Executed: 8 July 2023, 20:00:06
Result:
event: '{{ trigger.entity_id }}_response'
event_data:
  response: >-
    Cantoma's self-titled album, Cantoma, is a captivating musical journey that
    combines elements of world music, downtempo electronica, and Balearic beats.
    Released in 2003, the album features intricate layers of lush
    instrumentation, soothing melodies, and hypnotic rhythms. With its dreamy
    and atmospheric soundscapes, Cantoma creates a serene and exotic ambiance
    that transports listeners to faraway destinations. Each track on the album
    is meticulously crafted, showcasing Cantoma's talent for blending organic
    and electronic sounds seamlessly. From the enchanting vocals to the
    intricate production, Cantoma's self-titled album is a timeless and
    immersive musical experience.

It should contain event: media_player.player_name_response instead of event: {{ trigger.entity_id }}_response

Move the entity_id out of the event’s name and into the event’s data.

    action:
      - service: conversation.process
        data:
          agent_id: 9194fb3fee4a1e2220e2f47a524fce92
          text: "Tell me about {{ artist }}'s album {{ album }} in less than 100 words"
        response_variable: chatgpt
      - event: chatgpt_response
        event_data:
          entity_id: "{{ trigger.entity_id }}"
          response: "{{chatgpt.response.speech.plain.speech | trim | replace('\"','')}}"

Change the Event Trigger to listen for the “chatgpt_response” event and for a specific value of entity_id.

template:
  - trigger:
      - platform: event
        event_type: chatgpt_response
        event_data:
          entity_id: media_player.volumio_2
      - platform: state
        entity_id:
          - media_player.volumio_2
        attribute: media_title
    sensor:
      - name: volumio_2_album_description
       ... etc ...

More in general, I would really appreciate a separate topic where we can share ideas and implementations of the new functionality, which, I must confess, is still very fuzzy to me.

I don’t have many of those conversation agents installed (I only use the Cloud Agent, and GA connected to HA), as I feel they are not adding to the experience just yet and I still meet a lot of “I am sorry, I don’t understand” (who’s afraid of AI here…).

Maybe the new responding services, which have been marketed as the next big thing without a lot of further suggestions to go on, can bridge that gap.

I hope someone with a good and useful real-life example can open a topic posting it, and have others follow suit so we can all benefit from it.

BTW, can we filter the ever-so-present Shelly errors

Error fetching shellyplug-s-A75005 data: Error fetching data: DeviceConnectionError()

for all devices, which we have been seeing since this release, with

  homeassistant.components.shelly:
    - DeviceConnectionError()

? I am never sure which part of that error we need to pick.

Regarding the Shelly errors you can define a filter for the component throwing the error, e.g.:

logger:
  filters:
    homeassistant.components.shelly:
      - "Error fetching .* data:*"