Forked-daapd and media player services

OK, got around to thinking this through. I agree, a script to call the media player turn on/off service for each zone would handle the “grouping”. Better yet would be a scene of sorts. Scenes already have a service to save (snapshot) the current state, and restoring the snapshot is as simple as calling the scene turn on (or activate) command.

So going back to how forked-daapd works: any TTS input has to be directed to the “main” media player, but which zones are turned on determines where it actually plays. So couldn’t that just be an HA script that does the following?

  1. Snapshot the current setup (currently playing media/queue status, which media players are on, and their volumes).
  2. Pause playback.
  3. Turn on the selected media players (zones) and set their volume.
  4. Play the TTS message to the main media player (ignoring volume unless specified, in which case set all media players to that volume).
  5. Restore the snapshot and resume playback.

That’s essentially what the TTS script I have for my Sonos does, the difference being that with forked-daapd I send the TTS to a main media player.
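The flow described above could be sketched as an HA script today using the existing scene.create snapshot service. This is a rough sketch only — the zone entity names and the TTS platform are assumptions for illustration:

```yaml
script:
  daapd_tts_announce:
    sequence:
      # 1. Snapshot the current state of the main player and zones
      - service: scene.create
        data:
          scene_id: daapd_before_tts
          snapshot_entities:
            - media_player.forked_daapd_server
            - media_player.forked_daapd_output_kitchen
            - media_player.forked_daapd_output_living_room
      # 2. Pause current playback
      - service: media_player.media_pause
        entity_id: media_player.forked_daapd_server
      # 3. Turn on the selected zones and set their volume
      - service: media_player.turn_on
        entity_id: media_player.forked_daapd_output_kitchen
      - service: media_player.volume_set
        data:
          entity_id: media_player.forked_daapd_output_kitchen
          volume_level: 0.5
      # 4. Send the TTS message to the main media player
      - service: tts.google_translate_say
        data:
          entity_id: media_player.forked_daapd_server
          message: "Dinner is ready"
      # 5. Restore the snapshot
      - service: scene.turn_on
        entity_id: scene.daapd_before_tts
```

Two caveats: scene.create only snapshots entity state (power, volume), not the play queue itself, and the TTS call returns before playback finishes, so a delay or wait would likely be needed before the restore step.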

From what I just described, I don’t see a need for any additional service except a scene snapshot/restore. Almost everything is handled by existing media player component services. The only open question is how to handle a media player call without an accompanying volume set.

I run HA in Docker. I can install a custom component through the custom_components folder, but I don’t think I can install one with pip — at least I’ve never done it through pip or used pip. I’m not too worried about the media browser side of things right now.

Sorry if I repeated anything, duplicated what was already said, or I’m just not getting it and way off base with what we’re talking about.

I think you just described the same thing I described.
The current TTS implementation does its own snapshotting internally for restoring later. The issue is that between snapshotting and restoring it globally replaces the current volume with whatever the default TTS volume is set to (the default is 80%). Having a snapshot save/restore feature and being able to send the snapshot along with the TTS call should provide sufficient functionality.
A few more questions for you:

  1. A scene would be the same thing as a snapshot, right?
  2. Would we provide a limited number of slots or allow the snapshots/scenes to be saved with names?
  3. Should we expose those snapshots/scenes as “Sound modes” so we can switch between them using the media player interface?

As for trying out the new component feature, don’t worry about it yet. After some initial feedback from @davidlb I can push the updated pip library and bump the version used by HA. Then testing the custom component won’t require a custom pip.
  1. Yes. I just used the existing ability to create scenes as an example.
  2. I don’t think it’s necessary to save the snapshot beyond the TTS call. I think saving the scene as a sound config would be a bonus feature. I was thinking more along the lines of saving the config and holding it for restoring until something overwrites it.
  3. Hmm, I haven’t thought of that one. The existing media player component on the front end just has features pertaining to media. I guess I’d need to know a little more about how the sound modes would be displayed on the front end.

The snapshots would need to be saved to be recalled or sent to the TTS service. Say you have 8 zones but are currently only playing something on zones 1 and 2. The current TTS implementation already saves the existing player state as a snapshot before and restores it after the TTS is done. The issue is what configuration of zones to play for the TTS call itself. You might not want it to play on the exact same zones and volumes as is currently playing. In our example, maybe you want to play the TTS on all 8 zones. For that you’d need to have that information (the 8 zones and their associated volumes) stored as a snapshot.
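Until such a snapshot feature exists, one way to keep an “announce on all zones” configuration stored is a plain HA scene, activated right before the TTS call. The zone entity names here are assumptions:

```yaml
scene:
  - name: tts_all_zones
    entities:
      media_player.forked_daapd_output_zone1:
        state: "on"
        volume_level: 0.6
      media_player.forked_daapd_output_zone2:
        state: "on"
        volume_level: 0.6
      # ...repeat for zones 3-8 at whatever volumes make sense
```

Calling scene.turn_on on this before the TTS call approximates sending a stored zone/volume snapshot along with it; the TTS implementation’s own internal snapshot would still restore the original state afterwards.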

Just for reference, I also have the 80% volume and delay problem with TTS. Other than that, everything seems to be working great.
I wanted to try forked-daapd because the current integration for Denon receivers does not allow direct TTS; going through forked-daapd I can circumvent that.

The 80% volume is not a bug; it was designed that way. If you want to change the level for now, you can go into the integration options and set the TTS volume to your desired level. It’s still a fixed level across all the zones, though. I think I will definitely change from this fixed TTS volume to a snapshot method to be more flexible going forward.
For the delay problem, can you tell me when the delay comes? Is it before the text is spoken or after? Does the system eventually return to normal? Do you happen to be changing the volume or other settings while the TTS call is processing?

thx @uvjustin for explaining the integration setting! ( I overlooked that )

So the delay is before the actual text is spoken. I enter the text in the media player TTS field and press the play button, then it takes half a minute before the text is spoken. Afterwards the system is reset to normal, which is a good thing by the way :slight_smile:

Side note: my only timing reference is between the forked-daapd integration media player and the Chromecast integration media player; I don’t have any other TTS-capable devices at home.
The Chromecast one is almost instant.
I would assume forked-daapd needs a bit more time, because it streams to my receiver, and the receiver could still be turned off or set to another channel. But the extra delay seems way out of proportion.

Is there any documentation for all this? I would like to set up a simple TTS to a HomePod, so that I can send alerts when the alarm is pending and needs to be turned off. Glad to hear that all this work made it into official Home Assistant core, but there’s no information on how to use it.

@davidlb @uvjustin can you explain where library:track:25522 is coming from? How do we actually play a file from our library in forked-daapd via Home Assistant?

For anyone else, the way to play media is to look up the URI from the forked-daapd api. For example:

http://192.168.10.10:3689/api/library/albums/8029616960692122489/tracks where the number is the album ID. That will return a list of tracks for that album. Then you can use a track’s URI in a service call like this:

service: media_player.play_media
data:
  entity_id: media_player.forked_daapd_server
  media_content_id: 'library:track:3'
  media_content_type: music

@uvjustin I have the same problem where the volume is immediately set to 80% before the music plays.

Hi guys,

Not quite sure how to use the media player component. The examples I see reference the media_content_id, but I don’t know where that is obtained.

Also, in the example above, there is a reference to the library and the track, but not the album. How does the player know which track to play?

I’d like to queue up a radio stream. Anyone know where the media_content_id can be found for a radio stream? It will play when I click on one of the radio stations I’ve set up under Music > Radio. Can I play that using the media player?

Also, can entire albums be played, for example by using :album: instead of :track:?
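For what it’s worth, the forked-daapd API exposes album URIs in the same library:… form (the uri field on /api/library/albums entries), so an album call would presumably look like the following — an untested sketch, assuming the integration passes the URI straight through, with the album ID taken from the example URL above:

```yaml
service: media_player.play_media
data:
  entity_id: media_player.forked_daapd_server
  media_content_id: 'library:album:8029616960692122489'
  media_content_type: music
```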

I am at a bit of a loss to understand how the media player interacts with forked-daapd. I want to be able to script a track for playback.

From the API I can see that a call to api/library/tracks/62 gives me:

{
  "id": 62,
  "title": "Not Over Yet",
  "title_sort": "Not Over Yet",
  "artist": "Grace",
  "artist_sort": "Grace",
  "album": "Savage Meltdown Vol. 1",
  "album_sort": "Savage Meltdown Vol. 00001",
  "album_id": "6123511396315858213",
  "album_artist": "Gavin Campbell",
  "album_artist_sort": "Gavin Campbell",
  "album_artist_id": "1702076477525592772",
  "genre": "Electronica",
  "year": 1995,
  "track_number": 1,
  "disc_number": 1,
  "length_ms": 696064,
  "rating": 0,
  "play_count": 0,
  "skip_count": 0,
  "time_added": "2021-01-31T08:35:59Z",
  "seek_ms": 0,
  "type": "m4a",
  "samplerate": 44100,
  "bitrate": 275,
  "channels": 2,
  "media_kind": "music",
  "data_kind": "file",
  "path": "/music/itunes music/Savage Meltdown Vol. 1/01 Not Over Yet.m4a",
  "uri": "library:track:62",
  "artwork_url": "/artwork/item/62"
}

I gather the “uri” is used for track selection. Within Developer Tools > Services I call media_player.play_media:

entity_id: media_player.forked_daapd_server
media_content_id: 'library:track:62'
media_content_type: music

I’ve tried this with and without “&” in the media_content_id field.

But the track does not play. Looking at the forked-daapd web UI I see the player goes briefly to “shair” (I assume that’s the shairport-sync pipe?). Then it resumes playing the previous track.

I am able to play playlists via the custom:mini-media-player Lovelace card, so the connection seems to be working.

What am I missing? Is there another way to play a track, say with MPD?

Any help appreciated.

The current version of the integration does not handle content_id and supports only TTS. I created a fork for my own use by removing TTS support and adding content_id handling. If you have HACS, you can add this custom repo: https://github.com/davidlb/ha_forkeddaapd

Great, I’ll check that out thanks.

Can’t it be added to the official integration?

@uvjustin is/was working on supporting both TTS and content_id. I don’t know what the status is.

@uvjustin I definitely think this topic is worth reviving!

I ran into the same issue initially described by @davidlb: calling play_media activates all zones with the volume set to 80%. I realize this is not the original issue posted by @squirtbrnr, but in any case it has been discussed in this thread in detail.

Now, I realize that @davidlb published his “quick and dirty” (his words :slight_smile: ) hack as a fork, for which I am grateful, since it serves to work around the bug :+1:

However! Why isn’t your own branch, which offers a more comprehensive solution (and also browse_media functionality :exploding_head:), incorporated into core??

Personally, I’m not looking for browse-media functionality, since my use of OwnTone is via pipe and stream URIs. So if splitting this into two separate PRs would expedite its incorporation into core, it’s worth a thought. All in all, this would be very useful, and at least all the people involved in this thread would be interested in it!

Thank you for this great component :clap:

Sorry all, due to various reasons I was not able to keep up with this component. I’ve recently taken a look at it again and will try to update it over the coming month.

I’ve pushed a few changes to the following branch: uvjustin/home-assistant at update-forked-daapd (github.com). You can download the files from homeassistant/components/forked_daapd and place them in your local custom_components/forked_daapd folder. The initial changes include support for the announce/enqueue service calls and updated media browser functionality. Please try it out when you get a chance, and feel free to provide feedback so I can further improve the component.
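Assuming the branch wires these up to the standard media_player announce/enqueue service parameters, the calls would look something like this (track ID reused from the earlier example):

```yaml
# Announce: interrupt playback, play this item, then resume the queue
service: media_player.play_media
data:
  entity_id: media_player.forked_daapd_server
  media_content_id: 'library:track:62'
  media_content_type: music
  announce: true
---
# Enqueue instead of interrupting; the standard schema allows
# enqueue: play / next / add / replace
service: media_player.play_media
data:
  entity_id: media_player.forked_daapd_server
  media_content_id: 'library:track:62'
  media_content_type: music
  enqueue: next
```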
