The current yamaha component allows the automatic integration of Yamaha receivers such as the HTR-4065, RX-V473, RX-V573, RX-V673, RX-V773 and some others. The component is limited to devices built before 2013.
So modern Yamaha receivers (all MusicCast devices) are not supported.
The reason is a changed API. The API no longer uses port 80 (it uses 49154 instead) and in addition exposes a different description file at http://<device-ip>:49154/MediaRenderer/desc.xml. A description of the protocol can be found here: https://github.com/neutmute/neutmute.github.io/blob/master/_posts/blog/2016-04-04-Yamaha-Musiccast-Protocol.md
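For anyone who wants to poke at the new API, a quick probe from Python could look like the sketch below. The IP address is only an example, and the getDeviceInfo endpoint name is taken from Yamaha's Extended Control documentation rather than verified on every model:

import requests

HOST = '192.168.0.10'  # example address of a MusicCast device on the LAN

# The UPnP/DLNA description now lives on port 49154 instead of 80.
desc = requests.get('http://{}:49154/MediaRenderer/desc.xml'.format(HOST), timeout=5)
print(desc.status_code, desc.headers.get('content-type'))

# The new REST API ("Yamaha Extended Control") answers on the normal HTTP port.
info = requests.get('http://{}/YamahaExtendedControl/v1/system/getDeviceInfo'.format(HOST), timeout=5)
print(info.json())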
I second this.
For testing I have a YSP-5600, a WX-030 and a WX-010 available.
I can test on an N470D. The results should be valid for the N570D too.
What I did already:
Changing the port in yamaha.py does not resolve the issue.
Pointing it at the MediaRenderer desc.xml does not help either.
I guess the structure has changed.
Someone has already implemented this in Python; I am not sure how helpful it is:
I have started to implement a MusicCast media_player component using the pyamaha library. Right now I am using the yamaha component as a base.
Is there any documentation on how to create a media_player component?
How does the TTS integration work?
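For what it's worth, the rough shape of a media_player platform looks like the sketch below; MusicCastDevice and the host handling are just placeholders, and the exact imports depend on the Home Assistant version. The TTS integration generates an MP3, serves it over HTTP and then calls the entity's play_media service with that URL, so the component mainly needs an API call that makes the device play a URL.

from homeassistant.components.media_player import (
    MediaPlayerDevice, SUPPORT_PLAY_MEDIA, SUPPORT_TURN_OFF,
    SUPPORT_TURN_ON, SUPPORT_VOLUME_SET)
from homeassistant.const import STATE_OFF


def setup_platform(hass, config, add_devices, discovery_info=None):
    # Home Assistant calls this to create the entities for the platform.
    add_devices([MusicCastDevice(config.get('host'))])


class MusicCastDevice(MediaPlayerDevice):
    def __init__(self, host):
        self._host = host
        self._state = STATE_OFF

    @property
    def name(self):
        return 'MusicCast {}'.format(self._host)

    @property
    def state(self):
        return self._state

    @property
    def supported_features(self):
        return (SUPPORT_TURN_ON | SUPPORT_TURN_OFF |
                SUPPORT_VOLUME_SET | SUPPORT_PLAY_MEDIA)

    def play_media(self, media_type, media_id, **kwargs):
        # TTS ends up here with media_id set to the URL of the generated MP3;
        # this is where a "play this URL" call against the device would go.
        pass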
Feel free to test this as a custom component:
https://gist.github.com/runningman84/5464ec4dc39b10efd10828ca7cef25d0
Good work! It seems to work all right!
Zone detection is not working properly yet, though, but I guess you know that and it's too early in development. (It detects both of my zones on an RX-V58 but does not display the proper names and status.)
Great.
Will do some testing in the next week, but thanks so far!
I can take a brief look at the multi-zone stuff, but I do not have a real test environment for it. To see real progress here, somebody with the right equipment should take a look at the code.
I would also like to have the ability to play TTS, but the docs are quite thin here. I would need the HTTP commands that are needed to play via the server (DLNA) protocol. Another problem is that I do not know how to stream TTS via DLNA.
The Home Assistant GUI is also quite strange. I do not see a pause button; only a stop button is displayed. I do not know the difference between pause and play/pause… The play button is also not shown.
As you can see there are a lot of open points here. Is anybody able to help me? I can put the code in a git branch…
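On the missing play/pause buttons: as far as I know the frontend only renders controls for the flags an entity reports in supported_features, so something along these lines should bring them back (constant names as of the media_player component of that era; they may have moved since):

from homeassistant.components.media_player import (
    MediaPlayerDevice, SUPPORT_PAUSE, SUPPORT_PLAY, SUPPORT_STOP)


class ExamplePlayer(MediaPlayerDevice):
    @property
    def supported_features(self):
        # Only the flags listed here become buttons in the frontend;
        # without SUPPORT_PLAY and SUPPORT_PAUSE only stop is offered.
        return SUPPORT_PLAY | SUPPORT_PAUSE | SUPPORT_STOP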
Works like a charm on the WX-030 speaker as well!
Have you had any thoughts on multiroom support? Is it supported in the API?
Multiroom should be possible but I have to wait for this feature:
That’s great! Thanks!
Thanks @runningman84, working a treat.
For multi-room I've hacked together some shell commands for anyone who wants to get it working in the meantime. It has only been tested over the last few days but seems to work fine.
shell_command.yaml
set_musiccast_link: 'bash /home/homeassistant/.homeassistant/set_musiccast_server.sh {{server}} {{client1}} {{client2}}'
set_musiccast_server.sh
#!/bin/bash
# Link one or two MusicCast clients to a server device and start distribution.
# Usage: set_musiccast_server.sh <server-ip> <client1-ip> <client2-ip>
server=$1
client1=$2
client2=$3

# Register the clients with the server's group (the same group_id is used on
# the server and on every client).
curl -X POST \
  http://$server/YamahaExtendedControl/v1/dist/setServerInfo \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -d '{ "group_id":"9A237BF5AB80ED3C7251DFF49825CA42", "zone":"main", "type":"add", "client_list":[ "'$client1'", "'$client2'" ] }'

# Tell each client which group it belongs to.
curl -X POST \
  http://$client1/YamahaExtendedControl/v1/dist/setClientInfo \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -d '{ "group_id":"9A237BF5AB80ED3C7251DFF49825CA42", "zone":[ "main", "zone2" ] }'

curl -X POST \
  http://$client2/YamahaExtendedControl/v1/dist/setClientInfo \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -d '{ "group_id":"9A237BF5AB80ED3C7251DFF49825CA42", "zone":[ "main", "zone2" ] }'

# Start streaming from the server to the two linked clients.
curl -X GET \
  "http://$server/YamahaExtendedControl/v1/dist/startDistribution?num=2" \
  -H 'cache-control: no-cache'
And to call it from an automation, provide the server and one or two clients:
- alias: Enable Bedroom MusicCast Alarm
  trigger:
    platform: state
    entity_id: sensor.bathroom_motion
    to: 'Detected'
  action:
    - service: shell_command.set_musiccast_link
      data_template:
        server: '{{"192.168.0.27"}}'
        client1: '{{"192.168.0.4"}}'
        client2: '{{"192.168.0.5"}}'
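If you want to check whether the link actually came up before the alarm fires, the Extended Control spec also appears to have a read-only dist/getDistributionInfo call (endpoint name taken from the spec, so treat it as an assumption); a quick check against the server from Python:

import requests

# Ask the server (192.168.0.27 from the example above) which clients
# are currently part of the distribution group.
r = requests.get('http://192.168.0.27/YamahaExtendedControl/v1/dist/getDistributionInfo', timeout=5)
print(r.json())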
I started to develop a media player component for Yamaha MusicCast devices. Right now it supports turning devices on and off, mute, volume, play states and input selection… more to come.
You can find my Home Assistant Custom Component here:
and the corresponding Python 3 Library lives here:
If anybody wants to join forces in helping to bring this to life, feel free!
Your code looks similar to my implementation. Maybe you can find out how to play any MP3 file, which would be great for TTS.
Maybe you can find out how to play any MP3 file, which would be great for TTS
How do you imagine this should work?
The MusicCast Android app can play local media files. Maybe a packet capture can show the corresponding API calls.
Interesting. By local media files you mean files hosted on the Android device? I don't own an Android device, so I can't look into this. But if you find something, let me know.
Yes, for Android this works fine. It is also possible to browse and play files from DLNA servers.
Maybe Android also creates a DLNA server for MusicCast; I do not know yet.
Unfortunately my WLAN AP crashes when running tcpdump on it, and my Android phone is not rooted. It is quite difficult to capture the traffic in this case…