I updated the master branch today! I kept the postcard code in a preview_image service for now, not in the play_media service. Unless this breaks behavior expected from a media player, it just felt safer not to mix play and preview.
On a related note, I'm looking forward to this being in the next HA release:
It’ll let the Google Assistant media players have more functionality than on/off or modes. With that built in, we’d actually have more features with Google than Meural’s Alexa implementation.
I’ve made the repo HACS compatible, so if anyone reading this before did not want to try installing it manually: you can now add it as a custom repository in HACS. If this works well, I’ll see about getting it added as a default repo in HACS soon!
What about splitting it off is safer? It seems strange to make this media player's behavior inconsistent with other media players like the Chromecast, and I don't see what problems adding it to play_media would cause.
Consider how users would have to use it: if I want to send an image from my doorbell to my TV via a Chromecast and to my Meural, I would have to use two different service calls in the automation.
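To make that concrete, with play_media support a single action could target both players at once. A minimal sketch (the entity IDs and snapshot URL are made up for illustration):

```yaml
# Hypothetical automation action: one play_media call sends the same
# doorbell snapshot to both a Chromecast and the Meural Canvas.
service: media_player.play_media
target:
  entity_id:
    - media_player.living_room_tv
    - media_player.meural_canvas
data:
  media_content_type: image/jpeg
  media_content_id: http://192.168.1.10/doorbell/snapshot.jpg
```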
True, I had not considered the scenario where you’d want to automate multiple media players for the same image. Makes sense that this would be the most common use-case in automation. I’ll revert it back to your code.
I really do think that previewing as my Canvas does it (overrides the display temporarily) and playing as I'd expect a media player to normally do (plays an item for its full expected duration unless explicitly stopped by the user) are two different concepts. But then again, maybe not different enough to justify making the integration unnecessarily complicated.
No, I agree with you; that is why I thought it would be better off in a separate service. There is no equivalent of Meural's previewing/postcard behavior in the media_player domain, which is why I thought keeping it outside that domain, as a meural.preview_image service, would be the cleaner solution.
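For reference, a call to that kind of dedicated service might look like this. The parameter names here are assumptions for illustration, not a confirmed schema:

```yaml
# Hypothetical call to a dedicated preview service; the field names
# (content_url, content_type) and the entity ID are assumed.
service: meural.preview_image
data:
  entity_id: media_player.meural_canvas
  content_url: http://192.168.1.10/doorbell/snapshot.jpg
  content_type: image/jpeg
```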
So it turns out HACS/GitHub don't like it when you link to the repo with a trailing slash. I must have tested without one, and then made a mistake copying the link in while writing the README.
Adding the custom repo with just this should be fine:
And ha-meural has just been added as a default repository in HACS, so it's no longer necessary to go the custom repository route! That should make things a lot easier.
Release 0.111 of HA supports exposing more media player functions to Google Assistant - Nabu Casa now lists OnOff, Modes, TransportControl, MediaState for the meural entity - so you can use more voice commands. Apart from switching playlists and turning the device on and off, you can now also use Google to pause/play and go to previous/next image on the Meural Canvas.
I’ve updated HA-meural to v0.1.3. This fixes a bug that kept the integration from launching if you used the local SD card galleries at startup. The integration would attempt to retrieve item information from the Meural server, but Meural has no item information for images on the SD card, and the unexpected exception thrown here would keep the integration from starting up.
Update available through HACS or manually through github!
I am contemplating one of these frames, but I'm waiting on a web address feature or integration so I can show something like DAKboard with a swipe motion. Great work, though!
Well, theoretically, if you had a way of automatically generating a PNG file with your calendar on it, there’s no reason why you couldn’t create an automation that regularly sends the new calendar PNG to the Meural. But I don’t think Meural will add support for such functions themselves, they’re really focused on the art frame functionality.
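As a rough sketch of what such an automation could look like (the render server URL, the refresh interval, and the entity ID are all assumptions):

```yaml
# Hypothetical automation: assumes a server at 192.168.1.20 that
# re-renders calendar.png, and the entity ID media_player.meural_canvas.
automation:
  - alias: "Refresh calendar on Meural"
    trigger:
      - platform: time_pattern
        minutes: "/30"
    action:
      - service: media_player.play_media
        target:
          entity_id: media_player.meural_canvas
        data:
          media_content_type: image/png
          media_content_id: http://192.168.1.20/calendar.png
```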
Oh, I am chatting to the UK head of press about this feature at the moment, trying to get in contact with one of the Meural dev team to implement a gesture that can show a screen. Even if only DAKboard were allowed, it would be an amazing function to add to the frame.
This is cool. One thing I would like very much is audio to accompany the displayed content, much like what you get on a museum audio-guided tour. It looks like the '/remote/get_galleries_json/' and '/remote/get_gallery_status_json/' API endpoints would get you artist and title information, which could then be used to search for descriptions on the internet (e.g. Wikipedia) and converted to audio via text-to-speech.
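One way to sketch that without custom code would be a REST sensor polling the Canvas plus a TTS automation. Everything here is an assumption for illustration: the Canvas's local IP, the JSON field names ("title", "artistName"), and the speaker entity:

```yaml
# Hypothetical sketch: a REST sensor polls the Canvas's local status
# endpoint, and an automation speaks a description whenever the
# displayed artwork changes.
sensor:
  - platform: rest
    name: meural_current_item
    resource: http://192.168.1.50/remote/get_gallery_status_json/
    value_template: "{{ value_json.title }}"   # field name assumed
    json_attributes:
      - artistName                             # field name assumed

automation:
  - alias: "Narrate current artwork"
    trigger:
      - platform: state
        entity_id: sensor.meural_current_item
    action:
      - service: tts.google_translate_say
        data:
          entity_id: media_player.living_room_speaker
          message: >
            You are looking at {{ states('sensor.meural_current_item') }}
            by {{ state_attr('sensor.meural_current_item', 'artistName') }}.
```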