I had the H/W codec in mind; I would have thought the Pi would be enabled for H/W playback, since the hardware is there. I guess it's because Hass.io is some kind of VM. Still, my Pentium quad in the NAS should be more than man enough (it can transcode 1080p), but guessing not. Thanks for confirming my thoughts.
How do I hack the RTSP script file to output video at 640x360 @ 15 FPS?
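If the firmware's RTSP start script just wraps v4l2rtspserver (as the Dafang/OpenIPC hacks do), the resolution and frame rate are plain command-line flags. The script path and port below are assumptions that vary by firmware build, so this is only a sketch of the line to edit:

```shell
# Assumed location: the rtspserver control script on the SD card,
# e.g. /system/sdcard/controlscripts/rtspserver (varies by firmware).
# -W width, -H height, -F frames per second, -P RTSP port
v4l2rtspserver -W 640 -H 360 -F 15 -P 8554 &
```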
Hi guys, I need help with my MQTT configuration.
I get this message from HA in the info log: "Invalid config for [camera.mqtt]: required key not provided @ data['topic']. Got None. (See ?, line ?). Please check the docs at https://home-assistant.io/components/camera.mqtt/"
Attaching my mqtt.conf and configuration.yaml files.
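That error means the camera.mqtt platform is missing its required `topic` key. A minimal sketch of what the configuration.yaml entry needs; the name and topic string here are assumptions, so substitute whatever topic your camera actually publishes JPEG snapshots to:

```yaml
camera:
  - platform: mqtt
    name: dafang_1                        # assumption: pick your own name
    topic: myhome/dafang/motion/snapshot  # assumption: your camera's snapshot topic
```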
Hoping someone can help me out with a problem I am experiencing. I have a Wyze V2 camera that has been successfully updated with V0.26 of the OpenIPC firmware. I have been able to successfully integrate the camera into Home Assistant with MQTT and can display a video feed in VLC using rtsp://192.168.1.219:8554/unicast. I also get snapshots displayed every 10 seconds or so when I add the camera to one of my groups in Home Assistant. The problem is that I cannot create a snapshot from a script or automation - I always end up with a zero-length file and the following error information in the log file:
2018-05-22 16:25:41 ERROR (MainThread) [homeassistant.core] Error executing service <ServiceCall camera.snapshot: entity_id=['camera.dafang_1_motion_snapshot'], filename=<homeassistant.helpers.template.Template object at 0x6fb5ea30>>
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/homeassistant/core.py", line 1002, in _event_to_service_call
    await service_handler.func(service_call)
  File "/usr/lib/python3.6/site-packages/homeassistant/components/camera/__init__.py", line 202, in async_handle_snapshot_service
    _write_image, snapshot_file, image)
  File "/usr/lib/python3.6/concurrent/futures/thread.py", line 56, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/lib/python3.6/site-packages/homeassistant/components/camera/__init__.py", line 198, in _write_image
    img_file.write(image_data)
TypeError: a bytes-like object is required, not 'NoneType'
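The TypeError at the bottom just means the camera component handed `_write_image` a `None` instead of JPEG bytes, i.e. the snapshot fetch from the camera returned nothing for that service call. The target file is opened before the write, which is why a zero-length file gets left behind. A minimal illustration (the file path is hypothetical):

```python
# The snapshot service opens the target file first, then writes the
# image bytes; if the camera returned None, the open still succeeds
# (creating a zero-length file) and only the write blows up.
image_data = None  # what the camera returned instead of JPEG bytes

error_message = ""
try:
    with open("/tmp/snapshot.jpg", "wb") as img_file:
        img_file.write(image_data)  # same call as in _write_image
except TypeError as err:
    error_message = str(err)

print(error_message)  # a bytes-like object is required, not 'NoneType'
```

So the zero-length file is a symptom: the real problem is upstream, in the camera never delivering an image for that request.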
Did you start mqtt-status BEFORE starting mqtt-control? I had the same issue, but after looking at the PIDs in the example webpage screenshots I realized that mqtt-status should be started first.
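On the camera itself that ordering can be done by hand over SSH; the script locations below assume a Dafang-hacks style layout on the SD card and may differ on your build, so treat this as a sketch:

```shell
# Assumption: Dafang-hacks style control scripts on the SD card
/system/sdcard/controlscripts/mqtt-status start
sleep 2   # let mqtt-status connect to the broker first
/system/sdcard/controlscripts/mqtt-control start
```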
For the last few days I have been trying to send the currentpic.cgi picture with Telegram, without success so far.
Has anyone managed to save that picture to the Hass.io SD card, or to send it directly with Telegram, from https://[IP-Address]/cgi-bin/currentpic.cgi?
At the moment I take a snapshot from the video stream, save it to the Hass.io SD card and send it with Telegram, but the picture quality is not very high.
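For the full-quality CGI picture, Home Assistant's `telegram_bot.send_photo` service can fetch a URL directly, including with HTTP auth, so the low-quality stream snapshot can be skipped entirely. The script name, credentials, and secret key below are assumptions; a sketch:

```yaml
# scripts.yaml (or under script: in configuration.yaml)
send_dafang_pic:
  sequence:
    - service: telegram_bot.send_photo
      data:
        url: https://[IP-Address]/cgi-bin/currentpic.cgi  # your camera's address
        username: root                    # assumption: the camera's HTTP auth user
        password: !secret dafang_http_password
```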
This is my setup at the moment:
I do have the same issue as some others here.
Settings, like turning the stream off, are not going through to the cam via MQTT.
I use the discovery function etc.
Also, if I change something on the cam side I can see the state change after those 30 seconds, but it does not work the other way around.
Hass.io (Docker version) with the MQTT add-on installed.
It looks like it is only working one way, but I would love to get it working in both directions.
Unfortunately I am not that familiar with MQTT, so I am not sure if the command or status change to turn the stream off ever reaches the cam.
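One way to see whether the "off" command ever reaches the cam is to watch the broker with the Mosquitto command-line clients from another machine. The broker address, credentials, and topic prefix below are assumptions (the real prefix is whatever TOPIC is set to in the camera's MQTT config):

```shell
# watch everything the camera and HA publish under the prefix
mosquitto_sub -h 192.168.1.2 -u mqttuser -P mqttpass -v -t 'myhome/dafang/#'

# in another terminal, manually publish the command HA would send
mosquitto_pub -h 192.168.1.2 -u mqttuser -P mqttpass -t 'myhome/dafang/rtsp/set' -m 'OFF'
```

If the command shows up in mosquitto_sub but the cam never reacts, the problem is on the camera side; if it never shows up at all, it is the HA configuration.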
It is important to start mqtt-status BEFORE starting mqtt-control, otherwise it will not work.
I have both set to automatic startup, then do a restart and it should work again.
Are the sensors there?
I mean, you still have to add them to your view (if you are not using the default view anymore), otherwise the sensors exist but do not show up on the GUI.
OK, I got the sensors in HASS (the LED and RTSP ones) but they all show unknown status. MQTT is working in HASS (I can see it in the developer options) and the MQTT data in the camera config file has the same username/password etc. Also, mqtt-control shows 'NOK'.