Ooh nice!!! I'm now running it on its own RPi.
What is the latest version of DSMR Reader you use in your add-on?
The latest, DSMR-reader v4.11. It’s using the https://github.com/xirixiz/dsmr-reader-docker images.
Nice… Maybe I will switch… let me think about it.
I'm now using the official installation from DSMR Reader itself.
Hi there,
Can you assist with doing a full restore from a DSMR Reader backup file?
The file is a .gz, but when you extract it you get a .sql file.
I tried pgAdmin4, but I've never used that tool…
To restore a backup you can follow the original instructions here: https://dsmr-reader.readthedocs.io/nl/v4/faq.html#how-do-i-restore-a-backup.
If you are using the TimescaleDB add-on, you could use your own machine to restore. You can Google how to do this: https://www.google.com/search?q=Restore+a+database+with+psql&oq=Restore+a+database+with+psql&aqs=chrome..69i57j0i19i22i30l3.663j0j4&sourceid=chrome&ie=UTF-8
To connect to the TimescaleDB from your machine you need to expose the port number in the TimescaleDB add-on. Correct me if I'm wrong, @Expaso?
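For example, a rough sketch of restoring from your own machine over the network (the filename, hostname, port, user and database below are placeholders, not add-on defaults; use whatever you exposed and configured in the TimescaleDB add-on):
# Unpack the backup first (filename is just an example)
gunzip dsmrreader-backup.sql.gz
# Restore over the network; <ha-host>, <exposed-port>, <db-user> and <db-name> are placeholders
psql -h <ha-host> -p <exposed-port> -U <db-user> -d <db-name> < dsmrreader-backup.sql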
Edit: Nice, it seems it can be done directly on Home Assistant as well: DSMR Reader Add-on for Home Assistant
What I did was follow these steps:
- Stop the DSMR add-on service
- Open pgAdmin4
- Create a database (for me: dsmr)
- Go to the terminal
- Go to /usr/share/hassio/share/
- Download a backup from the old DSMR (using the FTP add-on to copy the file to your HA)
- Unzip the .gz file
- docker exec -it addon_77b2833f_timescaledb bash
- su - postgres
- psql -d dsmr < /share/test/dsmrreader-postgresql-backup-Tuesday.sql
- Start the DSMR add-on
I have my whole history back now in the new DSMR
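For reference, the same restore as one shell session (just a sketch; the container name, database name and backup path are the ones from my steps above and may differ on your system):
# On the HA host, with the DSMR add-on stopped
cd /usr/share/hassio/share/
gunzip dsmrreader-postgresql-backup-Tuesday.sql.gz
# Enter the TimescaleDB add-on container (/share inside it maps to /usr/share/hassio/share on the host)
docker exec -it addon_77b2833f_timescaledb bash
# Inside the container: switch to the postgres user and restore into the dsmr database
su - postgres
psql -d dsmr < /share/test/dsmrreader-postgresql-backup-Tuesday.sql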
(Today I will receive my RPi 4 8GB; I will switch from 2GB to 8GB.)
With the many add-ons the 2GB is getting full… haha
In the Supervisor log I see this error:
21-02-11 19:22:09 WARNING (MainThread) [supervisor.addons.validate] Add-on config 'auto_uart' is deprecated, use 'uart'. Please report this to the maintainer of DSMR Datalogger
21-02-11 19:22:10 WARNING (MainThread) [supervisor.addons.validate] Add-on config 'auto_uart' is deprecated, use 'uart'. Please report this to the maintainer of DSMR Datalogger
Can I ignore this?
Correct, those messages are about the DSMR Datalogger add-on I created a while ago (see my other thread). It's an easy fix, but I don't know if it impacts users with older HA installations, so I'm waiting a bit with the update. For the DSMR Reader add-on it's already implemented.
Big thank you for this integration!
I have DSMR Reader installed, not the datalogger
Yes, but the messages are coming from the DSMR Datalogger add-on. I released 1.0.2 to fix this, so you shouldn't see them anymore.
Is there an option to reconnect DSMR to InfluxDB after it has been offline?
Every night I back up my HA system and stop the InfluxDB add-on to prevent a corrupt database.
After the backup the add-on starts again, but DSMR does not reconnect.
So every night at 04:00 the data is not pushed into InfluxDB.
What is the question here?
Do you see any error messages about why it's not picking up the InfluxDB connection again?
I don’t see any errors…
I also restarted DSMR, but it still doesn't push info to InfluxDB.
(I'm at work, so I can't log into the system right now.)
I have to check later whether InfluxDB is still enabled in DSMR… (It was yesterday.)
Info from HA is pushed correctly into InfluxDB.
Interesting… I used WireGuard to connect to my home.
I went into the DSMR config: the whole InfluxDB integration was not enabled.
It was yesterday, so something triggered it to be disabled again…
Let me see what happens tomorrow when InfluxDB stops for the backup and restarts afterwards.
Maybe the integration inside DSMR will then be disabled again.
Hi there,
Backup of InfluxDB and the integration in DSMR:
Every night at 04:00 I back up my HA system. The InfluxDB add-on is stopped (to prevent a corrupt database) before it is backed up, and later it starts again. DSMR automatically disables the InfluxDB integration: I checked and it was enabled before the backup this night, and now it's disabled again.
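Roughly what the backup job does every night (a sketch; the InfluxDB add-on slug below is just an example, check the real one in the Supervisor):
# Stop InfluxDB before the backup
ha addons stop a0d7b954_influxdb
# ... create the backup here (Supervisor snapshot / existing backup job) ...
# Start InfluxDB again afterwards
ha addons start a0d7b954_influxdb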
Pushing old information into InfluxDB does not work:
When I try this I get an error:
./manage.py dsmr_influxdb_export_all_readings --to-influx-database dsmrreader-export
./manage.py dsmr_influxdb_export_all_readings --to-influx-database dsmrreader-export --max-batches 100
bash-5.0# ./manage.py dsmr_influxdb_export_all_readings --to-influx-database dsmrreader-export
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 219, in ensure_connection
self.connect()
File "/usr/local/lib/python3.9/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 200, in connect
self.connection = self.get_new_connection(conn_params)
File "/usr/local/lib/python3.9/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/postgresql/base.py", line 187, in get_new_connection
connection = Database.connect(**conn_params)
File "/usr/local/lib/python3.9/site-packages/psycopg2/__init__.py", line 127, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not translate host name "dsmrdb" to address: Name does not resolve
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/dsmr/./manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 401, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 395, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 330, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 371, in execute
output = self.handle(*args, **options)
File "/dsmr/dsmr_influxdb/management/commands/dsmr_influxdb_export_all_readings.py", line 48, in handle
influxdb_settings = InfluxdbIntegrationSettings.get_solo()
File "/usr/local/lib/python3.9/site-packages/solo/models.py", line 55, in get_solo
obj, created = cls.objects.get_or_create(pk=cls.singleton_instance_id)
File "/usr/local/lib/python3.9/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/models/query.py", line 573, in get_or_create
return self.get(**kwargs), False
File "/usr/local/lib/python3.9/site-packages/django/db/models/query.py", line 425, in get
num = len(clone)
File "/usr/local/lib/python3.9/site-packages/django/db/models/query.py", line 269, in __len__
self._fetch_all()
File "/usr/local/lib/python3.9/site-packages/django/db/models/query.py", line 1308, in _fetch_all
self._result_cache = list(self._iterable_class(self))
File "/usr/local/lib/python3.9/site-packages/django/db/models/query.py", line 53, in __iter__
results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size)
File "/usr/local/lib/python3.9/site-packages/django/db/models/sql/compiler.py", line 1154, in execute_sql
cursor = self.connection.cursor()
File "/usr/local/lib/python3.9/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 259, in cursor
return self._cursor()
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 235, in _cursor
self.ensure_connection()
File "/usr/local/lib/python3.9/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 219, in ensure_connection
self.connect()
File "/usr/local/lib/python3.9/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 219, in ensure_connection
self.connect()
File "/usr/local/lib/python3.9/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 200, in connect
self.connection = self.get_new_connection(conn_params)
File "/usr/local/lib/python3.9/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/postgresql/base.py", line 187, in get_new_connection
connection = Database.connect(**conn_params)
File "/usr/local/lib/python3.9/site-packages/psycopg2/__init__.py", line 127, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: could not translate host name "dsmrdb" to address: Name does not resolve
In the add-on config I see this:
DJANGO_DATABASE_HOST: 77b2833f-timescaled
When I check the container info in Portainer I see this:
DJANGO_DATABASE_HOST: dsmrdb
The correct host should be 77b2833f-timescaled.
How can I solve this?
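One thing I could try as a workaround (just a sketch, assuming manage.py reads the database host from the DJANGO_DATABASE_HOST environment variable, as the add-on config above suggests) is overriding the host for a single run:
# "77b2833f-timescaled" is the value shown in my add-on config above
DJANGO_DATABASE_HOST=77b2833f-timescaled ./manage.py dsmr_influxdb_export_all_readings --to-influx-database dsmrreader-export --max-batches 100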
I'm trying to understand what you're saying. What does DJANGO_DATABASE_HOST have to do with the InfluxDB problem? It's unrelated, right?
I will test the Influx integration this week: configuring it, restarting the add-on and checking if it still works.
Let's see if you can run these commands:
./manage.py dsmr_influxdb_export_all_readings --to-influx-database dsmrreader-export
./manage.py dsmr_influxdb_export_all_readings --to-influx-database dsmrreader-export --max-batches 100
Released:
2021-02-17: 0.0.4 Extended configuration options & Added arm32v6