You do have to take into account that what you are talking about is ONLY the data, not how this data should be stored, and not how the system/core and front-end have to call it… so you can't compare if you don't take into consideration how SQLite vs. a relational DB works… I fully agree that "a lot" is stored; one can reduce this somewhat by not recording every entity, e.g. sensor.my_awesome_temperature_sensor_on_first_floor, and by not including entities from a device that you know you'll never use, etc. But "measurement" and "unit_of_measurement" are hardly "redundant" in this SQLite DB… and for attributes you don't have much choice, as they are individual "information" from a specific device/sensor. If you call this information "redundant", you have to study the attributes more closely, as there are variable strings, states, etc. etc. in the attributes, hence the TEXT type.
And I just did a "compact DB", which reduced it by a third. I know we are going towards a bigger and bigger "default" installation, with a lot more integrations and therefore a bigger DB. There is documentation in regards to this (the Recorder, etc.) with tips on what to exclude, to reduce your installation and keep it healthy.
A new default "external" DB is hardly the way forward for a product like HA; MariaDB won't make any difference. In the end it's about how the user chooses to use the product… by default it works for many; others HAVE TO follow up on the upgrade notes and, yes, make additional decisions/settings to "roll on".
PS: and Aron was talking about DB size vs. data size… that's simply wrong, and taken out of a very important context: the DB in the application.
I'm afraid you don't really understand how data is stored in databases on a low level.
The backend issues an SQL query over a time range defined by timestamps. That's all there is on the call side. The problem is the way the data is stored (the data layout) and the subsequent processing into natively readable objects.
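To make that concrete, here is a minimal sketch of the kind of time-range query meant here, against a simplified states table (the file name, column names and values are assumptions for illustration, not the exact recorder schema):

```python
import sqlite3

# Hypothetical, simplified layout; the real recorder schema differs.
conn = sqlite3.connect("home-assistant_v2.db")
rows = conn.execute(
    """
    SELECT entity_id, state, last_updated
    FROM states
    WHERE entity_id = ?
      AND last_updated BETWEEN ? AND ?
    ORDER BY last_updated
    """,
    ("sensor.living_room_temperature", "2021-06-01 00:00:00", "2021-06-02 00:00:00"),
).fetchall()
```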
Of course it is. Your unit of measurement for a temperature sensor is not going to change from one sample point to the next. Storing it for every single sample is completely redundant, as are attributes like the friendly name, etc. This is high-level structural data that will never change while acquiring data points. It needs to be stored in a separate table (or not at all; the friendly name, for example, can be retrieved from the HA config instead) and indexed from the states.
Data size is directly correlated to DB size and vice-versa in this context. The size of the management data itself is insignificant compared to the size of the stored data, as long as the redundancies are removed. In SQL, every native type has a fixed size in a cell, except for varchars (which are being much too liberally used by HA).
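As a rough illustration of the separation I am arguing for, a minimal sketch with made-up table and column names (not a proposal of the actual schema, just the shape of it):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    -- Static, per-entity metadata stored exactly once.
    CREATE TABLE entity_meta (
        meta_id             INTEGER PRIMARY KEY,
        entity_id           TEXT UNIQUE NOT NULL,   -- e.g. 'sensor.kitchen_temperature'
        unit_of_measurement TEXT,
        friendly_name       TEXT
    );

    -- Dynamic samples: only a foreign key, a value and a timestamp per row.
    CREATE TABLE samples (
        meta_id      INTEGER NOT NULL REFERENCES entity_meta(meta_id),
        state        REAL NOT NULL,
        last_updated REAL NOT NULL                  -- Unix timestamp
    );
    CREATE INDEX idx_samples_meta_time ON samples (meta_id, last_updated);
    """
)
```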
I just checked my entities (potential event and state "spammers").
About 300 (if I use a "naming strategy" that reduces the name and entity name by, let's say, 15 letters each): 300 × 30 = 9,000 bytes every 10 seconds (54,000 bytes a minute). Here we are talking "redundant".
There I agree, and don't understand it, as it's already registered in the config files, as is the device name… though then, if it's not stored in the event, the underlying process has to read from these config files… bump, +30% faster to read from a DB blob… so we are back to "maintenance" of the DB, if we want a fast-responding UI and core.
I see 3 varchars in the example: state (can't be different, unless TEXT), entity_id (you want an INT here?), domain (what was it now, 3?)… yes, I don't understand what you want it to be instead.
This has nothing to do with "how the DB works". This is about the recorder component choosing how to store data in the DB by splitting it up into appropriate tables and interlinking them in such a way that static data is stored only once, while dynamic data is stored in a highly optimized cell layout linking back to the immutable data.
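Continuing the illustrative tables sketched above, the write path would then only ever append to the dynamic table; the static row is written once and merely referenced afterwards (again, the names here are assumptions for the sketch):

```python
import time

def record_sample(conn, entity_id: str, value: float) -> None:
    """Append one sample, creating the static metadata row only if it is new."""
    row = conn.execute(
        "SELECT meta_id FROM entity_meta WHERE entity_id = ?", (entity_id,)
    ).fetchone()
    meta_id = row[0] if row else conn.execute(
        "INSERT INTO entity_meta (entity_id) VALUES (?)", (entity_id,)
    ).lastrowid

    # Per-sample row: just a reference, a value and a timestamp.
    conn.execute(
        "INSERT INTO samples (meta_id, state, last_updated) VALUES (?, ?, ?)",
        (meta_id, value, time.time()),
    )
```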
I know the configuration/setup of the DB didn't come out of the blue, and I also hope that someone, some day, will offer their time and expertise to develop the DB for today's circumstances/demands (maybe even to MY satisfaction/needs and preferences; I have lots of ideas about how I want it). But I'm not in a position to complain about it, and I don't really find it appropriate, as I have "options", AS does everyone else with just a little computer knowledge… it's just a click away, and scrolling through a few docs, or a dozen.
PS (edit): the above would be 77,760,000 bytes a day (redundant_loooong_names)… start thinking positively; there's a lot one can do besides complaining.
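As a quick check of that arithmetic, assuming roughly 30 bytes of repeated name text per entity and one write every 10 seconds:

```python
entities = 300          # recorded entities
bytes_per_row = 30      # assumed redundant name text per state row
interval_s = 10         # one state write per entity every 10 seconds

per_cycle = entities * bytes_per_row            # 9,000 bytes
per_minute = per_cycle * (60 // interval_s)     # 54,000 bytes
per_day = per_minute * 60 * 24                  # 77,760,000 bytes (~74 MiB)
print(per_cycle, per_minute, per_day)
```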
I still would like to "see" that query in "action"… e.g. for a Switch 1 Basement with 4 "attributes" (Total Power Switch 1 Basement (ad-ele), Current Switch 1 Basement, Voltage Switch 1 Basement, Power Switch 1 Basement)… and your "proposed" database model, table setup, relations, etc., and why not an estimated comparison with the current model, performance-wise (i.e. total opens/closes, reads from config files, etc.)… but of course also your use case for the average user in terms of administration etc. on a "default MariaDB", if that's your thought… or "just a new/rebuilt" SQLite. I'm sure the default image will increase a little (but we are there already, with all the new feature requests/integrations/product support etc. implemented lately), which I assume you have also taken into consideration in your judgement of the current DB's expansion (besides the new entities and automations you added during the same time, the fact that the update notes and "heads-up" said the Recorder now records ALL events and state changes unless told otherwise, and, to be honest, why not also those "extra" entries you create in the DB, regardless of DB model, when you lower your sensor to "spew" every second instead of every 3 or 5).
PS: I do know what a table looks like; I have experience with Oracle and MySQL, and now also a little with how Grafana/Influx wants to be asked nicely (haven't managed to create an "unfortunate loop")… yet.
I think in my absence everything I was going to reply to was already said. I only get time to visit here occasionally and mainly on weekends.
Just a few points I want to make, really…
SQLite is a perfectly capable relational database engine, so everything said here regarding changing the database structure to proper relational tables to avoid storing redundant data applies to it as well.
Relational databases are by their very nature less flexible. That is a large part of the point, actually. Storing data within that rigid structure means storing it efficiently, predictably, and in formats the database engine is optimized to work with.
The HA database is nothing special. It really isn't. It would be trivial to design a relational schema to efficiently hold the data in, say, 15 minutes. Probably most people reading this thread with some database experience have already done so in their head. (I have.)
Relational database engines are optimized to perform best when data respects native data types and is broken down into its elements. It is very costly for a database engine to parse a TEXT field of JSON to extract a value (and yes, I'm aware that JSON types now exist in many DB engines like MariaDB; however, this is not universal). It is trivial for the DB engine to simply link together a few tables via JOINs, especially if they are indexed properly.
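To make the contrast concrete against the illustrative tables sketched earlier in the thread (whose names were assumptions): an indexed JOIN the engine can resolve cheaply, versus pulling the same value out of a JSON TEXT blob:

```python
t_start, t_end = 1_622_505_600.0, 1_622_592_000.0   # arbitrary Unix-timestamp window

# Indexed JOIN over the illustrative tables: the engine resolves it from indexes.
rows = conn.execute(
    """
    SELECT m.entity_id, m.unit_of_measurement, s.state, s.last_updated
    FROM samples AS s
    JOIN entity_meta AS m ON m.meta_id = s.meta_id
    WHERE m.entity_id = ?
      AND s.last_updated BETWEEN ? AND ?
    ORDER BY s.last_updated
    """,
    ("sensor.kitchen_temperature", t_start, t_end),
).fetchall()

# Contrast: extracting the same value out of a JSON TEXT column forces the engine
# to read and parse every candidate blob first (json_extract needs SQLite's JSON1
# extension, which is not available in every build):
#   SELECT json_extract(attributes, '$.unit_of_measurement') FROM states WHERE ...
```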
If anyone is curious about how data can be stored across multiple tables and then combined in a query via relations, here's a great place to start:
There are actually a lot of decent tutorials out there (I just Googled) to get you working with a relational database and make the concepts much clearer. If you have Microsoft Access, Google the famous "Northwind sample database" or install it directly from the Access templates. It's a relational database designed as an example and used in many tutorials.
As mentioned, there is a big difference between "reading about" and understanding… but if you "already" figured it out, I look forward to seeing your proposed model.
To be honest, you might be waiting a long time. I haven't created a database schema anywhere except in my head and don't really see the need to put it down in this thread. Primarily because (I hope this does not come across as hostile; I am just lacking a better way to say it) I have nothing to prove. In a way, this is database design 101, and there is no need to prove the assertion that a relational database which eliminates the redundant data and stores it in native data types instead of just VARCHAR and TEXT would be orders of magnitude more efficient. It just is.
Considering how long the database has been an issue, and that database corruption, performance, and size threads have been a constant in this forum since I began with HA (sometime around version 0.40), fixing the database is clearly a very low priority in development.
I understand, you have really thought it through. Maybe you should have looked at the various "attributes" and tried to understand what those really are before you started "thinking out" a solution, and taken into consideration the impact on performance and the additional DB usage (CPU, RAM) your "solution" would lead to.
I sincerely doubt a "dictionary" in another table will solve anything; it will just make it worse, and you will break a lot of functionality… AND before you comment any further, please go through your "attributes", as I recommended before; my keyword then was "variables"… (or maybe I should say "entities"… with state = Boolean, strings (bigger than varchar) as well as INT, with multiple integration-specific values, "unit_of_measurement", names, etc. etc.)… so every time a sensor updates states/events (now a single "blob") (open/close), this "dictionary" table (with rows corresponding to 5-10 times the number of entities, or even more, if you count device_tracker etc.) would have to be updated in all its related attribute rows and corresponding columns (which might have to include a few TEXT columns as well).
To be honest, I am "astonished" by your "solution".
PS: Some people have problems with their DB, others not, for various reasons
With recent updates, HA has gone to a relational model for attributes and other parts of the database as was described in this thread by myself and others. The performance improvements and data size reduction are impressive.
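For readers curious what such a relational split looks like in principle, here is a minimal, simplified sketch of the idea: identical attribute sets are stored once, keyed by a hash, and each state row only references them. The table and column names below are assumptions for illustration, not the exact recorder schema.

```python
import hashlib
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE state_attributes (
        attributes_id INTEGER PRIMARY KEY,
        hash          INTEGER NOT NULL,
        shared_attrs  TEXT NOT NULL          -- JSON, stored once per distinct set
    );
    CREATE INDEX idx_attr_hash ON state_attributes (hash);

    CREATE TABLE states (
        state_id      INTEGER PRIMARY KEY,
        entity_id     TEXT NOT NULL,
        state         TEXT NOT NULL,
        attributes_id INTEGER REFERENCES state_attributes(attributes_id),
        last_updated  REAL NOT NULL
    );
    """
)

def attributes_id_for(conn, attrs: dict) -> int:
    """Return the id of an existing identical attribute set, or insert it once."""
    shared = json.dumps(attrs, sort_keys=True)
    h = int.from_bytes(hashlib.sha1(shared.encode()).digest()[:4], "big")
    for attributes_id, candidate in conn.execute(
        "SELECT attributes_id, shared_attrs FROM state_attributes WHERE hash = ?", (h,)
    ):
        if candidate == shared:
            return attributes_id
    cur = conn.execute(
        "INSERT INTO state_attributes (hash, shared_attrs) VALUES (?, ?)", (h, shared)
    )
    return cur.lastrowid
```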
For me, backups are still useless. My HA instance is huge; the database file is 6 GB, and there is no way to back it up and restore it the convenient way. What a bummer. I will take the long way by manually replacing the DB files and hope for a reliable backup process soon.