I’ve been experimenting a bit with less/more RAM for my HAOS VM.
When I give it 1 or 1.5 GB, RAM usage on startup is around 70-80%. When I give it 4 GB, it initially uses 3 GB, but the next day I noticed it was using 3.5+ GB. Most of this is cache; the actual usage is around 700 MB.
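For reference, this is roughly how I check (on a Linux guest) how much of the "used" RAM is really just reclaimable cache; a minimal sketch, assuming only standard Linux tooling, which HAOS exposes like any other Linux system:

```shell
# The "available" column already subtracts page cache the kernel can
# drop on demand, so it gauges real memory pressure better than "free".
free -m

# Or read the kernel counters directly:
grep -E '^(MemTotal|MemAvailable|Cached):' /proc/meminfo
```

If "available" stays high even though "used" looks full, the RAM is mostly cache and nothing is actually starved.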
I assumed I would get a faster/snappier experience by giving it the extra RAM. But I don’t notice much difference between 1GB and 4GB performance. Opening graphs and stuff takes about the same amount of time.
Is there a benefit to giving HAOS more than 2 GB of RAM? In my case, even 1.5 GB seems to do the trick just fine. Only on 1 GB does it all crash to hell when I start compiling with the ESPHome addon. With 1.5 GB, no problems, and usage is stable around 60-70% (plus cache).
I’m running stock HAOS + ESPHome + MariaDB + InfluxDB + Grafana. Around 10 (hardware) devices in total.
I would use the recommended 2 GB. As you say, in your configuration usage is around 700 MB, but calling the DB, executing automations, etc. takes a little as well, especially if you use Grafana (depending on the number of graphs). And since you are seeing crashes when trying 1 GB (for whatever reason you are trying that), I don’t think it’s a good idea to hope that 1.5 GB will keep you from hitting the ceiling.

Edit: Keep in mind that no system benefits from crashing, so why even look for this limit rather than just using sufficient RAM? When your system (monitored on its host) gets close to 50-70% usage while idling, and your integrations and entities get used in performance/resource-hungry tasks, consider raising the RAM to whatever is needed.

PS: If you have problems viewing too many graphs, or views filled to the max with entities, the problem is at the device where you view this in your browser. Any system/flow is only as fast as its slowest point. In a water system it’s the valves or the smallest pipe; in a network it’s the slowest link, whether that’s on the server side (application/ports/bandwidth), internal or external, then your Wi-Fi if it’s part of the chain (with its interference and limitations), and finally the device/monitor you view it on, through your browser.

Edit: I forgot protocols in the equation above, and there are a few other details I see no reason to go into, as they’re not really relevant to RAM usage unless they’re on the device you use to view things in your browser.
I’ve been experimenting because I’ve been running HAOS off a 1 GB RPi3 without problems for months. The only problem that would occur is that it would sometimes crash while compiling ESPHome code. But only then.
I have InfluxDB and Grafana, but don’t use them yet. It will probably need more RAM once I start having graphs and stuff. But for now it seems very happy with 1.5 GB, idling at 55-60% usage. I’ll probably give it 2 GB just in case anyway.
But I was mainly wondering why 1 GB and 4 GB (with 3+ GB cached) don’t seem to have a performance difference. Could it be that I will notice a difference from the 3 GB of cached data once I start using Grafana more?
I doubt it. To reach 3 GB of cached data you need quite a lot going on in your system. And again:

Read above for why you most likely won’t notice any difference. Connect a monitor to your RPi3 and you’ll notice a difference. BTW, I know nothing about the RPi3 in regards to e.g. CPU, internal buses, etc., though I do know that an SSD is considerably faster than an SD card, like USB 3 is faster than USB 2. I know people who buy a huge SSD for backups, databases or media files and connect it to USB 2; let’s just say they get disappointed. The same goes if there is a device somewhere in the path between the RPi and the endpoint with a 100 Mb interface (or a 1 Gb interface that is not configured correctly), or even worse, an old phone on Wi-Fi.
I was a bit unclear. I meant the 1 GB vs 4 GB difference on a VM (on an HP T630 “thin client”). The RPi3 was decommissioned some time ago, HA-wise.
All of the 4 GB is filled up with cache. After startup, about 3 GB is used, and after letting it run for a day or two, it filled it all up with cache to around 3.5-3.6 GB.
With the 1 GB config, it has only 200-300 MB of cache.
So my main question really is: what is being cached? And in what kind of usage would I notice any benefit from it?
Now you are really being unclear, and you still don’t seem to understand the difference between RAM on the server and your “experience” on your end device.
I have no idea what this means
So you have not only changed your hardware (for your server), you have now changed the topic to “what is being cached, and what benefits do you get out of this?”

Short answer: I have no idea what is being cached, at what stage/time, or whether it is what you (or the system) actually need access to at a given time. If the info is not in the RAM cache, it’s on your storage.

If you don’t have sufficient RAM, i.e. the recommended amount, then I think Wally pretty much wrapped up this topic.
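For what it’s worth, you can at least see how much the kernel counts as reclaimable cache. A hedged sketch, assuming a standard Linux `/proc/meminfo` (the page cache is just recently read file data, e.g. the recorder database, kept in otherwise idle RAM):

```shell
# Sum the reclaimable, file-backed memory the kernel reports.
# These pages are freed automatically the moment a process needs RAM,
# which is why a "full" 4 GB guest isn't actually short on memory.
awk '/^(Cached|Buffers|SReclaimable):/ { kb += $2 }
     END { printf "reclaimable cache: %d MB\n", kb / 1024 }' /proc/meminfo
```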
I would say most of the time the extra memory is just used for cache and could be removed, but a few updates of the HA core require an update of the database and that eats a lot of memory.
I am not sure what the lower limit is for this process or if it changes from update to update.
I do know that the one place where I do not want a crash for sure is during a database upgrade session.
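Since an out-of-memory crash mid-migration is the worst case, one precaution before a core update is a quick check that the guest actually has headroom. A sketch only; the 1 GB threshold here is my own arbitrary guess, not an official requirement:

```shell
# Warn if less than ~1 GB is actually available before kicking off
# an update that may need to migrate/rebuild the database.
# NOTE: the 1 GB figure is a made-up safety margin, not documented.
avail_kb=$(awk '/^MemAvailable:/ { print $2 }' /proc/meminfo)
if [ "$avail_kb" -lt $((1024 * 1024)) ]; then
    echo "Only $((avail_kb / 1024)) MB available - consider adding RAM first"
else
    echo "OK: $((avail_kb / 1024)) MB available"
fi
```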