I’m just updating my Pi-64 node, and the main HA Docker image is now over 2 GB and made up of nearly 30 layers. This node is on a fairly slow internet circuit and an update takes in excess of 30 minutes to run, so I rarely update.
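If you want to see where the bulk actually sits before committing to a pull, here is a rough sketch that just wraps the Docker CLI from Python. It assumes the image is already present locally and that `homeassistant/home-assistant:stable` is the tag your node runs, so adjust both to taste:

```python
import json
import subprocess

# Example tag only -- point this at whatever image/tag your node actually runs.
IMAGE = "homeassistant/home-assistant:stable"

# `docker history` prints one row per layer; the Go template gives us
# machine-readable JSON with a human-readable Size field for each layer.
result = subprocess.run(
    ["docker", "history", "--no-trunc", "--format", "{{json .}}", IMAGE],
    capture_output=True, text=True, check=True,
)

layers = [json.loads(line) for line in result.stdout.splitlines()]
print(f"{IMAGE}: {len(layers)} layers")
for layer in layers:
    # Show each layer's size alongside the (truncated) build step that created it.
    print(f"{layer['Size']:>10}  {layer['CreatedBy'][:70]}")
```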
Does the dev team have a view on how big HA should be allowed to grow before it becomes too big?
I do not think the devs really look into the resources needed to download the HA binaries.
What matters is the resource footprint while it is running, and seen in that light a 2 GB image is not that much.
Ever done a Windows update recently? Seen how much space the C:\Windows\WinSxS folder consumes? Cry!
Unlimited bandwidth with lightning speed is the default setting for most developers. It only becomes a clearly focused issue when you are programming for remote space probes, say one at Mars where the round-trip signal time can be around 16 minutes. For Voyager 1 the data round trip takes around 45 hours. You want to make sure your error-correcting protocols are robust and your commands well tested!
Data compression for software updates already exists and is the default for most updates in the industry.
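Docker layer downloads are in fact compressed on the wire (typically gzip), so the 2 GB on-disk figure is not what actually crosses the link. As a rough way to see the real download cost, here is a sketch against Docker Hub’s public tags API; the repository name and tag are assumptions, so change them if your node pulls from a different registry:

```python
import json
import urllib.request

# Assumed Docker Hub repository and tag -- change these if your install
# pulls from somewhere else (e.g. ghcr.io).
URL = "https://hub.docker.com/v2/repositories/homeassistant/home-assistant/tags/stable"

with urllib.request.urlopen(URL) as resp:
    tag = json.load(resp)

# Each entry in "images" is one architecture; its "size" is the compressed
# (wire) size, i.e. roughly what an update actually downloads.
for image in tag["images"]:
    mib = image["size"] / (1024 * 1024)
    print(f"{image['os']}/{image['architecture']}: ~{mib:.0f} MiB compressed")
```

On a slow circuit the compressed figure is the one worth watching, and it is normally a fair bit smaller than what `docker images` reports on disk.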
Balancing your need to keep up to date against the data you have available is something only you can determine.
New devices with greater capacity are relentlessly rolled out. Old ones are superseded. Inevitable progress. You cannot hold back the tide.