Why use automatic SpeedTest/iPerf runs?

What purpose do people have for running these SpeedTests on their internet connections at specific times?

The output you get is just what that one app can pull/push through the connection, but the network gear will make sure that other apps get their share of the connection too. Since it is extremely hard to know which apps are running at the time, the value you get is also extremely uncertain.

It is like measuring how fast you can do 10 push-ups at the top of every hour.
If you are idle you can do them fast, if you are eating then slower, and if you are driving a car, not at all.
But the times tell you nothing about the situation you were in, so when you later look at them and try to do some statistics, your strength seems to jump from strong to dead to weak and so on.

Speedtests like that are meant to be run in as minimal and controlled an environment as possible for the value to be useful.

To me the only effect of these tests is to waste power on your network gear, slow the internet connection for the apps that are actually doing something useful, and put a load on the SpeedTest servers that makes the service more expensive to run for those who actually use it properly.

Not unless you enable it on your router, somehow. That’s called QoS and is generally not enabled by default.

Sure, no question about that. But it’s a pretty good indication of what your internet provider delivers vs. what it promises you.

At least half a dozen times I noticed it didn’t deliver, rang them, and got the service back to acceptable levels. If you don’t test, you just don’t know; any given download/upload is itself hugely impacted by the other party, while speedtests make sure you get the best possible counterpart.

My provider has an issue: due to a very long cable distance, from time to time the router loses some of the upload streams, leaving about 1/3 of the normal upload speed. I test this once during the night.
If the speed is slow, I switch off the provider’s router, wait 30 seconds and switch it on again.

I know QoS, but I do not think many would ever make a rule that gives their SpeedTest connection 100% of the bandwidth, and it is also somewhat hard to configure, because SpeedTest traffic often looks like normal traffic, so there is no real way to filter it out.

I was just replying to your argument here :wink:

Point is, exact values do not matter; only trends are interesting here.
If you see a 20% drop for 2 days (I test every 6h), you know you have an issue, whatever downloads/uploads might occur during that timeframe.
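That trend logic is easy to sketch in a few lines of Python (the function name, baseline and threshold below are my own, purely illustrative, not from any real integration):

```python
# Hypothetical sketch: flag a sustained bandwidth drop from periodic
# speedtest samples. Values are in Mbit/s and entirely made up.

def sustained_drop(samples, baseline, threshold=0.20, runs=8):
    """Return True if the last `runs` samples all sit more than
    `threshold` (as a fraction) below `baseline`."""
    if len(samples) < runs:
        return False
    recent = samples[-runs:]
    return all(s < baseline * (1 - threshold) for s in recent)

# One noisy sample (a big download during one test) does not trip it...
history = [95, 92, 60, 94, 93, 91, 96, 90]
print(sustained_drop(history, baseline=95))  # False

# ...but 8 consecutive low runs (2 days at 4 tests/day) does.
history = [70, 68, 72, 69, 71, 70, 67, 73]
print(sustained_drop(history, baseline=95))  # True
```

A single low sample does not trip the check; only a sustained drop across all recent runs does, which is exactly why the exact values don’t matter.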

If you have not tested in a controlled environment, then you cannot conclude anything, including that whatever the ISP did actually solved anything.
They might have reset the connection and thereby killed a big download of updates on some other device.
Your SpeedTest might then look fine again, but a few minutes later the big update downloads will restart.
All you got was a delay in the updates, and maybe a restart from scratch because the downloads could not resume from the point of interruption.
You think you have improved something, but the downloads have just moved to a time period where no test occurs.

You have no clue what the cause is unless the environment is controlled.
You just blame the ISP and hope they can fix something.
It’s pseudo-science.

That one I can somewhat understand, but often the router provides metrics on the port speeds over SNMP, including the internet connection.
This means it is possible to get values without putting an unnecessarily heavy load on the connection, which might otherwise cause the very issue you want to detect and prevent.

Not sure why the hate :wink:

I know my environment. I know exactly how much bandwidth I can expect on average because I run speedtests 4 times a day, through HA, in exactly the same conditions (same server, wired, ofc). You could call it controlled, apart from the fact that something else might be downloading/uploading, hence I ignore drops/peaks.

Don’t do it if you don’t believe it. I strongly believe it’s a pretty good indicator of my ISP quality of service.

A 40Mb download a heavy load? Talk about downloading Horizon from Steam (72Gb), that’s a heavy load :smiley:

Speed of ports has nothing to do with your Internet bandwidth, ofc. Whatever your router knows about the bandwidth, it would have to do a speedtest to know.

So have you turned off automatic updates on all your devices and apps, or set them all to fixed times outside the SpeedTest runs?
If not, then you really have no clue.

Computer OSes download large amounts of data at update time and sometimes almost nothing.
Having a streaming service running will also take some bandwidth.

You might know what your optimal value should be, but not really why it is not at that value at times.

Your SNMP values should provide you with the max speed configured for the port and values for the load.
I use my SNMP readouts like that and can therefore get speed data continuously without putting extra load on the connection.
Often the load will not be at max, so many values will be useless for determining the max delivered bandwidth, but max-load samples will still show up many times during a day.
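As a sketch of that approach, assuming you already poll an SNMP interface octet counter such as ifHCInOctets, the throughput arithmetic (including a single counter wrap) looks like this in Python — the counter values and interval are made up:

```python
# Hedged sketch: derive throughput from two readings of an SNMP
# interface octet counter, without generating any test traffic.

def throughput_mbps(octets_t0, octets_t1, seconds, counter_bits=64):
    """Average rate in Mbit/s between two counter samples taken
    `seconds` apart, handling one counter wrap-around."""
    delta = octets_t1 - octets_t0
    if delta < 0:                       # counter wrapped between samples
        delta += 2 ** counter_bits
    return delta * 8 / seconds / 1_000_000

# Two samples taken 60 s apart: 750 MB transferred -> 100 Mbit/s.
rate = throughput_mbps(1_000_000_000, 1_750_000_000, 60)
print(round(rate))  # 100
```

With 64-bit counters a wrap is rare but cheap to handle; sampling every minute or so gives a continuous rate without adding any load to the connection.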

  1. Look above for the notion of “trend”. Bandwidth down 20% for 2 days (8 runs)? I know it’s a bandwidth issue.
  2. I don’t have any auto-update enabled (I’m not crazy :slight_smile: )
  3. Look above for the fact that it already detected issues with my ISP.
  1. I don’t care if the port between my router and my VDSL modem is 1Gb/s if my max internet bandwidth is 100Mb/s :wink:
  2. Load, well, needs load. No load, no figures, so that’s hardly better on the “controlled” side, is it? Ofc, you can run a speedtest to produce load… Oh wait :smiley:

I have just seen lots of wasted energy here.
Many run these tests automatically without understanding the limitations.
It’s an unnecessary load on the internet connection and the servers, it’s money wasted on electricity for these devices, it’s time wasted on setting it up, and it’s time wasted on bug-hunting an unnecessary process.

To me it is as useless as trying to determine the percentage of cloud coverage.
That value is somewhat useless too, because it does not tell you whether the clouds cover the sun, or whether they are rain or storm clouds, or other info needed to make it useful.

Man, you have an always running server with HA doing stuff people did manually for centuries, like all of us here. You likely have a shitload of useless sensors consuming energy as well. You even maybe watch Netflix/Disney+/whatever.

Don’t greenwash me, please :wink:

Oh, I agree. But I do…

Actually my biggest concern is that the service will be useless when I actually need it.
Already now some servers have issues with higher bandwidth connections, like 10Gbit/s lines.
I like the service, but I see this more as abuse of it. Or maybe misuse, since people probably do not do it on purpose.

And yes, I have some sensors I do not need, but the devices that provide the sensors I need force the others on me too.
Although I am trying really hard to save energy, I do not do it to be green.
I do it to give me the freedom to do other stuff, money- and time-wise.
But HA has quite a large userbase, and this just requires a few clicks and then you have a sensor, which many think tells them how fast their connection is.
In fact it does not tell them what they think, but all the installations put a huge persistent load on the servers, making them costly to run.

Lol. Ok, now you make sense :smiley:

Hint: Disable auto-update and automate it, but not on the hour; you would then be in the same wagon as everyone else, and that might influence the result, if it even works at all.

I add a random delay:

- id: 'ce0e40ef-74f5-487b-899d-1ab66f2e0fd7'
  alias: 'Update speedtest'
  mode: restart
  trigger:
    - platform: time_pattern
      hours: "/3"
  action:
    # template picks a fresh random 1-10 min offset each run
    - delay: "{{ range(60, 600) | random }}"
    - service: homeassistant.update_entity
      target:
        entity_id: sensor.speedtest_download

I would never run a speedtest on a fully live system.
I would turn off as many devices as possible, or better yet disconnect the entire LAN and have only the test computer connected.
Then I would make sure as many services as possible are disabled on the test computer, especially network-using services like updates.
Only then would I do a test.

Once again: you don’t want a precise value. You won’t get it and you don’t care.
You want a derivative, i.e. you want to know whether, across multiple runs, maybe a week’s worth, on average, the bandwidth went down, was stable or, surprise, went up.

That’s my graph over a month.
Obviously, there are ups and downs, but obviously my bandwidth is stable on average, and inside the “guaranteed bandwidth” of my ISP, so all good.

Here I had an issue. I contacted my ISP, which did something on my line. It went above “normal” for a while before stabilizing on the usual figures.

It was totally, definitely useful to have those speedtests…

Another example:

[image: bandwidth graph from the QoS test]

This was part of a test to see how effective QoS was (enabled on the router). In this case Home Assistant was the only device that was throttled, and you can see from the graph that it was very effective indeed. Now I can apply it to other devices in an informed way. Like that bloody Xbox.