2022.12: Scrape and scan interval

Yes, how do you disable auto update?

1 Like

Disable polling in the system options for that entity. (I haven’t updated to 2022.12 just yet, so I don’t have the new UI-based scrape integration, but the setting should be the same as for any other UI-configured polled entity.)

  1. Click the three-dot menu in the scrape integration card for that entity, in the Integrations page of your Home Assistant Settings

  2. Turn off “Enable polling for updates”

Congratulations! :slight_smile: Now you’re responsible for updating this entity (for example, with an automation calling the homeassistant.update_entity service). I like the flexibility to change how often I update an entity like this; I can base it on other conditions (e.g. poll a radio station website frequently while I’m actively listening to it so I can get song title and artist info; otherwise, don’t bother polling at all).
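For instance, a minimal automation that polls such an entity every 10 minutes could look like the sketch below (sensor.radio_now_playing is a made-up placeholder entity ID; substitute your own):

```yaml
# Sketch: poll a scrape entity on our own schedule instead of HA's built-in polling.
# sensor.radio_now_playing is a hypothetical entity ID.
automation:
  - alias: "Refresh scrape sensor every 10 minutes"
    trigger:
      - platform: time_pattern
        minutes: "/10"
    action:
      - service: homeassistant.update_entity
        target:
          entity_id: sensor.radio_now_playing
```

You could of course swap the time_pattern trigger for a state trigger, a template trigger, or anything else, which is the whole point of doing it this way.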

3 Likes

Both UI and YAML setup are still supported, while YAML provides additional configuration possibilities.
See Scrape - Home Assistant

Yes, I have read that. But will it be automatically moved to UI at some point in time like it has been done for many integrations in the past?

What will happen with those additional possibilities when yaml is not supported anymore? Or maybe there are no plans to drop the yaml support?

Thank you very much, works like a charm! A little bit more cumbersome than the scan_interval approach, but I really like the full flexibility a lot!

1 Like

Or if a deprecated functionality (like SCAN_INTERVAL) is never being ported to the UI?

Scan interval is removed in the UI because the intention is for the user to create an automation to force updates using whatever trigger they want.

1 Like

Thanks.

Do you happen to have a time pattern that runs every hour (or 3600 seconds) after the previous run, not at a fixed time on the hour? So basically it would run when HA is started, and then exactly 60 minutes after that?

  hours: "*"
  minutes: 0
  seconds: 0

This will run every hour at a fixed time

just use

  trigger:
    - platform: time_pattern
      hours: "/1"
1 Like

Thanks for the tip, but that does NOT work as expected. This pattern triggers an automation every hour at the hour. Exactly as my (false) example above.
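For what it’s worth, a strict “every 3600 seconds since startup” cadence can’t be expressed with time_pattern alone, since time_pattern always matches against the wall clock. One sketch is an automation that triggers at Home Assistant start and then loops with a delay (sensor.my_scrape is a hypothetical entity ID):

```yaml
# Sketch: update an entity at HA start and then every 3600 seconds thereafter.
# Not tied to the wall clock; the cycle restarts whenever HA restarts.
automation:
  - alias: "Update every hour after startup"
    mode: restart
    trigger:
      - platform: homeassistant
        event: start
    action:
      - repeat:
          while: "{{ true }}"
          sequence:
            - service: homeassistant.update_entity
              target:
                entity_id: sensor.my_scrape  # hypothetical entity
            - delay:
                seconds: 3600
```

The delay measures from the end of the previous update, so the interval is relative to the last run rather than anchored to minute 0 of the hour.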

I am not sure I understand.

I have a warning in the Settings subpage, more serious-looking than anything HA has ever shown me before.


It states that the Scrape YAML configuration “stops working in version 2022.0”. Then it states “Your existing YAML configuration will be working for 2 more versions.”

It states as well “Migrate your YAML configuration to the integration key according to the documentation.” It is not linked to the documentation, but if I found it correctly (this link ), the only mention of migration, or of the UI version at all, on that page is:

Both UI and YAML setup is supported while YAML provides additional configuration possibilities.

1 Like

read here, it helps :wink: : The new way to SCRAPE

It does not answer @QbaF’s question in any way. The warning says that YAML will be discontinued in 2 releases, while the documentation says both are supported. Moreover, it states that there are more configuration possibilities with YAML, which means you can’t migrate those to the GUI.

Where did I say that it is a final solution? I’m trying to help by pointing people to a place where discussion on the topic is going on, no more than that.
Reactions like yours are even less productive than what I did…

1 Like

Actually that is not what the repair says. The repair says that configuration for scrape in yaml has been changed. In the past you did it in the legacy way using the platform key by putting something like this in your configuration.yaml:

sensor:
- platform: scrape
  resource: http://xxx/
  select: td
  index: 0
  name: some td
...

This form of yaml configuration has been deprecated. Now yaml configuration must be done in the current best practice way, under the integration key, by putting something like this in your configuration.yaml:

scrape:
- resource: http://xxx/
  sensor:
  - select: td
    index: 0
    name: some TD
  ...

Alternatively you may configure it in the GUI. But as the repair notes, the GUI doesn’t have feature parity with yaml right now so that may not be possible for you.

In two releases the first style of yaml config will stop working. The new style of yaml config shown second is not deprecated at all and does not have an end of life date. It also has full feature parity with the legacy style of yaml config.

6 Likes

No, read it again, please:
It says that it has been moved to the “integration key” and that existing YAML will be working for 2 more versions. The documentation does not say anything about an “integration key”, so I’m actually not even sure what it is… but I’m assuming it’s the GUI / config flow / Settings → Devices → Add integration.
It does not say at all that you can fix it in YAML to be compatible with the current way of configuring Scrape.
But maybe my English is not good enough to understand what’s written in this message?

1 Like

Except scan_interval, which now has to be done with an automation, or just left at the default 10 seconds.

The integration key is scrape. That’s what that means. Hence why all the examples in the documentation were updated to that new style.

Also, hi, I’m mdegat01 on GitHub. I work for nabu casa. I promise I know what I’m talking about.

EDIT: if you want more info, the integration key is the unique identifier of the integration. For each integration you can find it in its manifest under domain; here’s scrape’s.
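For illustration, the domain field in an integration’s manifest.json is that integration key. Scrape’s manifest contains roughly this (abridged, other fields omitted):

```json
{
  "domain": "scrape",
  "name": "Scrape",
  "config_flow": true,
  "documentation": "https://www.home-assistant.io/integrations/scrape"
}
```

That domain value (“scrape”) is exactly the top-level key you use in configuration.yaml with the new style of YAML config.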

It is kind of a technical term though. I’d agree that repair isn’t totally clear what it’s asking you to do. It is really just trying to tell you to look at the doc and update accordingly.

5 Likes

The documentation was updated yesterday, I’m sure; the whole page is way different from what it was before yesterday :wink:

The correct new way is now written there for the YAML way.

When config is being rewritten to a new style, part of that work always includes an audit. Features creep in over time, and they aren’t always good decisions. If one is deemed a bad decision, that is the perfect time to deprecate and remove it and keep it out of the new stuff.

Feature parity doesn’t always mean the new thing does everything the old thing did. It means the new thing does everything the old thing did that should be kept and everything else gets deprecated. That was the decision with scan_interval.

1 Like