Works perfectly for me, as this is the same method my council uses for the bin collections.
For the days left I used the following Python script:
from datetime import date

import bin_collection_dates  # your script exposing next_collection

today = date.today()
future = bin_collection_dates.next_collection  # must be a datetime.date
days_left = (future - today).days
print(days_left)
This assumes that your bin collection dates script is called bin_collection_dates.py and exposes next_collection as a datetime.date object.
Thanks for the scripts @Rookeh!!!
Hi,
sorry to bump this thread, but I would really like to have this sensor for days_left.
I use another waste package (@xirixiz’s mijnafvalwijzer) and have a sensor giving me the next pickup in this format:
- platform: scrape
  resource: !secret scrape_resource_date
  name: Afval Datum
  select: ".firstDate"
  scan_interval: 60
with this as output:
Would there be any way to reformat that and use it in a template to count the days left?
Thanks for any help!
How about something like this?
Thanks,
I’ll note that one.
It doesn’t work in my setup, though, because my sensor.afval_datum doesn’t return a timestamp but a string.
Can we somehow make a timestamp out of a string?
Can you try something like
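For anyone finding this later, a minimal template-sensor sketch that parses a date string into a day count; the %d-%m-%Y format string is an assumption, so adjust it to whatever sensor.afval_datum actually returns:
- platform: template
  sensors:
    afval_days_left:
      friendly_name: "Afval days left"
      value_template: >-
        {# parse the scraped string into a datetime, then diff whole dates #}
        {% set pickup = strptime(states('sensor.afval_datum'), '%d-%m-%Y') %}
        {{ (pickup.date() - now().date()).days }}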
Would anyone be able to help with the following website? https://www.sedgemoor.gov.uk/article/1448?n=148663&e=331382&u=10009328112
I am unable to scrape the information required, and cannot install pup to be able to scrape that info either.
Any help would be appreciated!
I’ll give you a hand mate after Christmas - struggling for time at the minute.
Your council website is pretty easy to read - have you tried a simple scrape sensor?
- platform: scrape
  resource: "https://www.sedgemoor.gov.uk/article/1448?n=148663&e=331382&u=10009328112"
  name: "Refuse Collection"
  select: ".mysedgemoor-list__item__wrapper:nth-child(9)"
  scan_interval: 300
and
- platform: scrape
  resource: "https://www.sedgemoor.gov.uk/article/1448?n=148663&e=331382&u=10009328112"
  name: "Garden Collection"
  select: ".mysedgemoor-list__item__wrapper:nth-child(11)"
  scan_interval: 300
Thanks Rob for the help! The refuse scrape worked, but for the Garden Collection I just get unknown. I’ve tried to play around with the nth-child but nothing appears.
Can you try
select: ".mysedgemoor-list__item__wrapper:nth-child(11) .mysedgemoor-list__data"
Just tried that again; it is still unable to scrape the information.
For completeness here is @Stringyb92’s solution for anyone else interested.
- platform: scrape
  resource: "https://www.sedgemoor.gov.uk/article/1448?n=148663&e=331382&u=10009328112"
  name: "Refuse Collection Scrape"
  select: ".content__wrapper--withmenu > div:nth-of-type(7) > div:nth-of-type(2)"
  scan_interval: 86400
  headers:
    User-Agent: Mozilla/5.0
- platform: scrape
  resource: "https://www.sedgemoor.gov.uk/article/1448?n=148663&e=331382&u=10009328112"
  name: "Garden Waste Collection Scrape"
  select: ".content__wrapper--withmenu > div:nth-of-type(9) > div:nth-of-type(2)"
  scan_interval: 86400
  headers:
    User-Agent: Mozilla/5.0
Hello,
I get the JSON listed below back from my community when I do a POST to their API:
How can I filter out the next collection date from this JSON and display it in my dashboard? It must be done with value templating and JSON, but I can’t figure it out exactly.
With kind regards,
Remco
The Netherlands
[
{
"pickupDates": [
"2019-01-16T00:00:00",
"2019-02-13T00:00:00",
"2019-03-13T00:00:00",
"2019-04-10T00:00:00",
"2019-05-08T00:00:00",
"2019-06-05T00:00:00",
"2019-07-03T00:00:00",
"2019-07-31T00:00:00",
"2019-08-28T00:00:00",
"2019-09-25T00:00:00",
"2019-10-23T00:00:00",
"2019-11-20T00:00:00",
"2019-12-18T00:00:00"
],
"pickupType": 0,
"_pickupType": 0,
"_pickupTypeText": "GREY",
"description": "GREY"
},
{
"pickupDates": [
"2019-01-02T00:00:00",
"2019-01-30T00:00:00",
"2019-02-27T00:00:00",
"2019-03-27T00:00:00",
"2019-04-24T00:00:00",
"2019-05-22T00:00:00",
"2019-06-19T00:00:00",
"2019-07-17T00:00:00",
"2019-08-14T00:00:00",
"2019-09-11T00:00:00",
"2019-10-09T00:00:00",
"2019-11-06T00:00:00",
"2019-12-04T00:00:00",
],
"pickupType": 10,
"_pickupType": 10,
"_pickupTypeText": "PACKAGES",
"description": "PACKAGES"
},
{
"pickupDates": [
"2019-01-09T00:00:00",
"2019-01-23T00:00:00",
"2019-02-06T00:00:00",
"2019-02-20T00:00:00",
"2019-03-06T00:00:00",
"2019-03-20T00:00:00",
"2019-04-03T00:00:00",
"2019-04-17T00:00:00",
"2019-05-01T00:00:00",
"2019-05-15T00:00:00",
"2019-05-29T00:00:00",
"2019-06-12T00:00:00",
"2019-06-26T00:00:00",
"2019-07-10T00:00:00",
"2019-07-24T00:00:00",
"2019-08-07T00:00:00",
"2019-08-21T00:00:00",
"2019-09-04T00:00:00",
"2019-09-18T00:00:00",
"2019-10-02T00:00:00",
"2019-10-16T00:00:00",
"2019-10-30T00:00:00",
"2019-11-13T00:00:00",
"2019-11-27T00:00:00",
"2019-12-11T00:00:00"
],
"pickupType": 1,
"_pickupType": 1,
"_pickupTypeText": "GREEN",
"description": "GREEN"
},
{
"pickupDates": [
"2019-01-21T00:00:00",
"2019-03-04T00:00:00",
"2019-04-15T00:00:00",
"2019-05-27T00:00:00",
"2019-07-08T00:00:00",
"2019-08-19T00:00:00",
"2019-09-30T00:00:00",
"2019-11-11T00:00:00",
"2019-12-23T00:00:00",
"2019-01-21T00:00:00",
"2019-03-04T00:00:00",
"2019-04-15T00:00:00",
"2019-05-27T00:00:00",
"2019-07-08T00:00:00",
"2019-08-19T00:00:00",
"2019-09-30T00:00:00",
"2019-11-11T00:00:00",
"2019-12-23T00:00:00"
],
"pickupType": 2,
"_pickupType": 2,
"_pickupTypeText": "PAPER",
"description": "PAPER"
}
]
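One way to pull the next date out of JSON like that is a REST sensor whose value template filters pickupDates for the first date on or after today. A sketch, assuming the response body is exactly the list above; the two secrets are placeholders for your community’s API endpoint and POST body:
- platform: rest
  resource: !secret waste_api_url
  method: POST
  headers:
    Content-Type: application/json
  payload: !secret waste_api_payload
  name: "Next GREY Pickup"
  value_template: >-
    {# pick the GREY fraction, then the first date that is not in the past #}
    {% set grey = value_json | selectattr('description', 'eq', 'GREY') | first %}
    {{ grey.pickupDates | select('ge', now().strftime('%Y-%m-%dT00:00:00')) | list | first }}
  scan_interval: 86400
The same template works for the other fractions by swapping the description.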
https://www.hinckley-bosworth.gov.uk/collections
How easy is that one to scrape? Could you provide assistance with that one, please?
Not that easy - I grabbed a sample curl request below; if you follow my original post using this URL, it might work for you:
curl 'https://www.hinckley-bosworth.gov.uk/collections' -H 'Connection: keep-alive' -H 'Cache-Control: max-age=0' -H 'Upgrade-Insecure-Requests: 1' -H 'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8' -H 'Referer: https://www.hinckley-bosworth.gov.uk/address-search?location=change&redirect=refuse&fpcode=LE10+0FR&layer=' -H 'Accept-Encoding: gzip, deflate, br' -H 'Accept-Language: en-US,en;q=0.9' -H 'Cookie: lypkOO2pWJKOxrF8=5detrahedrqa4dukbs724tdu5ng6n9g2r5taq1hi7hnimfgo87n0; TestCookie=Test; __utma=258801043.1600286332.1549929041.1549929041.1549929041.1; __utmc=258801043; __utmz=258801043.1549929041.1.1.utmcsr=community.home-assistant.io|utmccn=(referral)|utmcmd=referral|utmcct=/; __utmt=1; mylocation=%7B%22postcode%22%3A%22LE100FR%22%2C%22myaddress%22%3A%22Hinckley+And+Bosworth+Borough+Council%2C+The+Hinckley+Hub%2C+Rugby+Road%2C+Hinckley%2C+Leicestershire%2C+LE10+0FR%22%2C%22uprn%22%3A%22010090028045%22%2C%22usrn%22%3A%2217401086%22%2C%22ward%22%3A%22CAS%22%2C%22parish%22%3A%22%22%2C%22lng%22%3A%22-1.3763956159446%22%2C%22lat%22%3A%2252.535658758267%22%7D; __utmb=258801043.4.10.1549929041; socitm_exclude_me29=true' --compressed
Has anyone tried this for Hounslow council in London? https://www.hounslow.gov.uk/homepage/86/recycling_and_waste_collection_day_finder#collectionday
Thanks!
Hi,
I recognised the structure of the JSON file, so your municipality also uses https://2go-mobile.com/referenties? This is also used in, among others:
- Ede
- Renkum
- Veenendaal
- Wageningen
- Almere
- Daarle
- Daarlerveen
- Haarle
- Hellendoorn
- Nijverdal
- Almelo
- Borne
- Enschede
- Haaksbergen
- Hengelo
- Hof van Twente
- Losser
- Oldenzaal
- Wierden
- Coevorden / Zweeloo
- Emmen
- Hoogeveen
Did you find a way to parse it?
I now have a very clumsy way of getting the dates (I’m a complete noob):
- platform: command_line
  name: "Fetch date"
  value_template: '{{ value_json.dataList[0].pickupDates[0] }},{{ value_json.dataList[0]._pickupTypeText }} ; {{ value_json.dataList[1].pickupDates[0] }},{{ value_json.dataList[1]._pickupTypeText }} ; {{ value_json.dataList[2].pickupDates[0] }},{{ value_json.dataList[2]._pickupTypeText }} ; {{ value_json.dataList[3].pickupDates[0] }},{{ value_json.dataList[3]._pickupTypeText }} ; {{ value_json.dataList[4].pickupDates[0] }},{{ value_json.dataList[4]._pickupTypeText }}'
  command: >-
    curl 'https://wasteapi.2go-mobile.com/api/GetCalendar' -H 'origin: https://afvalportaal.2go-mobile.com' -H 'accept-encoding: gzip, deflate, br' -H 'accept-language: nl-NL,nl;q=0.9,en-US;q=0.8,en;q=0.7' -H 'user-agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36' -H 'content-type: application/json;charset=UTF-8' -H 'accept: application/json, text/plain, */*' -H 'referer: https://afvalportaal.2go-mobile.com/modules/REDACTED/kalender/calendar.html?REDACTED' -H 'authority: wasteapi.2go-mobile.com' -H 'dnt: 1' --data-binary '{"companyCode":"REDACTED","startDate":"2019-02-17","endDate":"2022-01-09","uniqueAddressID":"REDACTED"}' --compressed
  scan_interval: 14400
If there is anyone with a better solution, please let me know.
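A slightly less brittle variant of the same sensor (a sketch, untested; it reuses the curl call above but lets the template pick the first GREY date that is still in the future instead of hard-coding index 0):
- platform: command_line
  name: "Next GREY pickup"
  command: >-
    curl -s 'https://wasteapi.2go-mobile.com/api/GetCalendar' -H 'content-type: application/json;charset=UTF-8' --data-binary '{"companyCode":"REDACTED","startDate":"2019-02-17","endDate":"2022-01-09","uniqueAddressID":"REDACTED"}'
  value_template: >-
    {# select the GREY entry by its label, then the first not-yet-passed date #}
    {% set grey = value_json.dataList | selectattr('_pickupTypeText', 'eq', 'GREY') | first %}
    {{ grey.pickupDates | select('ge', now().strftime('%Y-%m-%dT00:00:00')) | list | first }}
  scan_interval: 14400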
June 1st, 2019:
found a custom component made by @yniezink: https://github.com/yniezink/twentemilieu. You will have to adapt this to your municipality.
August 9th, 2019:
a custom component for Ede, Renkum, Veenendaal and Wageningen can be found here: https://github.com/Cadsters/acv-hass-component
Can anyone help me work out whether I can use my local service?
It is available here: https://varde-sb.renoweb.dk/Legacy/selvbetjening/mit_affald.aspx
From there you type an address in the field and search. The data I need is then in the field “Der er tilmeldt følgende materiel” (Danish for “the following containers are registered”).
Now, I know zero about coding, but I pressed F12 in Chrome and figured out where the data I need comes from, and this seems to be the endpoint:
https://varde-sb.renoweb.dk/Legacy/JService.asmx/GetAffaldsplanMateriel_mitAffald
Otherwise, it is actually pretty simple: regular waste is picked up on Mondays in odd weeks, so the next one is Monday 25/02 and it is always 14 days from there. Recycling is every 4th week; the next time is also this Monday, 25/02.
Maybe it is easier to just make a dumb script of some sort for this?
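A fixed 14-day cycle from a known collection day is easy to compute in a template sensor. A minimal sketch, assuming Monday 2019-02-25 as the anchor date (it knows nothing about holiday shifts):
- platform: template
  sensors:
    regular_waste_days_left:
      friendly_name: "Regular waste days left"
      value_template: >-
        {# days since the known collection day, folded into a 14-day cycle #}
        {% set anchor = strptime('2019-02-25', '%Y-%m-%d').date() %}
        {% set elapsed = (now().date() - anchor).days %}
        {{ (14 - (elapsed % 14)) % 14 }}
For the four-weekly recycling, the same template with 28 instead of 14 would do.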
You need to figure out what your address ID is, then post this as JSON to the URL.
For example, posting
{"adrid":26275}
to the URL
https://varde-sb.renoweb.dk/Legacy/JService.asmx/GetAffaldsplanTakst_mitAffald
would return:
{
"d": "{\"list\":[{\"id\":135274,\"navn\":\"140 L cont 14 dg helårsbolig\",\"modul\":{\"id\":1,\"name\":\"Dagrenovation\",\"shortname\":\"\"},\"antal\":\"1\",\"totalpris\":\"1.017,50 kr.\"},{\"id\":21757,\"navn\":\"Genbrugsordning bolig\",\"modul\":{\"id\":2,\"name\":\"Genbrug\",\"shortname\":\"\"},\"antal\":\"1\",\"totalpris\":\"1.480,00 kr.\"},{\"id\":94546,\"navn\":\"240 L genbrug 4. uge\",\"modul\":{\"id\":2,\"name\":\"Genbrug\",\"shortname\":\"\"},\"antal\":\"1\",\"totalpris\":\"0,00 kr.\"}],\"status\":{\"id\":0,\"status\":\"Ok\",\"msg\":\"\"}}"
}
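In Home Assistant that could become a REST sensor; a sketch, untested, using the example adrid above (note that the d field is a JSON string inside JSON, hence the extra from_json pass):
- platform: rest
  resource: https://varde-sb.renoweb.dk/Legacy/JService.asmx/GetAffaldsplanTakst_mitAffald
  method: POST
  headers:
    Content-Type: application/json
  payload: '{"adrid":26275}'
  name: "Mit Affald materiel"
  value_template: >-
    {# decode the inner JSON string, then read the first item's name #}
    {{ (value_json.d | from_json)['list'][0]['navn'] }}
  scan_interval: 86400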
There are potentially a few other calls made as well, but the language makes it a little difficult for me to understand.
It might be easier to use something like Node-RED; otherwise you could use wget/curl to call it from a shell script and parse it however needed.
I appreciate, though, that if you know zero about coding this might be difficult to achieve.