Scrape Sensor gives IndexError

When I use the Scrape sensor, I get the following error in the log file for one particular instance.

```
2017-08-16 23:45:00 ERROR (MainThread) [homeassistant.core] Error doing job: Task exception was never retrieved
Traceback (most recent call last):
  File "/usr/lib/python3.4/asyncio/tasks.py", line 233, in _step
    result = coro.throw(exc)
  File "/srv/homeassistant/lib/python3.4/site-packages/homeassistant/helpers/entity_component.py", line 381, in async_process_entity
    new_entity, self, update_before_add=update_before_add
  File "/srv/homeassistant/lib/python3.4/site-packages/homeassistant/helpers/entity_component.py", line 212, in async_add_entity
    yield from self.hass.async_add_job(entity.update)
  File "/usr/lib/python3.4/asyncio/futures.py", line 388, in __iter__
    yield self  # This tells Task to wait for completion.
  File "/usr/lib/python3.4/asyncio/tasks.py", line 286, in _wakeup
    value = future.result()
  File "/usr/lib/python3.4/asyncio/futures.py", line 277, in result
    raise self._exception
  File "/usr/lib/python3.4/concurrent/futures/thread.py", line 54, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/srv/homeassistant/lib/python3.4/site-packages/homeassistant/components/sensor/scrape.py", line 98, in update
    value = raw_data.select(self._select)[0].text
IndexError: list index out of range
```
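
If I read the traceback right, raw_data.select(self._select) returns an empty list when the CSS selector matches nothing in the downloaded page, and taking [0] of that empty list is what raises the IndexError. A tiny self-contained example of that failure mode (my own snippet, not the component code):

```python
from bs4 import BeautifulSoup

# A page where the selector does not match anything
raw_data = BeautifulSoup("<html><body><p>no PM10 cell here</p></body></html>", "html.parser")

print(raw_data.select("td#cur_pm10"))           # prints [] because nothing matches
value = raw_data.select("td#cur_pm10")[0].text  # raises IndexError: list index out of range
```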

I have other Scrape sensors working just fine. The one that causes the issue uses the following configuration:

```yaml
- platform: scrape
  scan_interval: 3600
  resource: "http://aqicn.org/city/belgium/fla/beveren-waas/"
  select: "td#cur_pm10"
  name: Particulate Matter
  unit_of_measurement: "μg/m3"
```
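
To check whether the selector ever matches anything on that page, I put together a rough standalone version of what I think the sensor does (assuming requests is installed alongside beautifulsoup4; the script and its variable names are my own, not taken from scrape.py):

```python
# Rough standalone check of the resource/select pair from my config.
# This is my own sketch, not the actual scrape.py code.
import requests
from bs4 import BeautifulSoup

RESOURCE = "http://aqicn.org/city/belgium/fla/beveren-waas/"
SELECT = "td#cur_pm10"

response = requests.get(RESOURCE, timeout=10)
print("HTTP status:", response.status_code)

raw_data = BeautifulSoup(response.text, "html.parser")
matches = raw_data.select(SELECT)

if matches:
    print("Matched:", matches[0].text)
else:
    # Same empty-list situation that makes scrape.py raise IndexError
    print("Selector matched nothing in", len(response.text), "characters of HTML")
```

If the page sometimes comes back without that cell (or the request gets blocked), the empty list would explain why the sensor only works occasionally.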

The strange thing is that it does sometimes work, but only very rarely. I have beautifulsoup4==4.6.0 installed.

I tried to debug scrape.py with breakpoints in PyCharm, but was unsuccessful since I'm new to Python. Is there a recommended way to debug a specific component in a case like mine?
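
In case it helps with an answer: what I was hoping to do is pause inside the component's update() and inspect raw_data and self._select, for example with pdb instead of PyCharm (just my guess at an approach, not something from the docs):

```python
# Added by hand inside scrape.py's update(), right before the select() call,
# just to inspect raw_data and self._select (my own guess, not from the docs).
import pdb; pdb.set_trace()  # pauses execution here and opens an interactive pdb prompt
```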

Thank you in advance!