I have a custom Docker container that includes the drivers for Microsoft SQL Server, and it was working well until the upgrade to 0.76.
FROM homeassistant/home-assistant:latest
RUN apt-get update && \
apt-get install -y freetds-dev unixodbc-dev unixodbc-bin && \
pip3 install pyodbc pymssql
I get this error many times in the log and then the recorder fails to setup.
2018-08-17 13:30:42 ERROR (Recorder) [homeassistant.components.recorder] Error during connection setup: (pymssql.OperationalError) (156, b"Incorrect syntax near the keyword 'COLUMN'.DB-Lib error message 156, severity 15:\nGeneral SQL Server error: Check messages from the SQL Server\n") [SQL: 'ALTER TABLE events ADD COLUMN context_id CHARACTER(36)'] (Background on this error at: http://sqlalche.me/e/e3q8) (retrying in 3 seconds)
Edit: It looks like MSSQL doesn’t accept the word COLUMN here. The _add_columns method in recorder/migration.py is generating a command that MSSQL can’t process.
Yeah, I came across that just a moment ago. I’m going to give it a go by hand and see. I hate that different databases need slight tweaks to the schema; I thought that was exactly what something like SQLAlchemy was supposed to work around.
Of course, when you execute a raw SQL statement directly, there’s nothing SQLAlchemy can do. Maybe there is another way to use the tool to generate schema changes?
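For illustration, here is a minimal sketch of what a dialect-aware statement builder could look like. This is not the actual Home Assistant code, and add_column_sql is a hypothetical helper; the point is just that SQL Server rejects the optional COLUMN keyword that MySQL, PostgreSQL, and SQLite all accept.

```python
# Hypothetical sketch: build the ADD-column statement per dialect, since
# T-SQL rejects the optional COLUMN keyword that other engines accept.

def add_column_sql(dialect: str, table: str, column: str, col_type: str) -> str:
    """Return an ALTER TABLE statement valid for the given dialect."""
    if dialect == "mssql":
        # SQL Server: ALTER TABLE t ADD col type   (no COLUMN keyword)
        return f"ALTER TABLE {table} ADD {column} {col_type}"
    # MySQL, PostgreSQL, and SQLite all accept the COLUMN keyword
    return f"ALTER TABLE {table} ADD COLUMN {column} {col_type}"

print(add_column_sql("mssql", "events", "context_id", "VARCHAR(36)"))
# ALTER TABLE events ADD context_id VARCHAR(36)
print(add_column_sql("mysql", "events", "context_id", "VARCHAR(36)"))
# ALTER TABLE events ADD COLUMN context_id VARCHAR(36)
```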
Edit: This is what I did, and so far, so good. I spun up a new database and a fresh instance of HA, let it create the schema, then ran a schema compare between the two, and things look good.
Edit 2: I first created the new columns as CHARACTER (which is what the code says), but to match the newly created database they need to be VARCHAR. Updated below:
ALTER TABLE events ADD context_id VARCHAR(36)
ALTER TABLE events ADD context_user_id VARCHAR(36)
CREATE NONCLUSTERED INDEX [ix_events_context_id] ON [dbo].[events] ([context_id] ASC )
CREATE NONCLUSTERED INDEX [ix_events_context_user_id] ON [dbo].[events] ([context_user_id] ASC )
ALTER TABLE states ADD context_id VARCHAR(36)
ALTER TABLE states ADD context_user_id VARCHAR(36)
CREATE NONCLUSTERED INDEX [ix_states_context_id] ON [dbo].[states] ([context_id] ASC )
CREATE NONCLUSTERED INDEX [ix_states_context_user_id] ON [dbo].[states] ([context_user_id] ASC )
INSERT INTO [dbo].[schema_changes] (schema_version, changed) VALUES (6, GETUTCDATE())
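As a sanity check on the statement sequence above, here is a rough equivalent run against an in-memory SQLite database. This is a stand-in only: NONCLUSTERED and GETUTCDATE() are T-SQL, so they are replaced with a plain CREATE INDEX and datetime('now'), and the table definitions are trimmed to the bare minimum.

```python
import sqlite3

# Stand-in for the manual MSSQL migration, run against SQLite just to
# check the shape of the statements (SQLite also accepts ADD without COLUMN).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE events (event_id INTEGER PRIMARY KEY)")
cur.execute("CREATE TABLE schema_changes (schema_version INTEGER, changed TEXT)")

cur.execute("ALTER TABLE events ADD context_id VARCHAR(36)")
cur.execute("ALTER TABLE events ADD context_user_id VARCHAR(36)")
cur.execute("CREATE INDEX ix_events_context_id ON events (context_id)")
cur.execute("CREATE INDEX ix_events_context_user_id ON events (context_user_id)")
cur.execute(
    "INSERT INTO schema_changes (schema_version, changed) "
    "VALUES (6, datetime('now'))"
)

# Verify the new columns are present
cols = [row[1] for row in cur.execute("PRAGMA table_info(events)")]
print(cols)  # ['event_id', 'context_id', 'context_user_id']
```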
I have the same problem, and more problems with the history graph, recorder, history, and logbook… How can these problems be resolved after updating to 0.76? On 0.75.3 everything worked perfectly.
Sorry, I’ll have to defer to someone else on how to do this for MySQL. I could figure out the commands but I don’t have a MySQL install to test and verify.
The query will be fixed in 0.76.1 and the word COLUMN will be removed.
@BullFrog I’ve seen your snarky comments in various threads on the forums. They are not appreciated. This is open source. You’re welcome to join development or help with beta testing before major releases.
Because I have a larger database in MySQL and was seeing posts like these, I manually updated the database to be safe, using the following statements:
ALTER TABLE homeassistant.events ADD context_id VARCHAR(36);
ALTER TABLE homeassistant.events ADD context_user_id VARCHAR(36);
ALTER TABLE homeassistant.events ADD INDEX ix_events_context_id (context_id);
ALTER TABLE homeassistant.events ADD INDEX ix_events_context_user_id (context_user_id);
ALTER TABLE homeassistant.states ADD context_id VARCHAR(36);
ALTER TABLE homeassistant.states ADD context_user_id VARCHAR(36);
ALTER TABLE homeassistant.states ADD INDEX ix_states_context_id (context_id);
ALTER TABLE homeassistant.states ADD INDEX ix_states_context_user_id (context_user_id);
INSERT INTO homeassistant.schema_changes(schema_version, changed) VALUES(6, NOW());
My events and states tables each have about 2,520,890 records; the migration took about 2 hours per table on a Raspberry Pi 3.
Another issue that was mentioned several times was the tmp directory running out of space, so I temporarily changed the tmpdir setting in MySQL’s my.cnf configuration and pointed it at a larger USB stick mounted on the Raspberry Pi.
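For example, the temporary override could look like this in my.cnf (the path is just an example for a mounted USB stick; the mysql user needs write access to it, and the server has to be restarted for the change to take effect):

```ini
[mysqld]
# Point temporary table storage at a larger mounted volume during the migration
tmpdir = /mnt/usb/mysql-tmp
```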
Perhaps try again now with the latest update, 0.76.1, or give it a shot manually if you are comfortable doing so. Always make a backup before attempting it.
Thank you @geoffrey, this fixed my issue. I guess I wasn’t patient enough the first time and restarted before the process finished. I used your statements to add the objects that were still missing; that took an extra 20-30 minutes, but it is working fine now.
How can we tell that the database update is running and/or has finished? I have a very powerful machine running Docker containers for both HA and MySQL, so I’m guessing it won’t take as long as on a RPi, but how do I tell when it’s done?
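One way to check, based on the schema_changes inserts earlier in this thread rather than any official mechanism: query the latest schema_version from the schema_changes table in your MySQL client (SELECT MAX(schema_version) FROM schema_changes;). Once it reads 6, the 0.76 migration has been recorded. A small SQLite sketch of the idea:

```python
import sqlite3

# Sketch: the migration appends a row to schema_changes when it completes,
# so the highest schema_version tells you how far the upgrade has gotten.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE schema_changes (schema_version INTEGER, changed TEXT)")
cur.executemany(
    "INSERT INTO schema_changes (schema_version, changed) VALUES (?, datetime('now'))",
    [(5,), (6,)],
)

version = cur.execute("SELECT MAX(schema_version) FROM schema_changes").fetchone()[0]
print(version)  # 6 -> the 0.76 migration (schema version 6) has been recorded
```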