Tesla-History / error recovery #356
-
@youzer-name - amazing timing on your post, as I am currently working on a new script based on tesla-history and had come across this potential issue. I am adding a "retry" option for when InfluxDB is unavailable or other errors occur, which should address this. I am also considering whether the docker-compose config for InfluxDB should have a "healthcheck" clause added, so that any service that "depends_on" InfluxDB will have a delayed startup, or possibly restart if it becomes unhealthy. Still testing, however I think it will be better to have a retry in the tesla-history script itself. Once I have sorted this out, I will roll my changes/improvements from the new script into tesla-history as well.
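For illustration, here is a minimal sketch of the kind of retry wrapper described above, with exponential backoff. This is not the actual tesla-history implementation — the `with_retries` helper, the retry counts, and the `influx.write_points(...)` call in the usage note are all hypothetical names chosen for the example:

```python
import time

def with_retries(func, retries=3, delay=1.0, backoff=2.0, exceptions=(Exception,)):
    """Call func(), retrying on failure with exponential backoff.

    Returns func()'s result on success; re-raises the last exception
    once all attempts are exhausted. 'exceptions' limits which errors
    trigger a retry (e.g. ConnectionError while InfluxDB is down).
    """
    wait = delay
    for attempt in range(1, retries + 1):
        try:
            return func()
        except exceptions as err:
            if attempt == retries:
                raise  # out of attempts - surface the error to the caller
            print(f"Attempt {attempt} failed ({err}); retrying in {wait:.1f}s")
            time.sleep(wait)
            wait *= backoff  # back off before the next attempt
```

A write to InfluxDB could then be wrapped as something like `with_retries(lambda: influx.write_points(points), exceptions=(ConnectionError,))`, so a brief outage is absorbed instead of killing the container.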
-
@mcbirse I've been having some "hiccups" on my Raspberry Pi that hosts Powerwall Dashboard, and I've had to restart the docker daemon a few times over the past few days to fix things. On a few occasions the tesla-history container has thrown this error:
I'm not sure if there is already some retry logic going on before it throws the error, but I don't see anything in the logs to indicate there is. Would it be a good idea to have the script retry in case the issue is temporary/intermittent?
It isn't a huge problem as-is, since I have monitoring set up to notify me if no new data is going into the database. When the data goes stale I just restart the container and manually run the script with --today and/or --yesterday to fill in any gaps. But it would be nice if the script could recover automatically.