After updating, operator doesn't seem to reconnect properly on Kubernetes clusters #657
Comments
CC @metacosm
It is possible that this got fixed in 1.9.11 / 2.0.0 (Java / Quarkus SDKs). We were running with Quarkus SDK 2.0.0.CR2, which was based on an older version of the Java SDK that had a bug where watches were not recreated after a timeout. This issue might stem from that bug, in which case we can test and then verify/close.
Which version is causing the issue? We fixed an issue in 1.9.11 with watchers not being able to reconnect to the server.
We were using quarkus-sdk 2.0.0.CR2, which was released before 1.9.11. 2.0.0 looks like it is based on 1.9.11, correct?
Yes, 2.0.0 is using 1.9.11.
Closing for now. Please re-open if you find that the issue is still present with the latest version.
Bug Report
We have been using the Operator SDK in real, production-like scenarios, with over 200,000 Secrets on a single Kubernetes cluster.
In that scenario we have noticed that the operator's connection can be unstable and disconnects very often.
@secondsun added a fix to restart the operator when the connection is dropped, but it looks like in recent versions that part of the code is no longer triggered because the connection is kept open by the underlying watcher. The problem is that we see watchers sitting idle and not responding, meaning that Java Operator SDK operators keep running but no longer react properly to any events.
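To illustrate the kind of restart-on-disconnect logic described above (a minimal sketch, not the operator's actual code, assuming the fabric8 Kubernetes client 5.x API where onClose receives a WatcherException):

```java
import io.fabric8.kubernetes.api.model.Secret;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.Watcher;
import io.fabric8.kubernetes.client.WatcherException;

// Sketch only: re-register the watch whenever the server closes it,
// so a dropped connection does not leave the operator silently idle.
public class RestartingSecretWatcher implements Watcher<Secret> {

    private final KubernetesClient client;
    private final String namespace;

    public RestartingSecretWatcher(KubernetesClient client, String namespace) {
        this.client = client;
        this.namespace = namespace;
    }

    public void start() {
        client.secrets().inNamespace(namespace).watch(this);
    }

    @Override
    public void eventReceived(Action action, Secret secret) {
        // Hand the event to the reconciliation logic here.
    }

    @Override
    public void onClose(WatcherException cause) {
        // The watch was closed (timeout, network issue, etc.):
        // re-establish it instead of leaving the operator running but idle.
        start();
    }
}
```

The concern in this issue is precisely that such onClose-style restart paths may never fire if the underlying connection is kept "open" while no longer delivering events.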
This is quite challenging with the Java Operator SDK; we have effectively seen "data loss". Go-based operators work better on such clusters, mainly because their architecture periodically re-checks the CRs in an event loop rather than relying solely on the watch.
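For comparison, the periodic re-check that Go controllers rely on could in principle be approximated with a scheduled, list-based resync that does not depend on the watch staying alive. A rough sketch, using hypothetical helper names that are not part of the Java Operator SDK:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of a list-based resync loop, similar in spirit to the periodic
// resync Go controllers perform. The reconcileAll Runnable is a hypothetical
// placeholder for "list all CRs and reconcile each one".
public class ResyncLoop {

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void start(Runnable reconcileAll) {
        // Even if the watch goes silent, all resources are listed and
        // reconciled every 10 minutes, so missed events are eventually repaired.
        scheduler.scheduleAtFixedRate(reconcileAll, 0, 10, TimeUnit.MINUTES);
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}
```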
According to @secondsun:
Logs:
https://gist.github.com/secondsun/7abd69a12e5a393841c0edd8156dcc1d
You can see the difference in versions in the PR that downgrades them:
https://github.com/redhat-developer/app-services-operator/pull/288/files