[ARM64] finish script lifetime reached maximum value - sending it a SIGKILL #11963
Comments
Here is the
Could it be that Gitea could not finish the database migrations in time and got killed by the supervisor?
That sounds plausible. How could I give it more time?
That should be in the s6 supervisor script; I don't know much about it, though.
From https://github.com/just-containers/s6-overlay:
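Going by the s6-overlay README (worth confirming against the current docs), the relevant knob appears to be the `S6_KILL_FINISH_MAXTIME` environment variable, which caps how long a finish script may run, in milliseconds, before s6 sends it a SIGKILL. A sketch for an s6-overlay-based image; the service name and value here are illustrative:

```yaml
# Sketch, assuming an image built on s6-overlay: raise the finish-script
# timeout (milliseconds) so a long-running migration/shutdown step is
# not SIGKILLed at the small default.
services:
  gitea:
    image: gitea/gitea:latest
    environment:
      - S6_KILL_FINISH_MAXTIME=300000  # 5 minutes, in ms
```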
I will test it now and report back.
My bad, it turns out that Gitea doesn't use s6-overlay as a Docker base image, so the above won't work. How should I change the kill times manually?
Oh, I did not notice it was in Docker, sorry. I can't really comment much on this topic.
From https://skarnet.org/software/s6/servicedir.html:
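As I read the servicedir documentation (double-check against the linked page), a plain s6 service directory can contain a `timeout-finish` file holding a number of milliseconds after which the `./finish` script is killed with SIGKILL, with 0 meaning no limit. A sketch, using a stand-in path since the real service directory depends on the image:

```shell
# Sketch: write a "timeout-finish" value (milliseconds) into a service
# directory. 300000 = 5 minutes; 0 would disable the timeout entirely.
# /tmp/s6-demo/gitea is a stand-in; use the image's real service directory.
svcdir="${SVCDIR:-/tmp/s6-demo/gitea}"
mkdir -p "$svcdir"
echo 300000 > "$svcdir/timeout-finish"
cat "$svcdir/timeout-finish"
```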
I have let it run now and will report back.
So far it seems like Gitea took all of my Pi's RAM (4GB) and it has now OOM'ed, killing SSH.
I have the same memory issue on a Raspberry Pi 3. It is not crashing the Raspberry Pi, but it uses almost all of its RAM.
After hard-restarting the Raspberry Pi, I disabled all other Docker services, leaving nearly all of the 4GB for Gitea to consume. So I started Gitea again, and to my surprise, within 30 seconds and with less than 3GB of usage, the migration completed and everything is back to normal. No damage or data loss is evident. Perhaps it was nearly done when it OOM'd before the hard restart. While this worked out for me with 4GB of RAM, I'm not sure people with less powerful SBCs would have the same luck. I will leave this open for discussion. EDIT: Would using a different DB, e.g. MySQL, reduce the RAM usage?
I am using a memory limit now in Docker for my Gitea setup. Hopefully the migration continues without issues.
Just out of curiosity, how did you set the memory limits?
If it was a memory issue, PR #11975 could probably resolve it.
Here is a link to the Docker documentation: If you are using docker-compose, you might want to take a look here:
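For plain `docker run`, the documented resource flags would look roughly like this (values are illustrative, not a recommendation):

```shell
# Sketch using documented `docker run` flags; values are illustrative.
# --memory sets a hard RAM cap for the container; --memory-swap is the
# RAM+swap total (setting it equal to --memory disables swap use).
docker run -d --name gitea --memory=2g --memory-swap=2g gitea/gitea:latest
```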
Ah yes, but unfortunately these settings only work in Swarm mode as of Compose file version 3:
I thought that you had found a different approach.
That's why I still use the old version of Compose; I don't want to move to Swarm.
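In the Compose file format 2.x, memory limits can be set directly on a service and take effect without Swarm, unlike the v3 `deploy.resources` section mentioned above. A minimal sketch (values illustrative):

```yaml
# Compose file format 2.x sketch: mem_limit applies outside Swarm mode.
version: "2.4"
services:
  gitea:
    image: gitea/gitea:latest
    mem_limit: 2g        # hard cap on container RAM
    memswap_limit: 2g    # RAM + swap total; equal to mem_limit disables swap
```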
This issue has been automatically marked as stale because it has not had recent activity. I am here to help clear issues left open even if solved or waiting for more insight. This issue will be closed if no further activity occurs during the next 2 weeks. If the issue is still valid, just add a comment to keep it alive. Thank you for your contributions.
This issue has been automatically closed because of inactivity. You can re-open it if needed.
Technical
Log
Description
This has been repeating for about 2 hours, taking up 100% CPU in the process.