I just encountered an issue with uWSGI: the workers segfaulted during a scheduled restart, which led supervisord to believe they were still running.
2025-10-24 01:56:57,670 INFO [ckan.config.middleware.flask_app] 200 / render time 0.010 seconds
2025-10-24 01:57:57,218 INFO [ckan.config.middleware.flask_app] 200 /api/3/action/status_show render time 0.002 seconds
2025-10-24 01:57:57,670 INFO [ckan.config.middleware.flask_app] 200 / render time 0.010 seconds
2025-10-24 01:58:57,210 INFO [ckan.config.middleware.flask_app] 200 /api/3/action/status_show render time 0.002 seconds
2025-10-24 01:58:57,673 INFO [ckan.config.middleware.flask_app] 200 / render time 0.010 seconds
2025-10-24 01:59:57,205 INFO [ckan.config.middleware.flask_app] 200 /api/3/action/status_show render time 0.002 seconds
2025-10-24 01:59:57,664 INFO [ckan.config.middleware.flask_app] 200 / render time 0.010 seconds
2025-10-24 02:00:57,214 INFO [ckan.config.middleware.flask_app] 200 /api/3/action/status_show render time 0.002 seconds
2025-10-24 02:00:57,666 INFO [ckan.config.middleware.flask_app] 200 / render time 0.014 seconds
2025-10-24 02:01:57,217 INFO [ckan.config.middleware.flask_app] 200 /api/3/action/status_show render time 0.002 seconds
2025-10-24 02:01:57,671 INFO [ckan.config.middleware.flask_app] 200 / render time 0.010 seconds
2025-10-24 02:02:08 - worker 1 lifetime reached, it was running for 3601 second(s)
2025-10-24 02:02:08 - worker 2 lifetime reached, it was running for 3601 second(s)
2025-10-24 02:02:08 - worker 3 lifetime reached, it was running for 3601 second(s)
2025-10-24 02:02:08 - worker 4 lifetime reached, it was running for 3601 second(s)
2025-10-24 02:02:09 - !!! uWSGI process 1662 got Segmentation Fault !!!
2025-10-24 02:02:09 - !!! uWSGI process 1661 got Segmentation Fault !!!
2025-10-24 02:02:09 - !!! uWSGI process 1663 got Segmentation Fault !!!
2025-10-24 02:02:09 - !!! uWSGI process 1660 got Segmentation Fault !!!
2025-10-24 02:59:48 - *** uWSGI listen queue of socket "127.0.0.1:8080" (fd: 6) full !!! (100/100) ***
2025-10-24 02:59:49 - *** uWSGI listen queue of socket "127.0.0.1:8080" (fd: 6) full !!! (100/100) ***
2025-10-24 02:59:50 - *** uWSGI listen queue of socket "127.0.0.1:8080" (fd: 6) full !!! (100/100) ***
2025-10-24 02:59:51 - *** uWSGI listen queue of socket "127.0.0.1:8080" (fd: 6) full !!! (100/100) ***
2025-10-24 02:59:52 - *** uWSGI listen queue of socket "127.0.0.1:8080" (fd: 6) full !!! (100/100) ***
2025-10-24 02:59:53 - *** uWSGI listen queue of socket "127.0.0.1:8080" (fd: 6) full !!! (100/100) ***
2025-10-24 02:59:54 - *** uWSGI listen queue of socket "127.0.0.1:8080" (fd: 6) full !!! (100/100) ***
2025-10-24 02:59:55 - *** uWSGI listen queue of socket "127.0.0.1:8080" (fd: 6) full !!! (100/100) ***
2025-10-24 02:59:56 - *** uWSGI listen queue of socket "127.0.0.1:8080" (fd: 6) full !!! (100/100) ***
2025-10-24 02:59:57 - *** uWSGI listen queue of socket "127.0.0.1:8080" (fd: 6) full !!! (100/100) ***
2025-10-24 02:59:58 - *** uWSGI listen queue of socket "127.0.0.1:8080" (fd: 6) full !!! (100/100) ***
2025-10-24 02:59:59 - *** uWSGI listen queue of socket "127.0.0.1:8080" (fd: 6) full !!! (100/100) ***
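For context on why supervisord kept reporting the service as up after the segfaults: a PID-based supervisor tracks its direct child (here the uWSGI master), and as long as that process exists the service looks healthy, even when every worker behind it is dead. Roughly the kind of check involved (the helper name is mine for illustration, not supervisord's API):

```python
import os

def is_process_alive(pid: int) -> bool:
    """PID-level liveness probe: signal 0 checks for existence
    without actually delivering a signal to the process."""
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False          # no such process
    except PermissionError:
        return True           # exists, but owned by another user
    return True
```

A check like this passes for the surviving master process even though no worker is accepting requests, which matches the hour-long "listen queue full" window above; an HTTP-level health check against e.g. /api/3/action/status_show would have caught it.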
I believe migrating to Gunicorn would mitigate subtle failures like this, since its master process respawns workers that die.
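If a migration goes ahead, a minimal `gunicorn.conf.py` with worker recycling comparable to uWSGI's max-worker-lifetime setting might look like the following sketch; the concrete values are illustrative placeholders, not a recommendation:

```python
# gunicorn.conf.py -- illustrative sketch, values are placeholders
bind = "127.0.0.1:8080"
workers = 4

# Recycle each worker after roughly this many requests, analogous to
# uWSGI's worker lifetime limit; the jitter staggers restarts so all
# four workers are never recycled at the same instant.
max_requests = 1000
max_requests_jitter = 100

# Workers that exceed this many seconds on a request are killed and
# respawned; the master (arbiter) replaces dead workers automatically.
timeout = 30
```

Note that unlike uWSGI's wall-clock lifetime, `max_requests` counts requests served, so a mostly idle worker is recycled less often.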