So, I have a very basic Python Flask app running under uWSGI. On application load, a background job reads data into memory, and requests just serve this data (no updates). The data is roughly 1200 kB in total.
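For context, the app is basically this pattern (module, route and variable names are simplified/hypothetical, and the threading detail is just illustrative):

# app.py -- minimal sketch of the real service
import threading
from flask import Flask, jsonify

app = Flask(__name__)
DATA = {}  # filled once by the background job, then read-only

def load_data():
    # the real job pulls roughly 1200 kB of reference data from an internal source
    DATA.update({"example_key": "example_value"})

# started when uWSGI imports the module (hence enable-threads in app.ini)
threading.Thread(target=load_data, daemon=True).start()

@app.route("/lookup/<key>")
def lookup(key):
    return jsonify({"value": DATA.get(key)})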

When I deploy this to our shared Kubernetes environment with 2 replicas, I suddenly see:

"* uWSGI listen queue of socket ":9000" (fd: 8) full !!! (101/100) *"

There's only one other tool that hits the service every now and then (every 15 or 20 minutes). Other than that, traffic is nil (i.e. no clients are querying it apart from us devs). My uWSGI config file app.ini looks like this:

# The following article was referenced while creating this configuration
# https://www.techatbloomberg.com/blog/configuring-uwsgi-production-deployment/
# Please make changes according to your application's expected load, etc
[uwsgi]
strict = true                          ; Only valid uWSGI options are tolerated
master = true                          ; The master uWSGI process is necessary to gracefully re-spawn and pre-fork workers,
                                       ; consolidate logs, and manage many other features
enable-threads = true                  ; To run uWSGI in multithreading mode
vacuum = true                          ; Delete sockets during shutdown
single-interpreter = true              ; Sets only one service per worker process
die-on-term = true                     ; Shutdown when receiving SIGTERM (default is respawn)
need-app = true                        ; Prevents uWSGI from starting if it is unable to find or load your application module

;disable-logging = true                ; By default, uWSGI has rather verbose request logging. Uncomment
;log-4xx = true                        ; these lines to disable it while still logging 4xx and 5xx
;log-5xx = true                        ; responses. Ensure your application emits concise, meaningful logs.

# Logging for uWSGI
req-logger = file:/opt/docker/logs/application.log
logger = file:/opt/docker/logs/application.log
logformat = %(ltime) : PID %(pid) : %(proto) : %(uagent) : %(method) : %(uri) : %(status)
                                       ; (ltime) : Human readable timestamp (Apache format)
                                       ; (pid)   : Worker pid
                                       ; (proto) : Protocol
                                       ; (uagent): User Agent
                                       ; (method): Request method
                                       ; (uri)   : Request URI
                                       ; (status): Response status code

cheaper-algo = busyness
processes = 2                        ; Maximum number of workers allowed
threads = 10                         ; Threads per process
cheaper = 1                          ; Minimum number of workers allowed - default 1
cheaper-initial = 1                  ; Workers created at startup
cheaper-overload = 1200              ; Will check busyness every 1200 seconds
cheaper-busyness-max = 2400          ; maximum busyness we allow
cheaper-step = 1                     ; How many workers to spawn at a time

This works perfectly well with 1 replica, but that message starts appearing across both pods as soon as I go to 2 replicas. I also came across this question on SO, but it doesn't tell me anything more than I already know/do. Why am I seeing this issue? What is suddenly overloading the uWSGI listen queue? What am I missing here?
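In case it helps with diagnosis, I'm thinking of enabling uWSGI's stats server in app.ini and watching the workers and listen queue with uwsgitop from inside the pod (port is arbitrary):

stats = 127.0.0.1:9191                 ; expose worker and listen-queue metrics as JSON
memory-report = true                   ; include per-worker memory usage in the stats

; then, inside the pod:
;   pip install uwsgitop
;   uwsgitop 127.0.0.1:9191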
