
I'm in a bit of a bind here and I'm looking for pointers on how best to manage my situation:

So, I have a Flask app running under uWSGI, and I'm deploying it to Kubernetes with 2 pods. The uWSGI configuration is 1 process and 4 threads, and we're really expecting something like ~50 requests/hour, so about 1-2 requests/min.
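For reference, the uWSGI config is essentially this (simplified sketch; the module name and port are placeholders):

```ini
[uwsgi]
module = app:app          ; Flask entry point (placeholder name)
master = true
processes = 1             ; single worker so every request sees the same in-memory data
threads = 4               ; a few concurrent requests within that one worker
enable-threads = true
http = 0.0.0.0:8000       ; port is a placeholder
```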

Why 1 process? Because we're loading a lot of data into memory (don't ask me why; I know it's wrong, and I finally won my battle to introduce a DB, but that will happen later), and uWSGI doesn't guarantee that data is shared across worker processes. That means with two processes - pid x and pid y - if x has all the data loaded and y decides to service the request, y can return the wrong result. With just one process and multiple threads, I can expect the same result every time, since all threads refer to the same process's memory.
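To illustrate the single-process assumption, the app does roughly this (heavily simplified; the loader and route are made-up placeholders):

```python
# app.py - illustrative sketch, not the real code
from flask import Flask, jsonify

app = Flask(__name__)

def load_everything_into_memory():
    # Placeholder for the expensive load we do today
    # (in reality a large dataset pulled into a dict).
    return {"example-key": {"value": 42}}

# Runs once at import time, i.e. once per uWSGI worker process.
# With processes=1 every thread answers from this same dict; with
# multiple processes each worker would hold its own copy, possibly
# at a different stage of loading.
DATA = load_everything_into_memory()

@app.route("/lookup/<key>")
def lookup(key):
    return jsonify(DATA.get(key, {}))
```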

With 2 pods (replicas), if one pod goes down (because of memory issues, say), the second pod should be able to service the traffic, right (until the first pod comes back up and loads the data into its memory)?
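For context, the Deployment is basically just this (simplified; image, port and probe endpoint are placeholders), and I'm wondering whether a readiness probe and a memory limit along these lines are the right next step, so a pod only receives traffic once its data load has finished:

```yaml
# deployment.yaml - simplified sketch, names/ports are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-uwsgi
spec:
  replicas: 2
  selector:
    matchLabels:
      app: flask-uwsgi
  template:
    metadata:
      labels:
        app: flask-uwsgi
    spec:
      containers:
        - name: app
          image: registry.example.com/flask-uwsgi:latest
          ports:
            - containerPort: 8000
          # Only route traffic to this pod once the in-memory data
          # load has finished (the /healthz endpoint is hypothetical).
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8000
            initialDelaySeconds: 30
            periodSeconds: 10
          resources:
            limits:
              memory: "2Gi"
```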

^ Does this setup make sense? What more can I do to make this deployment more "resilient"?

(We're also adding alerts & dashboards as a part of monitoring)

  • If all your data is in memory, how is your second pod going to work? That's just like having another process. The two pods don't share memory; if one pod goes down, you've just lost all that state. If you can use a second pod, you could also use multiple processes. – larsks Aug 31 '22 at 01:20
  • Every pod kicks off its own data fetch process. Re: "if one pod goes down, you've just lost all that state" - it's all read-only operations :) – Saturnian Aug 31 '22 at 14:50
