
I have the following (very simplified) network:

(ingress) -> DMZ (nginx) -> HA Proxy -> Reverse proxy (nginx) -> Application Server (tomcat)

All the layers send information to Prometheus and then we use Grafana to monitor them.

We have been having latency issues, so we need to analyse each layer individually to find out which one is the slowest.

Question: by picking a random request (at the DMZ), is there a way to trace it through the layers it passed through and see how long it took at each one? Something like:

(Request) ->
DMZ (nginx) -> 2 ms
HA Proxy -> 1 ms
Reverse proxy (nginx) -> 1 ms
Application Server (tomcat) -> 15 ms
Reverse proxy (nginx) -> 1 ms
HA Proxy -> 1 ms
DMZ (nginx) -> 2 ms
(Response)
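My rough idea is that each layer could stamp the request with a shared correlation ID and log its own timing, so the per-layer times can be joined afterwards by grepping for that ID. A minimal sketch of what that might look like in the two nginx layers (the `$request_id` variable requires nginx >= 1.11.0; the header name `X-Request-ID` is my own choice, not a standard):

```nginx
http {
    # DMZ nginx: generate a correlation ID, log it with the timings,
    # and forward it downstream.
    # $request_time          = total time nginx spent on the request
    # $upstream_response_time = time the next hop took to answer
    log_format trace '$request_id $request_time $upstream_response_time "$request"';

    server {
        listen 80;
        access_log /var/log/nginx/trace.log trace;

        location / {
            # $request_id is a random 32-hex-char ID generated per request
            proxy_set_header X-Request-ID $request_id;
            proxy_pass http://haproxy_frontend;
        }
    }
}

# Inner reverse-proxy nginx: reuse the incoming ID instead of generating
# a new one, so all layers log the same value.
# log_format trace_inner '$http_x_request_id $request_time $upstream_response_time "$request"';
```

HAProxy can record the same header with `capture request header X-Request-ID len 32` in its frontend, and Tomcat's AccessLogValve can log `%{X-Request-ID}i %D` (`%D` is the processing time in milliseconds). Then, for a given ID, each layer's total time minus its upstream time should be the time spent in that layer itself. Is there a more integrated way to get this, given that everything already reports to Prometheus/Grafana?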
  • I'm trying to understand why you have two instances of nginx AND HAProxy; it will make it really difficult to track down issues, and you would also have to collect logs and config from every service. In my mind, the first nginx should at least cache to improve performance. – djdomi Sep 07 '19 at 16:29
  • @djdomi thanks for your comment. We use HAProxy as a load balancer. The question is more about traceability :) – ehi84636 Sep 07 '19 at 17:07
  • But why don't you then use it in front, as the first point of contact? Every instance fewer is that much more speed. – djdomi Sep 07 '19 at 17:23
  • We could, but it's complicated and out of this question's scope. My question is more about how to get a detailed latency report, grouped by layer, for a given request. – ehi84636 Sep 07 '19 at 20:00
  • No, your thinking is wrong; you have to keep that in mind when you analyse this, but Tomcat will almost certainly be the slowest link in the chain :) – djdomi Sep 07 '19 at 20:28

0 Answers