
We deploy our application on TomEE 7.0.3 (Tomcat 8.5.11) in Docker images. The production platforms run on Google Kubernetes Engine clusters, while development, staging, and so on run as plain Docker containers on Linux servers.

On production we see hundreds of gigabytes per month of DEBUG logs written by TomEE to Stackdriver. We are unable to reproduce this on any other system. The containers are configured identically across environments, except for the Java keystore and database connection; in particular, all logging.properties and logback.xml files are identical.

The DEBUG logs seem to come only from certain components; we could identify org.apache.cxf and net.sf.ehcache, for example. We tried many adjustments to the JULI and slf4j configuration files, either to decrease the log level for these components on production or to increase it on the other systems to reproduce the problem locally. But strangely, no modification to any logging.properties or logback.xml file seems to have any effect.
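For reference, one of the logback.xml variants we tried looked roughly like this (logger names taken from the messages below; the appender pattern and file placement on the TomEE classpath are from our setup and may differ in yours):

```xml
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <!-- Attempt to silence the chatty components explicitly -->
  <logger name="org.apache.cxf" level="WARN"/>
  <logger name="net.sf.ehcache" level="WARN"/>

  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
```

Even with this in place, the DEBUG output on production is unchanged.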

I'm not 100% sure, but these logs appear to be on stdout, because in Stackdriver they have the level "INFO"; stderr would be classified as "ERROR" according to the documentation.
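One way to confirm which stream the lines land on, sketched here with a stand-in producer (against the real container the same redirection works, e.g. `docker logs mycontainer >stdout.log 2>stderr.log`):

```shell
# Stand-in for the container process: one line to stdout, one to stderr.
produce() {
  echo '14:56:17.316 [exec-4] DEBUG o.a.cxf.phase.PhaseInterceptorChain - sample'
  echo '14:56:17.317 [exec-4] ERROR o.a.cxf.phase.PhaseInterceptorChain - sample' 1>&2
}

# Capture the two streams separately.
produce >stdout.log 2>stderr.log

grep -c ' DEBUG ' stdout.log          # prints 1 => DEBUG lines are on stdout
grep -c ' DEBUG ' stderr.log || true  # prints 0 => none on stderr
```

If the count on stdout.log matches what Stackdriver labels "INFO", that would confirm the stream-to-severity mapping.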

TomEE is started in the containers via a wrapper Bash script which at the end calls bin/catalina.sh run. However, we have also tried leaving out the wrapper script and starting TomEE from the Dockerfile with CMD ["catalina.sh", "run"]. This makes no difference.

Examples of these logs:

14:56:17.316 [https-jsse-nio-8443-exec-4] DEBUG o.a.cxf.phase.PhaseInterceptorChain - Invoking handleMessage on interceptor org.apache.cxf.jaxrs.interceptor.JAXRSOutInterceptor@21657494
14:56:17.316 [https-jsse-nio-8443-exec-4] DEBUG o.a.c.j.i.JAXRSOutInterceptor - Response content type is: application/json
14:56:17.316 [https-jsse-nio-8443-exec-4] DEBUG o.a.cxf.ws.addressing.ContextUtils - retrieving MAPs from context property javax.xml.ws.addressing.context.inbound
14:56:17.317 [https-jsse-nio-8443-exec-4] DEBUG o.a.cxf.ws.addressing.ContextUtils - WS-Addressing - failed to retrieve Message Addressing Properties from context
14:56:17.317 [https-jsse-nio-8443-exec-4] DEBUG o.a.cxf.phase.PhaseInterceptorChain - Invoking handleMessage on interceptor org.apache.cxf.interceptor.MessageSenderInterceptor$MessageSenderEndingInterceptor@5ff7053d
14:56:17.317 [https-jsse-nio-8443-exec-4] DEBUG o.a.c.t.http.AbstractHTTPDestination - Finished servicing http request on thread: Thread[https-jsse-nio-8443-exec-4,5,main]

and

15:07:14.805 [https-jsse-nio-8443-exec-4] DEBUG o.a.c.i.OutgoingChainInterceptor - Interceptors contributed by bus: []
15:07:14.805 [https-jsse-nio-8443-exec-4] DEBUG o.a.c.i.OutgoingChainInterceptor - Interceptors contributed by service: []
15:07:14.805 [https-jsse-nio-8443-exec-4] DEBUG o.a.c.i.OutgoingChainInterceptor - Interceptors contributed by endpoint: [org.apache.cxf.interceptor.MessageSenderInterceptor@461a6713]
15:07:14.805 [https-jsse-nio-8443-exec-4] DEBUG o.a.c.i.OutgoingChainInterceptor - Interceptors contributed by binding: [org.apache.cxf.jaxrs.interceptor.JAXRSOutInterceptor@21657494]
15:07:14.805 [https-jsse-nio-8443-exec-4] DEBUG o.a.cxf.phase.PhaseInterceptorChain - Chain org.apache.cxf.phase.PhaseInterceptorChain@7780f3e8 was created. Current flow:
  prepare-send [MessageSenderInterceptor]
  marshal [JAXRSOutInterceptor]

The very first instance of this after TomEE startup seems to be these lines, which I can see on production but not in any other container:

12:28:33.575 [main] DEBUG o.apache.cxf.common.logging.LogUtils - Using org.apache.cxf.common.logging.Slf4jLogger for logging.
12:28:33.681 [main] INFO  o.a.c.m.j.InstrumentationManagerImpl - registering MBean org.apache.cxf:bus.id=openejb.cxf.bus,type=Bus,instance.id=996125997: org.apache.cxf.bus.ManagedBus@5e17553a
12:28:33.696 [main] INFO  o.a.c.m.j.InstrumentationManagerImpl - registering MBean org.apache.cxf:bus.id=openejb.cxf.bus,type=Bus,instance.id=996125997: javax.management.modelmbean.RequiredModelMBean@189cbd7c
12:28:33.696 [main] INFO  o.a.c.m.j.InstrumentationManagerImpl - registered org.apache.cxf:bus.id=openejb.cxf.bus,type=Bus,instance.id=996125997

I'd be very glad for any hints as to

  • why TomEE produces DEBUG logs on Google Kubernetes Engine which are not visible on any pure Docker system and
  • why we can't seem to adjust the logging of these components with any of the logging.properties and logback.xml files.
f.sh

1 Answer


Stackdriver Logging uses Fluentd as its logging agent. In GKE, Fluentd is controlled by the master node. Usually you would configure Fluentd to customize what gets logged to Stackdriver; however, since Fluentd is configured by the master node, any changes you make to it will revert to the original configuration. It is also configured as a "catch-all", logging every line the containers emit.

If you would like to adjust what is being logged within Stackdriver, then I would suggest using Log Exclusions.
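As a sketch, an exclusion filter along these lines should match the DEBUG entries (the resource type and label names are assumptions based on the GKE log format; verify them against your actual log entries before relying on this):

```
resource.type="container"
resource.labels.cluster_name="my-production-cluster"
textPayload:"DEBUG"
```

Note that exclusions only stop ingestion into Stackdriver; the containers still produce the output.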

Edit: There are two other options I can recommend. The first is to disable Stackdriver Logging for the cluster entirely. You will still be able to see Kubernetes logs within the cluster, such as pod and container logs, but you will have to use kubectl (e.g. kubectl logs <pod>); the logs will no longer appear in Stackdriver Logging.

The other option is to create your own custom Kubernetes cluster from scratch on GCE. It is a more involved setup, but you will have control of the master node and can configure logging to your liking.

Jason
  • Thank you. We used Log Exclusions in Stackdriver so that the ingested volume does not get too much. However we would of course prefer if we could disable this kind of logging at the source if it's in any way possible. – f.sh Jul 03 '18 at 12:01
  • I have edited my answer to respond to your last comment. – Jason Jul 03 '18 at 14:58