
Recently I switched to Amazon EC2 + Jetty 9 + Oracle JDK 7u45 to save cost. I have found the Jetty server very unstable: it crashes randomly without producing any JVM dump file.

I tried enabling stdout with dumpBeforeStop=TRUE, but it does not append the dump messages to stderrout.log before the crash.

It does not seem to be related to OutOfMemoryError: I enabled the verbose GC options and found there was still plenty of available memory before the crash:

    162604K->3340K(176960K), 0.2240040 secs] 248332K->89101K(373568K), 0.2736860 secs] [Times: user=0.01 sys=0.01, real=0.28 secs]

I tried downgrading to Jetty 8 with different JDK combinations (JDK 6 / JDK 7) and still got the same problem.

I tried removing all JVM options and running Jetty with "sudo java -jar start.jar". It still crashes.

Is there any other way to troubleshoot the problem?

Ken Tsang

3 Answers


Lots of topics here...

First, what version of Jetty 9? (Be specific! Edit your question with these details.)

The dumpBeforeStop=TRUE you mention is not a known Jetty configuration. If you mean the startup property jetty.dump.stop=true, then that dumps the state of the server + handler tree during a formal/graceful shutdown. It has nothing to do with server memory or server crashes.

If you want to see the server and handler tree dump without stopping the server, you can either use the startup property jetty.dump.start=true, or enable JMX, access the org.eclipse.jetty.server:type=server,id=0 MBean, and invoke its dump() operation.
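Roughly, the two approaches look like this (a sketch assuming a standalone Jetty distribution started from its install directory; the JMX port 1099 and the no-auth settings are arbitrary illustration values, not recommendations):

```shell
# Dump the server + handler tree during startup:
java -jar start.jar jetty.dump.start=true

# Or start with remote JMX enabled, then connect with jconsole
# and invoke dump() on the org.eclipse.jetty.server MBean:
java -Dcom.sun.management.jmxremote \
     -Dcom.sun.management.jmxremote.port=1099 \
     -Dcom.sun.management.jmxremote.authenticate=false \
     -Dcom.sun.management.jmxremote.ssl=false \
     -jar start.jar
```

These are startup/config fragments; exact property names and module wiring differ between Jetty versions, which is part of why the Jetty version matters here.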

OutOfMemoryError can occur for various reasons. (You didn't paste the full error message and stack trace to narrow down the cause.) It can result from insufficient heap, permgen, threads, file descriptors, etc. Without the extra information from the OutOfMemoryError message, advising you on which path to look at is nigh impossible.

The GC event logs provide far too limited a view of what is going on. You could have had a single action that attempted to allocate 4 GB; that would not show up in the GC logs, but it would still cause an OutOfMemoryError. You could have a scenario where the server attempted to allocate a new thread and the OS prevented it; that would also cause an OutOfMemoryError: unable to create new native thread.
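The thread-allocation case in particular is governed by OS-level limits rather than the heap, so heap-focused GC logs never show it. A quick way to inspect those limits (a sketch; the limits and how to raise them vary by distribution):

```shell
# Max user processes/threads for the current user; if the JVM hits this
# ceiling you get "java.lang.OutOfMemoryError: unable to create new
# native thread" even with plenty of free heap:
ulimit -u

# Open file descriptor limit; exhaustion here produces its own class
# of failures that also never appear in GC logs:
ulimit -n
```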

Switching Jetty or JVM versions will have no effect on an OutOfMemoryError.

"Removing all JVM options" can also have no effect, depending on how you have your Jetty configured, which you have not specified in your question.

Depending on your specific version of Jetty, the startup is different. (eg: Jetty 7/8/9.0 vs Jetty 9.1)

Depending on your specific installation technique of Jetty, your startup can be different. (eg: standalone vs embedded, from official jetty-distribution, from linux distribution/packaging, from cloud provider packaging, isolated vs unixfied directory structure, split jetty.home vs jetty.base, service startup vs shell vs cron, shell script or java command line, no start.ini vs start.ini and/or start.d, etc...)

In short, your question is valid but vague; the number of possible paths for advice is too great, given the limited information you have provided.

joakime

Finally, I solved this problem by adding swap space.

The default AMI on an Amazon t1.micro instance has no swap space. I followed this post to create a 1 GB swap space, and the JVM has now been up and running for over a week.
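For reference, a 1 GB swap file is typically set up along these lines (a sketch, run as root; /swapfile is an arbitrary path, and the exact steps in the post the answer links to may differ):

```shell
# Confirm there is currently no swap (SwapTotal: 0 kB on the stock AMI):
grep SwapTotal /proc/meminfo

# Create a 1 GB file, restrict its permissions, format and enable it:
dd if=/dev/zero of=/swapfile bs=1M count=1024
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# Persist across reboots:
echo '/swapfile none swap sw 0 0' >> /etc/fstab
```

With no swap, a memory spike on a small instance gives the kernel no headroom at all, which is consistent with the silent crashes described in the question.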

Ken Tsang

Thanks for your quick answer. I tried jetty-9.0.6, jetty-9.0.5, and jetty-8.1.14; all of them have the same problem.

dumpBeforeStop is a valid config in jetty.xml; its default value is false. I agree it has nothing to do with server crashes if it only runs during a graceful shutdown. OutOfMemoryError is not the case here: I turned on -XX:+HeapDumpOnOutOfMemoryError and no dump file is generated when it crashes.
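Note the sign in that flag: -XX:+HeapDumpOnOutOfMemoryError enables the dump, while -XX:- disables it. A launch line that makes the JVM leave evidence behind might look like this (a sketch; the /var/log/jetty paths are arbitrary choices, not defaults):

```shell
java -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/var/log/jetty \
     -XX:ErrorFile=/var/log/jetty/hs_err_pid%p.log \
     -jar start.jar
```

Neither flag helps if the process is killed from outside the JVM, though, since no OutOfMemoryError is ever thrown in that case.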

I tried removing all JVM options because I wanted to run Jetty with clean default settings and see whether anything changed.

The problem right now is that the JVM crashes randomly without any hs_err file or any useful error message. If something cannot be handled by Jetty, there should be an error in stderr/stdout.

If something cannot be handled by the JVM, there should be a HotSpot error file. Sadly, it just crashes silently.

I turned on Jetty debug mode last night and found it crashed again this morning. These are the last few lines of the log before the crash:

    2013-11-13 06:26:00.891:DBUG:oejsh.ContextHandler:scope /||/article.jsp @ o.e.j.w.WebAppContext{/,file:/tmp/jetty-0.0.0.0-8080-put2012.war--xxx.xx-/webapp/,xxx.xx},/home/ec2-user/jetty/webapps/put2012.war
    2013-11-13 06:26:00.891:DBUG:oejsh.ContextHandler:context=||/article.jsp @ o.e.j.w.WebAppContext{/,file:/tmp/jetty-0.0.0.0-8080-put2012.war--xxx.xx-/webapp/,xxx.xx},/home/ec2-user/jetty/webapps/put2012.war
    2013-11-13 06:26:00.891:DBUG:oejs.session:sessionManager=org.eclipse.jetty.server.session.HashSessionManager@1acc0e01
    2013-11-13 06:26:00.891:DBUG:oejs.session:session=null
    2013-11-13 06:26:00.891:DBUG:oejs.ServletHandler:servlet |/article.jsp|null -> jsp
    2013-11-13 06:26:00.891:DBUG:oejs.ServletHandler:chain=null
    2013-11-13 06:26:00.894:DBUG:oejs.session:new session & id 1hva53vl2jfs9m6voqnqdamyj 1hva53vl2jfs9m6voqnqdamyj
    2013-11-13 06:26:01.885:DBUG:oejw.WebAppClassLoader:loaded class com.sun.mail.handlers.text_plain from WebAppClassLoader=put2012@3ef07355
    2013-11-13 06:26:01.885:DBUG:oejw.WebAppClassLoader:loaded class com.sun.mail.handlers.text_plain from WebAppClassLoader=put2012@3ef07355

You can see there are no hints about why it crashed.

Without the corpse, I have nothing to inspect why it was killed.

I tried a thread dump with jstack and found nothing abnormal. It would only be useful if I could dump the threads at the moment of the crash.
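When a JVM vanishes with no hs_err file and nothing on stderr, one common culprit on memory-constrained Linux hosts is the kernel OOM killer, which records its action in the kernel log rather than anywhere the JVM can write (a sketch; log locations vary by distribution, and dmesg may require root):

```shell
# Look for kernel OOM-killer records; the fallback echo keeps the
# pipeline's exit status clean when nothing matches or dmesg is denied:
dmesg 2>/dev/null | grep -iE 'killed process|out of memory' \
    || echo "no OOM-killer records found (or dmesg not permitted)"

# On syslog-based systems the same records usually land here:
grep -i 'killed process' /var/log/messages 2>/dev/null || true
```

That failure mode fits this thread: the accepted fix was adding swap space, which is exactly what relieves OOM-killer pressure on a swapless t1.micro.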

Ken Tsang