
The hosting provider of my KVM server (Strato) recently upgraded their Virtuozzo installation from version 6 to 7. Since then, a few of my Docker containers fail to start with the following error message:

❯ sudo docker start spring_webapp

Error response from daemon: 
OCI runtime create failed: container_linux.go:349: starting container process caused
"process_linux.go:297: applying cgroup configuration for process caused 
\"failed to set memory.kmem.limit_in_bytes, 
because either tasks have already joined this cgroup 
or it has children\"": unknown

Error: failed to start containers: spring_webapp

The only thing the failing containers seem to have in common is that they each run a Java webapp. Other containers, such as GitLab and a few MariaDB instances, start just fine.
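The step that fails is runc writing to the kernel-memory cgroup file. A quick probe (a sketch; the mount point below assumes the usual cgroup v1 layout, which may differ or be absent inside a Virtuozzo/OpenVZ container) can show whether the host exposes that knob at all:

```shell
#!/bin/sh
# Probe for the cgroup v1 kernel-memory file that runc tries to set.
# /sys/fs/cgroup/memory is the conventional cgroup v1 mount point;
# inside a Virtuozzo/OpenVZ container it may not exist at all.
KMEM=/sys/fs/cgroup/memory/memory.kmem.limit_in_bytes
if [ -f "$KMEM" ]; then
    echo "kmem limit file present, current value: $(cat "$KMEM")"
else
    echo "kmem limit file not present (cgroup v2, or kmem accounting disabled)"
fi
```

This doesn't fix anything by itself, but it can help narrow down whether the Virtuozzo 7 kernel is the piece that changed underneath Docker.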

A Google search for the error message turned up some older issues on GitHub, but those appear to have been fixed years ago: https://github.com/opencontainers/runc/issues/1083

I am running Ubuntu 18.04 LTS with Docker 19.03.12.

I already opened a ticket with my hosting provider, but they answered with a canned response: since I have a root server they can't help me, basically implying that it's a problem with my configuration.

Unfortunately I don't know enough about OpenVZ/Virtuozzo to refute them.

Any hint pointing me in the right direction would be highly appreciated :)

Here's the output of /proc/user_beancounters:

❯ sudo cat /proc/user_beancounters
         
Version: 2.5
       uid  resource                     held              maxheld              barrier                limit              failcnt
    831745: kmemsize                564412416            766132224  9223372036854775807  9223372036854775807                    0
            lockedpages                     0                   16  9223372036854775807  9223372036854775807                    0
            privvmpages               5148863              5444666  9223372036854775807  9223372036854775807                    0
            shmpages                    41758                74651  9223372036854775807  9223372036854775807                    0
            dummy                           0                    0  9223372036854775807  9223372036854775807                    0
            numproc                       919                  919                 1100                 1100                    0
            physpages                 1972269              2444334              8388608              8388608                    0
            vmguarpages                     0                    0  9223372036854775807  9223372036854775807                    0
            oomguarpages              2104451              2452167                    0                    0                    0
            numtcpsock                      0                    0  9223372036854775807  9223372036854775807                    0
            numflock                        0                    0  9223372036854775807  9223372036854775807                    0
            numpty                          4                    4  9223372036854775807  9223372036854775807                    0
            numsiginfo                     16                  129  9223372036854775807  9223372036854775807                    0
            tcpsndbuf                       0                    0  9223372036854775807  9223372036854775807                    0
            tcprcvbuf                       0                    0  9223372036854775807  9223372036854775807                    0
            othersockbuf                    0                    0  9223372036854775807  9223372036854775807                    0
            dgramrcvbuf                     0                    0  9223372036854775807  9223372036854775807                    0
            numothersock                    0                    0  9223372036854775807  9223372036854775807                    0
            dcachesize              353423360            552648704  9223372036854775807  9223372036854775807                    0
            numfile                      6327                11230  9223372036854775807  9223372036854775807                    0
            dummy                           0                    0  9223372036854775807  9223372036854775807                    0
            dummy                           0                    0  9223372036854775807  9223372036854775807                    0
            dummy                           0                    0  9223372036854775807  9223372036854775807                    0
            numiptent                     219                  222                 2000                 2000                    0

fiendie
    Do yourself a huge favor and stay far away from Virtuozzo/OpenVZ. It is itself a container platform, and your so-called "root server" is actually a container. It is therefore not well suited for running containers and you're pretty much guaranteed to have problems. Your provider's support (or rather, lack thereof) also doesn't inspire confidence. It worked before because Virtuozzo 6 used their own custom containers, while 7 uses standard Linux containers. – Michael Hampton Jul 16 '20 at 17:52
  • Hey there, I'm having the same issue. Same setup at Strato. They were doing maintenance this night and since then docker's broken... might need to contact support. – Stefan Schmid Jul 17 '20 at 05:50
  • @MichaelHampton: I know, I know, but migrating all my stuff over to another provider with all my TLDs would be a huge hassle :) At least now I know what changed between 6 and 7, thanks! – fiendie Jul 17 '20 at 06:02
  • You don't have to migrate everything all at once. You could start by migrating the app having the problem. Everything else can probably wait. – Michael Hampton Jul 17 '20 at 12:00
  • Ran into the same issue with my Strato server after their "maintenance". Great that they broke it; I really hope someone has a fix/workaround. – TSGames Jul 18 '20 at 12:57
  • The current status from Strato is that they're still examining the issue. Has anyone had success with their vServer again since their maintenance? – TSGames Aug 05 '20 at 20:20

1 Answer


Yep, that's a Strato issue... it happened after their maintenance on July 15/16, 2020. I have already submitted a ticket... let's see if there is a response.

Needless to say, Strato had major performance problems over the past three to four months and only got those fixed recently.

Now this new maintenance last week messed up everything again.

EDIT:

I just deleted the container(s) from my server and reinstalled them. Now everything is working again. Maybe that's a solution for others, too?
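For anyone trying the same workaround, the recreate step might look roughly like this (a sketch: the container name is taken from the question, the image tag is a made-up placeholder, and the script bails out early if Docker or the container isn't present):

```shell
#!/bin/sh
# Recreate a broken container from its image. NAME comes from the
# question; IMAGE is a hypothetical placeholder tag.
NAME=spring_webapp
IMAGE=my/spring-webapp:latest

command -v docker >/dev/null 2>&1 || { echo "docker not installed"; exit 0; }
docker inspect "$NAME" >/dev/null 2>&1 || { echo "no container named $NAME"; exit 0; }

docker rm -f "$NAME"                   # drop the broken container
docker run -d --name "$NAME" "$IMAGE"  # recreate it (add your usual ports/volumes)
```

Note that named volumes and the image survive `docker rm`, but anonymous volumes and anything written inside the container's own filesystem are lost, so back that up first.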

MSZ
  • Not sure what you mean by "reinstall", but someone suggested removing the -m option from docker run, which limits the RAM usage of the container. That worked for me in the sense that the container starts again, but then I ran into problems with the Java app inside the container. That's a different issue, though :) – fiendie Jul 19 '20 at 13:25
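If dropping `-m` gets the container starting but the JVM then uses too much memory, one possible middle ground is to cap the JVM itself instead of the cgroup. This is only a sketch: the image tag and heap sizes are made up, and it assumes the image passes `JAVA_OPTS` through to the JVM (many Java images do, but not all):

```shell
#!/bin/sh
# Instead of "docker run -m 512m ..." (which on this host ends with
# runc failing on memory.kmem.limit_in_bytes), cap the JVM heap via
# its own flags. Assumes the image honors JAVA_OPTS; names and sizes
# are placeholders.
NAME=spring_webapp
IMAGE=my/spring-webapp:latest

command -v docker >/dev/null 2>&1 || { echo "docker not available"; exit 0; }
docker image inspect "$IMAGE" >/dev/null 2>&1 || { echo "image $IMAGE not present"; exit 0; }

docker run -d --name "$NAME" \
    -e JAVA_OPTS="-Xms256m -Xmx512m" \
    "$IMAGE"
```

Unlike `-m`, this doesn't bound non-heap memory (metaspace, threads, native buffers), so it's a workaround rather than an equivalent limit.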