
I'm using at to schedule a script in a CentOS-based Docker container.

e.g. echo "bash /path/to/script.sh" | at now + 1 minute

It mostly works as expected, except that it leaves behind a zombie process every minute. I'm guessing the behaviour is related to this line in the docs:

  The at-job is executed in a separate invocation of the shell, running in a separate process group with no controlling terminal, except that the environment variables, current working directory, file creation mask (see umask(1)), and system resource limits …

I've seen and tried the suggestions from "Remove a zombie process from the process table", to no avail.

Can I get the zombie process to go away, or is there an alternative way to do this that wouldn't end up with the same zombie process?
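For context on why the zombie appears at all: a process stays in state Z after exiting until its parent calls wait() on it, and the at-job runs in a detached process group whose parent may never do so; in a container there is often no init process reaping orphans either. A minimal sketch of that mechanism, independent of at (the sleeps and PIDs here are illustrative, not from the original setup):

```shell
# Fork a short-lived child, then replace the parent shell with a process
# that never calls wait(). The dead child stays in state Z until its
# parent exits and PID 1 finally reaps it.
( sleep 1 & exec sleep 4 ) &
parent=$!
sleep 2
# The finished "sleep 1" now shows up as a zombie child of "sleep 4":
ps -o pid=,stat=,comm= --ppid "$parent"
```

In a container, a common fix for this class of problem is to run a reaping init as PID 1, e.g. `docker run --init` (which uses tini), so orphaned zombies get collected.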

EDIT: contents of one of the scripts:

#!/bin/bash
# Send all output to syslog, tagged with the script name
exec 1> >(logger -s -t "$(basename "$0")") 2>&1
# Probe the seed node if this isn't the seed node
# set -ex
# Extract the ordinal suffix from the hostname (e.g. "gluster-0" -> 0)
[[ $(hostname) =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
# If this is the first node, probe the second until successful
if [[ $ordinal -eq 0 ]]; then
  while ! gluster peer probe {{gluster.service_name}}-1.{{gluster.service_name}}.default.svc.cluster.local; do sleep 2; done
fi
# If this is the second node, probe the first to create the trusted pool
if [[ $ordinal -eq 1 ]]; then
  while ! gluster peer probe {{gluster.service_name}}-0.{{gluster.service_name}}.default.svc.cluster.local; do sleep 2; done
fi
zcourts

1 Answer


I forgot to post back that I worked around this by just using cron.

(crontab -l 2>/dev/null; echo "*/1 * * * * /path/to/job -with args") | crontab -

See https://stackoverflow.com/a/9625233/400048

Note: if running in Docker you need to start crond yourself.
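One way to wire that up is an entrypoint that registers the job and then keeps crond in the foreground as the container's main process. This is a sketch, assuming the cronie package that CentOS ships (its crond supports -n to stay in the foreground); `/path/to/job` is the placeholder from above:

```shell
#!/bin/sh
# entrypoint.sh (sketch): register the cron job, then run crond in the
# foreground so the container keeps running.
(crontab -l 2>/dev/null; echo "*/1 * * * * /path/to/job -with args") | crontab -
exec crond -n
```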

zcourts