
I'm on OS X 10.9.4. I want to put sftp in the background so that I can later push files to it through a named pipe, as suggested here. It works fine when I enter the commands manually at a bash prompt (using cat as the listener job for simplicity):

    $ mkfifo test
    $ cat > test &
    [1] 60765
    $ cat test | cat &
    [2] 60781 60782
    [1]  + 60765 suspended (tty input)  cat > test
    $ echo works! > test
    works!
    $ ps -ax | grep 60765
    60765 ttys023    0:00.00 cat
    60900 ttys023    0:00.00 grep 60765

However, when I put this in a bash script, it stops working:

    $ cat test.sh
    mkfifo test1
    cat > test1 &
    echo $!
    cat test1 | cat &
    $ bash test.sh
    60847
    $ echo fails > test1
    ^C%
    $ ps -ax | grep 60847
    60882 ttys023    0:00.00 grep 60847

The problem here, as I understand it, is that the cat > test1 & line works fine when run from the prompt but somehow terminates when run from a script, so my listener job receives EOF and terminates too.

What am I missing here, and how can I make this work from a script?

Edit: The actual problem I'm facing is this. For development I have to deploy code to a remote server. For this I used rsync, and to automate it a little I used fswatch to listen for file changes in a folder and run rsync when a change happens:

    $ fswatch -0 . | while read -d "" event;
      do
        rsync ./ {remote folder}
      done

It worked fine until I tried to use it on a slow connection with big latency. Each time, rsync opens a new ssh connection and computes the file differences, which takes a long time on a slow link. I'm trying to work around this by opening a persistent connection with sftp and pushing to it only the changed file, whose name I receive from fswatch. For this to work I need a way to start the sftp process and send commands to it later, when an fswatch event occurs. I've found this question, but writing to /proc/{pid}/fd/0 isn't supposed to work on a Mac, so I was trying to use the answer with named pipes. I can run cat > test1 & manually before starting the fswatch script, so that will actually work. But I want a reliable solution, so that I can give this script to my coworkers.
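The flow I'm after, reduced to a self-contained sketch (cat stands in for sftp user@host here, and two hard-coded put lines stand in for fswatch events, so there is no network dependency):

```shell
#!/bin/bash
# Sketch only: `cat` stands in for `sftp user@host`, and the two
# echoed commands stand in for lines generated by the fswatch loop.
dir=$(mktemp -d)
mkfifo "$dir/cmds"

# Long-lived "connection" reading commands from the fifo:
cat < "$dir/cmds" > "$dir/out" &

# Feed it commands through a single open of the write end:
{ echo 'put "a.txt"'; echo 'put "b.txt"'; } > "$dir/cmds"

wait                         # the stand-in exits when the writer closes
received=$(cat "$dir/out")
echo "$received"
```

Of course, here the "connection" ends as soon as the single writer closes the fifo; keeping the reader alive across many separate fswatch events is exactly the part that fails when done from a script.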

  • This looks like a contrived example. I would suggest that you explain what you are actually trying to do. – Michael Hampton Sep 13 '14 at 16:54
  • @MichaelHampton Looks more like a mistake than a contrived example. Having a background process try to read from the terminal is an easy mistake to make. Throwing semantics of named pipes on top of that, and it becomes non-obvious how it is going to behave. – kasperd Sep 13 '14 at 16:56

1 Answer


All of this is related to what happens when a background process tries to read from a terminal. By default, only the active process group is allowed to read from the terminal. If a process outside the active process group tries to read from the terminal, a signal is sent to suspend that process until it is woken up by the shell.

In your first example you are starting two process groups. Each is started in the background, so neither is allowed to read from the terminal.

cat > test & will try to read from the terminal immediately and get suspended. However, bash only notifies you of this just before displaying the next prompt, so you have to type another command before you see the notification.

Your echo command writes to the pipe being read by the second process group (which is not suspended). At the end of that sequence, the first cat command remains suspended and is never woken up again.

In your second example, the entire script runs in a single process group. So at this point you have three different cat commands in one process group, and one of them is blocked reading from the terminal. Then control returns to the initial bash shell, so that process group has to be suspended.

Moreover, from the viewpoint of your initial bash shell, that process group has already terminated, because it saw the bash command you typed exit. The initial bash shell has no knowledge that the child process spawned grandchildren, or that one of them was blocked reading from the terminal.

Once the process group of your script is no longer under the control of the initial bash shell, the first cat command gets EOF on its input. At that point all the cat commands see an empty input and finish immediately.
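One way around this (a sketch, not taken from the question, and not the only option) is to avoid running a terminal-reading cat at all: open the fifo read-write on a spare file descriptor. On Linux and macOS a read-write open of a fifo does not block, never touches the terminal, and keeps a writer attached, so readers do not see EOF:

```shell
#!/bin/bash
# Sketch: hold the fifo open with a read-write descriptor instead of
# `cat > test1 &` (which would block reading the terminal and get
# suspended when run from a script).
dir=$(mktemp -d)
mkfifo "$dir/test1"

exec 3<>"$dir/test1"          # replaces: cat > test1 &
echo works >&3                # later, any writer can also use: ... > test1
read -r line < "$dir/test1"   # the listener side receives the data
echo "$line"

exec 3>&-                     # close the holder when done
```

With this in place, the listener pipeline from the question (cat test1 | cat &) can be started from a script without anything ever being suspended on terminal input.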

kasperd