17

I want to start a PPP connection when my USB modem is connected, so I use this udev rule:

ACTION=="add", SUBSYSTEM=="tty", ATTRS{idVendor}=="16d8",\
    RUN+="/usr/local/bin/newPPP.sh $env{DEVNAME}"

(My modem appears in /dev as ttyACM0)

newPPP.sh:

#!/bin/bash
/usr/bin/pon prov $1 >/dev/null 2>&1 &

Problem:

The udev event fires and newPPP.sh starts, but the newPPP.sh process is killed after about 4-5 seconds. ppp does not have time to connect (its dial-up timeout is 10 seconds).

How can I run a long-running process that will not be killed?

I tried using nohup, but it didn't work either.

System: Arch Linux

Update

I found a solution here, thanks to maxschlepzig.

I now use at now to run my job detached from the udev process.
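For reference, a minimal sketch of that approach (assuming the atd daemon is running; the script path and peer name are the ones from the question):

#!/bin/bash
# newPPP.sh - hand the long-running job to atd so it is not a child of udev
echo "/usr/bin/pon prov $1" | /usr/bin/at now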

But the one question remains unanswered: Why do nohup and & not work?

6 Answers

11

If you run a decent distribution with systemd support, the easiest and technically safest way is to use a device unit.

This way, systemd will be in full control of the long-running script and will even be able to properly terminate the process once the device is shut down or removed. Detaching the process means you're giving up full control of the process state and its history.

Besides that, you'll be able to inspect the status of the device and its attached service by running systemctl status my-ppp-thing.device.

See also this blog post for some more examples and details.
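A rough sketch of what that can look like (the rule and unit names are hypothetical; the device path follows the question's ttyACM0, and the pppd path may differ between distributions):

# /etc/udev/rules.d/99-usb-modem.rules (hypothetical name)
ACTION=="add", SUBSYSTEM=="tty", ATTRS{idVendor}=="16d8", \
    TAG+="systemd", ENV{SYSTEMD_WANTS}="usb-modem-ppp.service"

# /etc/systemd/system/usb-modem-ppp.service (hypothetical name)
[Unit]
Description=PPP connection for the USB modem
BindsTo=dev-ttyACM0.device
After=dev-ttyACM0.device

[Service]
# run pppd in the foreground so systemd tracks it and stops it with the device
ExecStart=/usr/bin/pppd call prov nodetach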

Elias Probst
  • 1,053
7

Nowadays udev uses cgroups to seek and destroy spawned tasks. One solution is to use "at now" or "batch". Another solution is to do a double fork and "relocate" the process to another cgroup. This is example Python code (similar code can be written in any language):

import os
from time import sleep

os.closerange(0, 65535)  # just in case: drop any inherited file descriptors
pid = os.fork()
if not pid:
  pid = os.fork()  # fork again so the grandchild gets adopted by init
  if not pid:
    # relocate this process to another cgroup, out of udev's reach
    with open("/sys/fs/cgroup/cpu/tasks", "a+") as fd:
      fd.write(str(os.getpid()))
    sleep(3)  # defer execution a bit (3 seconds here)
    # YOUR CODE GOES HERE
    os._exit(0)
  os._exit(0)  # intermediate child exits so the grandchild is orphaned
sleep(0.1)  # give the forked process a chance to change cgroup

Debug output can be sent to, e.g., syslog.
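From a shell script, one simple way to do that is logger(1), e.g. (the tag is arbitrary):

/usr/bin/logger -t newPPP "detached child started"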

  • 5
    Why would udev go to such lengths to destroy spawned processes? – user30747 Nov 02 '18 at 16:13
  • I'm guessing it's because programs started by udev block the daemon (e.g. with a udev rule connected to plugging in an external display, a long-running program will prevent the new display from actually being used). I'm sure that has its own technical reasoning behind it, but it means that spawned processes can hold up major parts of the system and need to be killed. – tobek Jun 06 '19 at 00:26
  • @tobek If it was just about unblocking the daemon, there'd be no reason to go after detached processes. Actively hunting down children using cgroups goes far beyond that need. – 1N4001 Dec 08 '22 at 15:31
  • I've had positive results using setsid --fork as a sh-based solution to forking. – 1N4001 Dec 08 '22 at 15:31
2

I got it to work with setsid. The RUN part of my udev rule:

RUN+="/bin/bash script.sh"

then in the script:

#!/bin/bash
if [ "$1" != "fo_real" ]; then
  /usr/bin/setsid $(/usr/bin/dirname $0)/$(/usr/bin/basename $0) fo_real &
  exit
fi

Rest of script is here....

The first call to the script returns with exit status 0, but the second call to the script keeps running with PPID = 1.

Alives
  • 121
2

The shell has the ability to run commands in the background:

(
  lots of code
) &

Commands grouped by the parentheses with an ampersand after them will be executed asynchronously in a subshell. I use this to autoconnect when a USB modem is inserted and switched. It takes about 20 seconds and works fine under udev.
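Applied to the question's script, the pattern might look like this (a sketch; the peer name and redirections are taken from the question):

#!/bin/bash
(
  /usr/bin/pon prov "$1"
) >/dev/null 2>&1 &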

  • You might want to redirect stdin, stdout, and stderr in such a situation. – mdpc Jul 02 '13 at 18:37
  • @mdpc hmm... why? I saw usb_modeswitch closes streams in this scenario: exec 1<&- 2<&- 5<&- 7<&- – user42295 Jul 04 '13 at 10:38
  • @mdpc : Why? In the context of the Question, this is run in a udev +RUN, so there is neither terminal nor shell, so there is nothing to redirect. – Eric Towers Jun 23 '21 at 03:36
0

Not sure why it's killed, but I found this to work:

RUN+="/bin/bash -c 'YOUR_SCRIPT &'"

It seems that Bash itself goes to the background when it has only background jobs left.
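With the rule from the question, that would be roughly (a sketch):

ACTION=="add", SUBSYSTEM=="tty", ATTRS{idVendor}=="16d8",\
    RUN+="/bin/bash -c '/usr/local/bin/newPPP.sh $env{DEVNAME} &'"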

AdminBee
  • 22,803
glems2
  • 1
-1

But the one question remains unanswered: Why do nohup and & not work?

Probably because its parent process is terminated and the termination signal propagates to its children, which do not block it (and in the case of SIGKILL they cannot).

You can add a signal handler to your script and log the signals it receives. For shell scripts you can use the trap shell built-in.
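A minimal sketch of that for the question's newPPP.sh (the logger tag and the signals chosen are only examples; SIGKILL cannot be trapped):

#!/bin/bash
# log common termination signals to syslog, then exit
trap '/usr/bin/logger -t newPPP "caught SIGTERM"; exit 1' TERM
trap '/usr/bin/logger -t newPPP "caught SIGHUP"; exit 1' HUP
/usr/bin/pon prov "$1" >/dev/null 2>&1 &
wait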

peterph
  • 30,838