I've been running several chrooted environments in parallel, each in a separate folder (when the script with chroot exited, I sometimes ran another in the same folder), and for a while it seemed fine. But now /dev of the "main" system is almost empty (I cannot start apps, open new windows, etc.).
I've been mounting like this:
sudo mount -t proc proc "${work_path}"/fin_sq/proc
sudo mount -t sysfs sys "${work_path}"/fin_sq/sys
sudo mount -t devtmpfs devtmpfs "${work_path}"/fin_sq/dev
sudo mount -t devpts devpts "${work_path}"/fin_sq/dev/pts
and unmounting after exiting chroot:
umount "${work_path}"/fin_sq/dev/pts
umount "${work_path}"/fin_sq/dev
umount "${work_path}"/fin_sq/proc
umount "${work_path}"/fin_sq/sys
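As an aside, an alternative I'm considering is bind-mounting the host's /dev instead of mounting a second devtmpfs (as far as I understand, devtmpfs is a single kernel-wide instance, so files deleted under any devtmpfs mount disappear from the host's /dev too). This is only a sketch: the `run` wrapper is hypothetical and, by default, just prints the commands, since real mounting needs root.

```shell
#!/bin/sh
# Sketch, not my actual script: bind-mount the host /dev into the chroot
# instead of mounting a second devtmpfs. "run" is a hypothetical wrapper
# that only prints the commands unless DRY_RUN=0 (real mounting needs root).
work_path="${work_path:-/tmp/example}"   # placeholder path for illustration
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "+ $*"; else sudo "$@"; fi; }

run mount --bind /dev "$work_path/fin_sq/dev"
run mount -t devpts devpts "$work_path/fin_sq/dev/pts"
```

Run with DRY_RUN=0 as root to actually mount; the default only echoes what would be done.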
Occasionally I've interrupted the script running in the chrooted environment via Ctrl-C. For cases when the script runs in a folder containing remnants of a previous run, it is coded to umount and delete the previous work.
Today I saw "target is busy" in at least one terminal window where I ran the script with chroot. After umount failed, the script deleted all folders of the chrooted system. In the future, I think I'd better cancel further unmounts/deletions if one of the umounts fails.
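The guard I have in mind might look like this (a sketch; `safe_cleanup` is a hypothetical name, and `mountpoint -q` from util-linux skips paths that are not actually mounted):

```shell
#!/bin/sh
# Sketch of the "stop before deleting" guard: unmount in reverse order of
# mounting and abort the whole cleanup, including the rm -rf, if any umount
# fails. safe_cleanup is a hypothetical name; "$1" is the chroot root
# ($work_path/fin_sq in the script above).
safe_cleanup() {
    root="$1"
    for m in dev/pts dev proc sys; do
        if mountpoint -q "$root/$m"; then
            if ! umount "$root/$m"; then
                echo "umount $root/$m failed; keeping $root intact" >&2
                return 1
            fi
        fi
    done
    # Only reached when nothing under $root is mounted any more.
    rm -rf "$root"
}
```

Usage would be `safe_cleanup "$work_path/fin_sq"` in place of the current umount-then-delete sequence.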
However, what do you think might have caused this? How can I investigate? How do I get the system back to a working condition? I'm not rebooting the system for a while, hoping to learn Linux more deeply. TIA
P.S. Full script(s) for those interested: a script running another script via chroot; searching the code for /dev/pts finds two lines (mount/umount).
"Naturally," lsof shows empty output, and adding --force to umount seems to make no difference. Trying mount -t devpts devpts /dev resulted in "devtmpfs already mounted on one_of_chrooted_paths/dev".
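For completeness, this is roughly how I've been checking what the kernel still considers mounted (a sketch; /proc/self/mounts is standard, and "fin_sq" is just the folder name from my script):

```shell
#!/bin/sh
# Sketch: list every mount whose line mentions the chroot folders, plus
# whatever is (or is not) currently mounted on the host /dev.
grep fin_sq /proc/self/mounts || echo "no fin_sq mounts left"
awk '$2 == "/dev"' /proc/self/mounts
```

If the first command prints nothing but the fallback message, no chroot mounts remain; if the second prints nothing, nothing is mounted on /dev at all.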
I've read "umount: target is busy.", but that question is about -o rbind, and I mount without it. Again, it was working with two chroots; I increased to 3-4 and something broke...