10

I have a binary that creates some files in /tmp/*some folder* and runs them. This same binary deletes these files right after running them. Is there any way to intercept these files?

I can't make the folder read-only, because the binary needs write permissions. I just need a way to either copy the files when they are executed or stop the original binary from deleting them.

dragostis
  • 203
  • I don't believe you can do this with the standard unix permissions. You may want to check the man pages for setfacl and getfacl to see if there is anything there to help you, but I seriously doubt it. Your saving grace might be setting up some sort of tripwire to watch the contents of this directory and run a cp upon detecting new files. – MelBurslan Feb 26 '16 at 14:49
  • I don't think it will work with setfacl. I was thinking of maybe coding something for this. – dragostis Feb 26 '16 at 14:52
  • I take it you do not have the source for the binary, and so cannot modify it? – Faheem Mitha Feb 26 '16 at 14:59

3 Answers

10

chattr +a /tmp/*some folder* will set the folder to be append-only. Files can be created and written to but not deleted. Use chattr -a /tmp/*some folder* when you're done.
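For example, a minimal sketch assuming the directory is /tmp/some_folder (adjust the path to yours), that the filesystem supports these attributes (e.g. ext4), and that you have root, since chattr normally requires it:

$ sudo chattr +a /tmp/some_folder
$ lsattr -d /tmp/some_folder        # verify that the 'a' flag is set
$ /path/to/the_binary               # hypothetical path; run the program as usual
$ sudo chattr -a /tmp/some_folder   # restore normal behaviour when you're done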

doneal24
  • 5,059
9

You can use the inotifywait command from inotify-tools in a script to create hard links of files created in /tmp/some_folder. For example, hard link all created files from /tmp/some_folder to /tmp/some_folder_bak:

#!/bin/sh

ORIG_DIR=/tmp/some_folder
CLONE_DIR=/tmp/some_folder_bak

mkdir -p "$CLONE_DIR"

# Watch ORIG_DIR recursively and hard link every newly created file into
# CLONE_DIR, preserving the relative directory structure.
inotifywait -mr --format='%w%f' -e create "$ORIG_DIR" | while IFS= read -r file; do
  echo "$file"
  DIR=$(dirname "$file")
  mkdir -p "${CLONE_DIR}/${DIR#$ORIG_DIR/}"
  cp -rl "$file" "${CLONE_DIR}/${file#$ORIG_DIR/}"
done

Since they are hard links, the clones reflect any modifications the program makes (both names point to the same inode), but they survive when the program removes the originals, because unlinking one name leaves the data on disk as long as another link exists. You can delete the hard-linked clones normally afterwards.

Note that this approach is nowhere near atomic, so it relies on the script winning the race: the hard link has to be created before the program deletes the newly created file.
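To see why the clones survive, here is a small illustration of hard-link behaviour (the file name payload is just a hypothetical example):

$ mkdir -p /tmp/some_folder /tmp/some_folder_bak
$ echo data > /tmp/some_folder/payload
$ ln /tmp/some_folder/payload /tmp/some_folder_bak/payload
$ rm /tmp/some_folder/payload        # removes one name; the inode still has one link
$ cat /tmp/some_folder_bak/payload   # the data is still reachable through the clone
data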

If you want to clone everything created anywhere under /tmp, you can use a version of the script that watches /tmp itself and spawns a separate recursive watcher for each new directory:

#!/bin/sh

TMP_DIR=/tmp
CLONE_DIR=/tmp/clone
mkdir -p "$CLONE_DIR"

# Recursively watch a directory and hard link every new file into CLONE_DIR.
wait_dir() {
  inotifywait -mr --format='%w%f' -e create "$1" 2>/dev/null | while IFS= read -r file; do
    echo "$file"
    DIR=$(dirname "$file")
    mkdir -p "${CLONE_DIR}/${DIR#$TMP_DIR/}"
    cp -rl "$file" "${CLONE_DIR}/${file#$TMP_DIR/}"
  done
}

# Kill all background watchers when this script exits.
trap "trap - TERM && kill -- -$$" INT TERM EXIT

# Watch /tmp itself and start a recursive watcher for every new directory.
inotifywait -m --format='%w%f' -e create "$TMP_DIR" | while IFS= read -r file; do
  if ! [ -d "$file" ]; then
    continue
  fi

  echo "setting up wait for $file"
  wait_dir "$file" &
done
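
With either script, start the watcher before launching the binary and leave it running in the background, for example (the file name watch_tmp.sh is just illustrative):

$ sh watch_tmp.sh &
$ /path/to/the_binary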
2

If the programs executed from /tmp are still running, you can usually still retrieve the original binary even if it is "deleted" from the filesystem, because the inode still exists with the data; the removal is just unlinking the name from the directory.

On Linux, you can access the inode's contents via the /proc/PID/exe symlink. ls -l will show the original path with "(deleted)" appended to the name, and colorized output will mark the link as broken. However, you can still retrieve the contents by reading through the link.

An example showing this concept (using sleep as an illustrative tool):

$ cp /bin/sleep /tmp/otherprog
$ /tmp/otherprog 300 &
[1] 3572
$ rm /tmp/otherprog
$ ls -l /proc/3572/exe
lrwxrwxrwx 1 john john 0 Feb 27 08:54 /proc/3572/exe -> /tmp/otherprog (deleted)
$ cp /proc/3572/exe /tmp/saved
$ diff /tmp/saved /bin/sleep
$ echo $?
0

I created a "new" program by copying the contents of the sleep program to a new program called "otherprog" and ran it so that it would keep running for a while. Then I deleted the program from /tmp. Using the PID reported by the shell (you can find the PIDs of the programs you care about with ps), I looked at the exe link in /proc, copied the contents of the file (even though the target file name is gone), and checked that the contents match the original.

This of course won't work if the programs from /tmp are short-lived, because once they exit, the last remaining reference to the inode is released and the data is actually freed from disk.

It does avoid racing to copy the file before it is unlinked from the /tmp directory.
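If several such processes may be running, a small loop over /proc can snapshot them all. This is a rough sketch, not part of the original answer; it assumes the programs were started from /tmp/some_folder, that readlink is available, and that you have permission to read the exe links (root, if the processes are not yours):

#!/bin/sh

SAVE_DIR=/tmp/recovered
mkdir -p "$SAVE_DIR"

for proc in /proc/[0-9]*; do
  pid=${proc#/proc/}
  # readlink fails for kernel threads and processes we cannot inspect
  target=$(readlink "$proc/exe" 2>/dev/null) || continue
  case $target in
    /tmp/some_folder/*)
      # the exe link still gives access to the data, even if it shows "(deleted)"
      cp "$proc/exe" "$SAVE_DIR/recovered_$pid"
      ;;
  esac
done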

John O'M.
  • 231