1

Might be idiotic, but I'm running a command that completes very fast, so I don't have enough time to lsof it from another window and see which files it is holding open. Is there some obvious way to run a command and immediately attach lsof to it? (And preferably to keep tracking which files it opens until it completes.)

edit - I would also be happy with attaching lsof to the process after it exits; I don't have to see the files it opens in real time.

ihadanny
  • 111
  • 1
    I’m suggesting the above duplicate because its top-voted answer shows how to log file accesses in general. lsof doesn’t seem like the best solution for you... – Stephen Kitt Oct 08 '18 at 09:21

4 Answers

2

In such cases lsof may not be the most practical tool; I would use strace instead. For example, to see which files the ls command opens during its short run:

strace -e trace=open ls
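
On recent systems most files are opened via the openat syscall rather than open, so tracing only open can miss them. A hedged variant (the %file syscall class needs a reasonably recent strace; older versions accept -e trace=file instead) is to trace the whole file-related class and follow child processes:

# trace every file-related syscall made by ls and any children it spawns
strace -f -e trace=%file ls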
Hkoof
  • 1,667
0

You can use the command watch in one window, for example:

watch -d 'lsof | fgrep cron'

The command refreshes the output every two seconds; you can decrease the interval with the -n option (see man watch for more detail).

The | fgrep cron pipe filters the output so that you only see the entries you are looking for.

When you run the command in the other window, you will see the files and the user associated with the cron command in this example. Replace cron with your specific command or program.
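
For a very short-lived command the default two-second refresh will likely be too slow to catch anything. A sketch using a faster interval and lsof's own -c filter, which selects processes by command name (mycommand is a placeholder for your program):

# refresh twice per second, showing only files opened by processes named mycommand
watch -n 0.5 -d 'lsof -c mycommand'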

0

You may consider using strace: it will show you the system calls from the beginning of execution, including file openings.
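
Since the question's edit says inspecting the files after the process has exited is acceptable, one sketch is to write the trace to a log and read it afterwards (./mycommand is a placeholder; the %file syscall class assumes a reasonably recent strace):

# log all file-related syscalls, including those of child processes, to trace.log
strace -f -o trace.log -e trace=%file ./mycommand
# inspect the log once the command has finished
grep open trace.log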

tonioc
  • 2,069
0

You can monitor all the files opened in real time with the command:

sudo sysdig -p "%12user.name %6proc.pid %12proc.name %3fd.num %fd.typechar %fd.name" evt.type=open

For a particular process/command name (not a script), you can do the following (using apache2 as an example):

sudo sysdig -p "%12user.name %6proc.pid %12proc.name %3fd.num %fd.typechar %fd.name" evt.type=open and proc.name=apache2
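
If the command finishes too quickly to watch live, a possible approach (capture.scap is just an illustrative file name) is to record a capture first and apply the same filter when reading it back:

# record system activity to a capture file, then replay it with the open-event filter
sudo sysdig -w capture.scap
sudo sysdig -r capture.scap -p "%12user.name %6proc.pid %12proc.name %3fd.num %fd.typechar %fd.name" evt.type=open and proc.name=apache2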
Rui F Ribeiro
  • 56,709
  • 26
  • 150
  • 232