
We are capturing tcpdump output remotely over SSH on a Unix system. Whenever the ssh session is killed (by any interrupt, e.g. Ctrl-C), we need the tcpdump on the remote end to be stopped/killed as well.

We have tried most of the options listed for killing a process spawned over SSH when the ssh session is killed. For tcpdump, running ssh with -t is not feasible because it prefixes a line at the beginning of the output, which is not acceptable in the capture.

If anyone has already worked on this kind of problem, we would like to know a good solution for achieving this.

I am executing tcpdump remotely as root, like this:

ssh {remotehost} "tcpdump -i eth0 -s 0 -w - " > /tmp/local_file

Thanks.

Chris Davies
sapna
  • @roaima Your editing out the # took some meaning away from my recommendations. It also cannot be ascertained whether the user does ssh as root because he needs to or because he thinks he has to, hence the recommendations – Rui F Ribeiro Apr 13 '17 at 09:01

1 Answer


Running it remotely, if you kill the ssh process, the other end should die, unless you are letting this end suspend for some reason. If that is the case, you might want to run it as part of a larger script in the background.
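
One way to do that (a minimal sketch only, not from the original answer; the wrapper name capture.sh is hypothetical, and it assumes ssh keeps the remote command's stdin open) is to start tcpdump in the background on the remote host and kill it as soon as the session's stdin closes or the wrapper is signalled:

#!/bin/sh
# capture.sh (hypothetical name), placed on the remote host.
# Start the capture in the background and remember its PID.
tcpdump -i eth0 -s 0 -w - &
pid=$!
# If the wrapper itself is signalled, stop tcpdump and exit.
trap 'kill "$pid" 2>/dev/null; exit' HUP INT TERM
# Block until stdin reaches end-of-file, which happens when the ssh client goes away.
cat > /dev/null
kill "$pid" 2>/dev/null

You would then invoke it as, for example, ssh {remotehost} 'sh capture.sh' > /tmp/local_file, so the capture stops on the remote side when the local ssh is killed.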

You also should not use root remotely, and you certainly do not need root on the side initiating the ssh connection. Avoid working as root as much as possible; use a remote user with sudo capabilities.

You probably also do not need "-w -", as tcpdump by default writes its decoded output to stdout ("-w -" makes it write raw packet data to stdout instead).

You can also limit tcpdump to a given number of captured packets, to control the session better. Please do note that you have to exclude port 22 traffic from the system doing the ssh, otherwise the remote system will capture the current SSH session in a kind of self-feeding, recursive way.

So, to capture 1000 packets you could do:

$ ssh sudouser@1.1.1.1 "sudo tcpdump -i eth0 -s0 -c 1000 not port 22" > /tmp/local_file

Another, less clean, alternative is to run pkill afterwards:

$ssh ..."sudo tcpdump"
$ssh ..."pkill tcpdump"

As a last warning: /tmp, or the filesystem where it resides, is often small or kept in RAM; it is also a security risk to create files with predictable names in /tmp, especially as a privileged user. You might want to use another location for the capture file.
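
If you keep the capture on the local machine, mktemp can generate a non-predictable file name outside /tmp (a small sketch; the /var/tmp path is only an example):

# Create a capture file with an unpredictable name outside /tmp, then write into it.
capture_file=$(mktemp /var/tmp/capture.XXXXXX) || exit 1
ssh sudouser@1.1.1.1 "sudo tcpdump -i eth0 -s0 -c 1000 not port 22" > "$capture_file"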

As a sysadmin, you may also get better results by investing in a tool like Ansible. For automating this, or for more complex remote administration tooling, please see Linux equivalent to PowerShell's "one-to-many" remoting
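
As a rough illustration only (the host group, inventory file and capture path are made up for the example), an Ansible ad-hoc command could run the same bounded capture on several hosts at once:

# Run a bounded capture as root on every host in the hypothetical "netprobes" group.
ansible netprobes -i inventory.ini --become -m shell -a "tcpdump -i eth0 -s0 -c 1000 -w /var/tmp/capture.pcap not port 22"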

Rui F Ribeiro
  • Many installations require tcpdump to run as root, so the remote side may well need to run as root. In corporate organisations it may be easier to arrange root equivalence between two systems than to allow a non-privileged account to ssh to a root account. And your ssh ... sudo ... > file construct will fail unless sudo is set up to avoid asking for a password (a sudoers sketch for that appears after these comments). – Chris Davies Apr 13 '17 at 08:48
  • @roaima As for root privileges, that is why the sudo is there. You also should not need root equivalence; in that case you would do $ ssh root@1.1.1.1, and you certainly do not need root on the side initiating the ssh connection; however, best practices like sudo and not using root in vain should be encouraged. Root equivalence, whatever the protocol, is not the best of ideas security-wise. – Rui F Ribeiro Apr 13 '17 at 08:59
  • This doesn't answer the question. Clearly, "if you kill the ssh process, the other end should die" is not true, or this question would not exist. tcpdump does in fact keep running once the SSH client is terminated. Also, -w doesn't merely redirect output (an option that merely redirects output is generally -o in other programs); -w switches tcpdump to outputting raw packets rather than human-readable text. – Chai T. Rex May 15 '19 at 20:41
  • @ChaiT.Rex It depends on how you invoke it; namely, the -c does help, and so does the pkill. In that sense, the question of killing it is covered, and I think you are mistaken. Also, the question is not clear on whether "-w" is needed or not. Whether or not it answers the question on all counts, the point is that the question is not that clear; however, the main point of killing it is covered. I make heavy use of "-c" for remote tcpdumps instead of the pkill. – Rui F Ribeiro May 15 '19 at 21:21
  • The -c imposes a limit on the number of packets, so it does not accord with what the question asks, which is to have no predefined limit (so frequently desired a way of running tcpdump that it is the default behaviour). pkill is also a bad idea: the question asks how to kill tcpdump when the SSH client exits, and pkill kills all running instances of tcpdump rather than a specific one (see the sketch after these comments for targeting a single instance). This is not likely to make other users of tcpdump on the remote system very happy, particularly if they need it to run continuously and find out it stopped a few weeks ago. – Chai T. Rex May 15 '19 at 21:32
  • The question never asks for a lack of limits, and in my many years of career I have never had that many random users running long sessions of tcpdump, and I hope I never will. But that depends on whoever is doing it using their brain and knowing what they are doing and where. I cannot decide whether you are serious or trolling. – Rui F Ribeiro May 16 '19 at 04:47
  • (tcpdump is not meant to be run all the time. It is a debugging aid. There are better tools for recording traffic over the long run.) – Rui F Ribeiro May 16 '19 at 04:57
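
Regarding the password prompt mentioned in the comments, a sudoers rule created with visudo could allow passwordless tcpdump for the capture account (a sketch only; the user name and the tcpdump path are assumptions and will vary per system):

# /etc/sudoers.d/tcpdump, edited with visudo; "sudouser" and the binary path are examples.
sudouser ALL=(root) NOPASSWD: /usr/sbin/tcpdump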
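
And regarding killing only the tcpdump you started rather than every instance, a rough sketch (the pid file name is arbitrary; note that the recorded PID is that of sudo, which relays the signal to tcpdump):

# Start the capture and record the remote PID in a hypothetical pid file.
$ ssh sudouser@1.1.1.1 'sudo tcpdump -i eth0 -s0 -w - not port 22 & echo $! > /var/tmp/tcpdump.pid; wait' > /tmp/local_file &
# Later, signal only that instance instead of every tcpdump on the host.
$ ssh sudouser@1.1.1.1 'sudo kill "$(cat /var/tmp/tcpdump.pid)"'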