To test data throughput, I want 1) to run for X seconds, 2) sending random data, 3) over TCP, and 4) to know afterwards exactly how many bytes were transmitted.

My best attempt (which is not much):

1) timeout X
2) dd if=/dev/urandom count=65535 bs=1500
3) > /dev/tcp/<host>/<port>
4) ... ? Can I use wc -c? Failing that, maybe pipe my random data through tee into both a file and /dev/tcp, then, when the timeout is over, count the byte size of the file? (A rough sketch of that idea follows below.)
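For reference, here's roughly how I imagine those pieces fitting together (untested; assumes bash for the /dev/tcp pseudo-device and GNU coreutils timeout; HOST and PORT are placeholders). The count would reflect bytes handed to the socket, which kernel buffering can make slightly higher than what actually reached the peer:

HOST=192.0.2.10; PORT=1023               # placeholders: substitute the real target
exec 3>"/dev/tcp/$HOST/$PORT"            # open the TCP connection on fd 3 (bash-only feature)
timeout 10 dd if=/dev/urandom bs=1500 2>/dev/null \
    | tee >(cat >&3) \
    | wc -c                              # tee copies the stream onto the socket; wc prints total bytes
exec 3>&-                                # close the connection (10 above = X seconds)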

Can anyone provide an elegant bash command to do this?


[Update] This is for a tailored version of Linux; not all commands may be available, for security purposes. I will check all suggestions & get back to you ASAP.

1 Answer


Since you asked for a solution, I'll present both the one I mentioned and the one @SatoKatsura mentioned. First things first: generating random network load usually isn't the most useful way to go about load testing; normally you want to recreate a realistic high-workload state. Throwing random data down the pipe can still make sense, though, if you're just trying to see how another workload performs under some sort of competing load.

The most direct route to what you want, starting from your own attempt, is the nc approach I mentioned in the comments. Set up the receiving end so that it listens on some port and redirects its output to /dev/null:

[root@listeningServerFQDN ~]# nc -l listeningServerFQDN 1023 >/dev/null

Then on a client use nc again to send your /dev/urandom data to the remote end:

[root@transmit ~]# dd if=/dev/urandom count=65535 bs=1500 | nc listeningServerFQDN 1023

After this you can layer whatever byte-counting tools you had in mind on top.
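To cover point 4 (the exact byte count), one option is to count on the receiving side, where the number reflects what actually arrived. A minimal sketch, assuming the same BSD-style nc syntax as above and GNU coreutils timeout:

[root@listeningServerFQDN ~]# nc -l listeningServerFQDN 1023 | wc -c

[root@transmit ~]# timeout 10 dd if=/dev/urandom bs=1500 | nc listeningServerFQDN 1023

When timeout kills dd after ten seconds, the pipe closes and wc -c on the server prints its total. One caveat: some netcat variants keep the connection open after EOF on stdin, so you may need -N (OpenBSD nc) or -q 0 (Debian's traditional netcat) on the client side for the connection to actually close.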


That's one possible solution; another is the iperf tool @SatoKatsura mentioned. It is geared more towards network engineers who just need some kind of load running over the network, for example to test QoS policies they're trying to implement. In that case they don't care that it doesn't represent a real workload; they're just verifying that bandwidth is limited appropriately.

Basic iperf usage involves setting up a server process:

[root@listeningServerFQDN ~]# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------

Then running your test on the client:

[root@transmit ~]# iperf -c listeningServerFQDN -r
bind failed: Address already in use
------------------------------------------------------------
Client connecting to transmit.example.com, TCP port 5001
TCP window size: 2.50 MByte (default)
------------------------------------------------------------
[  4] local 10.76.40.95 port 54610 connected with 10.76.40.95 port 5001
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  37.1 GBytes  31.8 Gbits/sec

That output is duplicated on the server instance, which appends the following:

[  4] local 10.76.40.95 port 5001 connected with 10.76.40.95 port 54610
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  37.1 GBytes  31.7 Gbits/sec
------------------------------------------------------------
Client connecting to 10.76.40.95, TCP port 5001
TCP window size: 4.00 MByte (default)
------------------------------------------------------------
[  4] local 10.76.40.95 port 54640 connected with 10.76.40.95 port 5001
[  5] local 10.76.40.95 port 5001 connected with 10.76.40.95 port 54640
[  4]  0.0-10.0 sec  37.4 GBytes  32.1 Gbits/sec
[  5]  0.0-10.0 sec  37.4 GBytes  32.1 Gbits/sec

You can obviously branch out from there, but that's the general idea; the man page covers everything else.
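Worth noting for the "run for X seconds" requirement: iperf's -t option sets the test duration in seconds (the default is the 10 seconds shown above), and the Transfer column in the summary line already gives you the total byte count, covering point 4. For example, a 30-second run against the same server process:

[root@transmit ~]# iperf -c listeningServerFQDN -t 30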


My two cents: I would actually stick with the nc solution if your criterion really is just "send random data down the pipe". nc is a generally useful tool that does far more than this one thing, whereas I suspect the use case for iperf is fairly narrow.

I would use nc (or whichever tool you're more comfortable with) for rudimentary testing, then graduate to simulating actual load, rather than reaching for iperf, which is just another "random data down the pipe" test.

Bratchley
  • One use case for iperf is to determine the optimal window size. You'd then set the corresponding sysctls accordingly. FWIW. On an unrelated topic: can you guarantee that reading from /dev/urandom takes no time at all? Always, regardless of load? :) – Satō Katsura Dec 13 '16 at 17:56
  • @SatoKatsura No, that's a good point, so maybe their use of /dev/urandom directly wasn't a good choice. Maybe save it to a file or something first. Or just use /dev/zero, since it still has to transmit over the TCP connection and it's not as if it de-dupes or compresses anything. – Bratchley Dec 13 '16 at 18:14
  • But the window is usually set by the congestion control algorithm, isn't it? Unless you mean ssthresh, I mean. Even then, the various congestion control algorithms have pretty well-established behaviors and it should be obvious which one you want. Although I guess iperf could still help with that. Simulating workload should probably be the go-to method of testing, though. – Bratchley Dec 13 '16 at 18:18
  • But the window is usually set by the congestion control algorithm, isn't it? – Only if you enable TCP window scaling, and the corresponding TCP options survive going through the relevant routers. Also, on some OSes you can tune the initial window size and various buffer sizes. – Satō Katsura Dec 13 '16 at 18:33
  • The TCP congestion window only exists on the sending node, and the receive window only exists on the receiving node. Window scaling refers to the receive window, which isn't usually the bottleneck at all, since the traffic has already made it through the network. The receive window would only be the bottleneck if the application was having a hard time keeping up with bursts of data. – Bratchley Dec 13 '16 at 18:44
  • You're conflating window scaling with congestion control. They are not completely unrelated, but window size is not determined by congestion control. A small window can be less efficient than a bigger one even if there is no congestion whatsoever. The OP's scenario is finding out TCP throughput in (presumably) ideal conditions. It doesn't involve congestion. – Satō Katsura Dec 13 '16 at 19:45
  • I seem to be missing point 4): how to count the volume of data sent or received in the given time. What am I overlooking? – Mawg says reinstate Monica Dec 14 '16 at 14:36
  • @SatoKatsura's answer seems to have been deleted, which is a pity, as iperf is what I used. – Mawg says reinstate Monica Nov 11 '19 at 06:26