Since you asked for a solution, I'll present both the one I mentioned and the one @SatoKatsura mentioned. First things first: generating random network load usually isn't the most useful way to go about load testing, because normally you need to recreate a realistic state of high workload. Throwing random data down the pipe can still make sense, though, if you're just trying to see how another workload performs under any sort of competing load.
The most direct route to what you were describing is what I mentioned in the comments: `nc`. Set up the receiving end so that it listens on some random port and redirects output to `/dev/null`:
[root@listeningServerFQDN ~]# nc -l listeningServerFQDN 1023 >/dev/null
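One caveat: `nc` command-line syntax varies between implementations, so the exact listener invocation depends on which netcat you have. A rough sketch of the common variants (port 1023 is just the example port from above):

```shell
# OpenBSD/BSD netcat: port as a positional argument
nc -l 1023 > /dev/null

# Traditional/GNU netcat: port given with -p
nc -l -p 1023 > /dev/null

# Nmap's ncat, if that's what "nc" is on your system
ncat -l 1023 > /dev/null
```

Check `nc -h` or the man page on your system to see which flavor you're dealing with.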
Then on a client, use `nc` again to send your `/dev/urandom` data to the remote end:
[root@transmit ~]# dd if=/dev/urandom count=65535 bs=1500 | nc listeningServerFQDN 1023
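The amount of data sent is just `count * bs` from the `dd` invocation, so you can size the test up or down by adjusting either number. A quick sanity check of the figures above (plain shell arithmetic, nothing assumed beyond the numbers already shown):

```shell
# Total payload of the dd command above: count blocks of bs bytes each
count=65535
bs=1500
bytes=$((count * bs))
echo "$bytes bytes"                    # 98302500 bytes
echo "$((bytes / 1024 / 1024)) MiB"    # roughly 93 MiB
```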
After this you can use whatever tools you were planning to use.
That's one possible solution; another is the `iperf` tool @SatoKatsura mentioned. `iperf` is geared more towards network engineers who just need some kind of load running over the network. For example, if they want to test QoS policies they're implementing, they don't care whether the traffic represents a realistic workload; they're just verifying that bandwidth is limited appropriately.
Basic `iperf` usage involves setting up a server process:
[root@listeningServerFQDN ~]# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
Then running your test on the client:
[root@transmit ~]# iperf -c listeningServerFQDN -r
bind failed: Address already in use
------------------------------------------------------------
Client connecting to transmit.example.com, TCP port 5001
TCP window size: 2.50 MByte (default)
------------------------------------------------------------
[ 4] local 10.762.40.95 port 54610 connected with 10.762.40.95 port 5001
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 37.1 GBytes 31.8 Gbits/sec
This is mirrored on the server instance, which appends the following to its output:
[ 4] local 10.762.40.95 port 5001 connected with 10.762.40.95 port 54610
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 37.1 GBytes 31.7 Gbits/sec
------------------------------------------------------------
Client connecting to 10.762.40.95, TCP port 5001
TCP window size: 4.00 MByte (default)
------------------------------------------------------------
[ 4] local 10.762.40.95 port 54640 connected with 10.762.40.95 port 5001
[ 5] local 10.762.40.95 port 5001 connected with 10.762.40.95 port 54640
[ 4] 0.0-10.0 sec 37.4 GBytes 32.1 Gbits/sec
[ 5] 0.0-10.0 sec 37.4 GBytes 32.1 Gbits/sec
You can obviously branch out from there, but you get the general idea; the man page covers everything else.
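For reference, a few common variations (these flags are from classic iperf2; the hostnames and values are just placeholders):

```shell
# UDP mode: server and a client targeting a 10 Mbit/s send rate,
# which is the typical shape of a QoS-policy test
iperf -s -u
iperf -c listeningServerFQDN -u -b 10M

# Longer TCP test: run for 60 seconds, report every 5 seconds
iperf -c listeningServerFQDN -t 60 -i 5

# Four parallel TCP streams to better saturate a fat pipe
iperf -c listeningServerFQDN -P 4
```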
My two cents: I would actually stick with the `nc` solution if your criterion really is just "send random data down the pipe." `nc` is a generally useful tool that you can apply to far more than this one task, whereas I'd suspect the use case for `iperf` is pretty narrow. I would use `nc` (or whichever tool you're more comfortable with) for rudimentary testing, then graduate to simulating actual load, rather than reaching for `iperf`, which is just another "random data down the pipe" test.