I am currently monitoring 10 hosts. I want to write a script that checks the disk space on every host every 24 hours and, if any host's file system is critically full, moves the old files from that host to the host with the most free disk space. How can I do this?
2 Answers
There are several parallel ssh wrappers -- here are a few I found in EPEL for CentOS 7:
Name        : pssh
Repo        : epel/x86_64
Summary     : Parallel SSH tools
URL         : http://code.google.com/p/parallel-ssh/
License     : BSD
Description : This package provides various parallel tools based on ssh and scp.
            : Parallel version includes:
            :   o ssh   : pssh
            :   o scp   : pscp
            :   o nuke  : pnuke
            :   o rsync : prsync
            :   o slurp : pslurp

Name        : pdsh
Repo        : epel/x86_64
Summary     : Parallel remote shell program
URL         : https://pdsh.googlecode.com/
License     : GPLv2+
Description : Pdsh is a multithreaded remote shell client which executes commands
            : on multiple remote hosts in parallel. Pdsh can use several different
            : remote shell services, including standard "rsh", Kerberos IV, and ssh.

Name        : mpssh
Repo        : epel/x86_64
Summary     : Parallel ssh tool
URL         : https://github.com/ndenev/mpssh
License     : BSD
Description : mpssh is a parallel ssh tool. What it does is connecting to a number
            : of hosts specified in the hosts file and execute the same command
            : on all of them
So you could use one of those tools to query the remote systems over ssh for their disk space, and then decide which hosts to clean up and which to use as the destination.
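For example, here is a minimal sketch using pssh; it assumes pssh is installed and that a file named hosts.txt (a name chosen for illustration) lists one hostname per line:

# Print "use% available-KB" for the root filesystem on every host;
# pssh's -i flag prints each host's output inline, labelled by host.
pssh -h hosts.txt -i 'df -P / | awk "NR==2 {print \$5, \$4}"'

From that output you can flag any host above a threshold you choose (say 90% used) as critical, and pick the host reporting the largest available figure as the destination.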

rrauenza
This will run a script on the specified list of hosts in parallel:
declare -a CLEANUP
for host in host1.example.com host2.example.com; do  # the list can be as long as you need
    OUTPUT="/tmp/${host}.out"
    ssh "$host" '/path/to/script' > "$OUTPUT" &  # run the remote script in the background
    CLEANUP+=("$OUTPUT")
done
trap 'rm -f "${CLEANUP[@]}"' EXIT  # remove the collected output files on exit
wait  # block until every background ssh job has finished
# How to parse the output files is an exercise I leave to you
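As one possible starting point for that exercise, here is a hedged sketch. It assumes each /tmp/<host>.out file holds a single df -P data line (filesystem, total, used, available, use%, mount point), and the 90% threshold is only an illustrative choice:

# Summarize each host as "use% available-KB hostname".
for f in "${CLEANUP[@]}"; do
    host=$(basename "$f" .out)
    read -r _ _ _ avail pct _ < "$f"
    echo "${pct%\%} $avail $host"
done > /tmp/usage.txt
# Hosts over the (illustrative) 90% threshold are critical.
awk '$1 >= 90 {print $3}' /tmp/usage.txt
# The host with the most available space is the destination.
sort -k2,2nr /tmp/usage.txt | awk 'NR==1 {print $3}'

From there, an rsync or scp of the old files from each critical host to the chosen destination completes the job.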

DopeGhoti