My problem
I need to find out how many resources a bash script I wrote requires. I'm mainly interested in the following:
- RAM
- peak disk usage (disk space)
- CPU time: though here I'm not sure what exactly this refers to: the execution time on the whole CPU (including physical + virtual cores?) or just on the cores that were actually used. For example, how do I know how much CPU time is required on a CPU with a different number of cores?
- the number of cores used
The reason for this is that I will submit jobs to a cluster, where each job executes my script with slightly changed parameters.
My plan is the following: I execute the script on my PC to get the peak values of the above resources, and then make sure that only cluster nodes with the required amount of resources available are used.
Or, even better: during script execution I write the desired values to some file.
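Something like the following is what I have in mind: a rough sketch of a background sampling loop, where my_script.sh, the work directory, and the 5-second interval are all placeholders:

```bash
#!/usr/bin/env bash
# Run the job in the background and sample disk and memory usage into a log.
WORKDIR="$PWD"       # assumption: the job writes all of its files below here
LOG="resources.log"

./my_script.sh &     # placeholder for the actual job script
job=$!

while kill -0 "$job" 2>/dev/null; do
    disk=$(du -sb "$WORKDIR" | cut -f1)        # bytes currently on disk
    mem=$(free -k | awk '/^Mem:/ {print $3}')  # KiB of RAM in use system-wide
    printf '%s %s %s\n' "$(date +%s)" "$disk" "$mem" >> "$LOG"
    sleep 5
done
wait "$job"
```

The peak values are then just the column maxima of resources.log.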
Attempted solution
What I did so far:
RAM: I measured the peak RAM of the program called by my script that I guessed has the maximum RAM usage. (This was a Mathematica script, where I used the command MaxMemoryUsed[], whose output I later extract from the log file.)
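For programs other than Mathematica I could presumably use GNU time (the /usr/bin/time binary, not the shell keyword), which reports the peak resident set size of the process it runs; my_program is a placeholder here:

```bash
/usr/bin/time -v ./my_program 2> time.log    # -v writes verbose accounting to stderr
grep 'Maximum resident set size' time.log    # peak RSS, reported in kilobytes
```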
Disk space: I basically sum over all files/folders using du -sb. Moreover, I also run df before and after the script.
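One concern with measuring only before and after: a mid-run peak (e.g. from temporary files that get deleted again) would go unnoticed; the sampling loop above would catch it. For the before/after comparison itself, a sketch (df --output requires GNU coreutils; WORKDIR stands for wherever the script writes):

```bash
before=$(df --output=avail -B1 "$WORKDIR" | tail -n1)  # free bytes before the run
./my_script.sh
after=$(df --output=avail -B1 "$WORKDIR" | tail -n1)   # free bytes after the run
echo "net disk growth: $((before - after)) bytes"
du -sb "$WORKDIR"                                      # what the run left behind
```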
CPU time: at the beginning of the script I set SECONDS=0 and echo ${SECONDS} at the end. (Since the script is the only thing I am running, this should correspond to the CPU time?!) I also used date to get a rough estimate.
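As far as I understand, SECONDS (like date) measures elapsed wall-clock time, not CPU time. CPU time is accounted in core-seconds summed over all cores a process used, which should make it comparable across machines with different core counts (clock speed aside). The bash time keyword reports both:

```bash
time ./my_script.sh
# real - elapsed wall-clock time (what SECONDS measures)
# user - CPU time spent in user space, summed over all cores used
# sys  - CPU time spent in the kernel, summed over all cores used
# (user + sys) / real gives a rough average of the number of cores in use
```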
I'm thankful for all hints, comments and pointers to tools and possible solutions to my problem.
Edit
I also heard about valgrind. Does anybody have experience with it? Is it possible to use it for my purposes?
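From what I've read, valgrind instruments a single binary rather than a whole script, and it slows the program down considerably; its massif tool tracks heap usage over time. A minimal sketch, where my_program and the PID suffix are placeholders:

```bash
valgrind --tool=massif ./my_program   # writes snapshots to massif.out.<pid>
ms_print massif.out.12345             # render the heap profile as text
```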
Comment: the time command – fooot Dec 20 '16 at 16:29