
I have a somewhat strange question about running my jobs on my Ubuntu laptop instead of on HPC clusters.

The reason I want to do that is that my laptop now has 6 cores / 12 threads. I used to run my jobs on the HPC cluster of my previous university.

What I was doing:

1) connecting to the HPC cluster and going to the directory containing the executable, wave_func.out

2) submitting the jobs to LSF:

3) bsub -q linux22 -i ./w1.in -o ./w1.out ./wave_func.out

Then, if there is an available thread on 'redhat7_211', my simulation starts.

My question is: how can I perform these simulations on my personal Ubuntu PC?

Alexander

1 Answer


Assuming the following:

  • wave_func.out is an executable binary (as opposed to, say, a data file).
  • It's installed on your Ubuntu PC, in the current working directory.
  • The input file w1.in is in the current directory.
  • The software's license allows you to run it on your PC.
  • All its dependencies are available on your Ubuntu PC.
  • Your account on the Ubuntu PC uses bash as the shell.

Then the following command should work.

./wave_func.out < ./w1.in > ./w1.out 2>&1

Good luck!

  • Thanks for the answer. Your assumptions are all correct, and I finally executed the file. One question, though: what does 2>&1 do? – Alexander Oct 01 '18 at 18:23
  • 2>&1 redirects the standard error stream to the same file used for the standard output stream; a short demo follows these comments. See here for more details. – Michael Closson Oct 02 '18 at 02:39
  • Thank you very much for the information; it helped me understand things a lot better. If I may, I would like to ask how to execute multiple input files in parallel. Let's say I want to run w1.in, w2.in and w3.in: what would be the best way to achieve this? On top of that, will these jobs run in parallel such that my 6 core / 12 thread machine dedicates 3 cores to them? – Alexander Oct 05 '18 at 03:10
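A short demonstration of the 2>&1 redirect discussed in the comments above. This is only a minimal sketch; the file name demo.log and the echo messages are placeholders, not part of the original answer:

# One line goes to stdout, one to stderr; "> demo.log" sends stdout to the file,
# and "2>&1" sends stderr to wherever stdout currently points (the same file).
{ echo "this goes to stdout"; echo "this goes to stderr" >&2; } > demo.log 2>&1
cat demo.log    # both lines end up in demo.log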
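Following up on the last comment, one possible way to run several inputs at once from bash. This is only a sketch under the same assumptions as the answer above; the loop and the file names w1.in, w2.in, w3.in (and the matching .out names) are illustrative:

# Start each run as a separate background process ("&"),
# then wait for all of them to finish.
for i in 1 2 3; do
    ./wave_func.out < ./w$i.in > ./w$i.out 2>&1 &
done
wait

Each run is an independent process, so the kernel schedules them on separate cores; with 6 cores / 12 threads, three simultaneous runs should each get a core of their own.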