# Running Amber18

NOTE: Amber18 is optimally run on the exx96 queue, or on the helmholtz/gibbs machines.

## Submitting Jobs

Short jobs (taking less than 10 minutes to run) can be run on the cottontail head node. This includes many of our Python scripts. No header is needed:

```
sh jobname.sh
```

To submit a longer job to the scheduler, a header is required (see below). Energy minimization and cpptraj jobs need the CPU header; heating, equilibration, piggybacking, and dynamics jobs need the GPU header. To submit to the queuing system, enter into the terminal:

```
bsub < jobname.sh
```

It is acceptable to submit several jobs as a batch, but it is good practice to wait 1-2 minutes between submissions. For sequentially named jobs, this can be done with a loop:

```
for i in {1..5}
do
  bsub < job_$i.sh
  sleep 60
done
```

## exx96 CPU Header

```
#!/bin/bash
#BSUB -e err
#BSUB -o out
#BSUB -q exx96
#BSUB -J "jobname_CPU"
#BSUB -n 1

# env
export PATH=/home/apps/CENTOS7/amber/amber18/bin:$PATH
export LD_LIBRARY_PATH=/home/apps/CENTOS7/amber/amber18/lib:/home/apps/CENTOS7/amber/amber18/lib64:$LD_LIBRARY_PATH
export PATH=/share/apps/openmpi/1.4.4+intel-12/bin:$PATH
export LD_LIBRARY_PATH=/share/apps/openmpi/1.4.4+intel-12/lib:$LD_LIBRARY_PATH

# cd to working directory
cd /mindstore/home33ext/kscopino/mR146/5JUP/COD1_A2/EMIN

# call Amber18
sander.MPI -O -i inputs.in -p prmtop_wat.prmtop -c coordinates_in.rst -r coordinates_out.rst -ref restraint_reference.rst -o energy_info.out
```

## exx96 GPU Header

```
#!/bin/bash
#BSUB -e err
#BSUB -o out
#BSUB -q exx96
#BSUB -J jobname_GPU
#BSUB -n 1
#BSUB -R "rusage[gpu4=1:mem=6288],span[hosts=1]"

# cuda
export CUDA_HOME=/usr/local/n37-cuda-9.2
export PATH=/usr/local/n37-cuda-9.2/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/n37-cuda-9.2/lib64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH="/usr/local/n37-cuda-9.2/lib:${LD_LIBRARY_PATH}"

# openmpi
export PATH=/share/apps/CENTOS6/openmpi/1.8.4/bin:$PATH
export LD_LIBRARY_PATH=/share/apps/CENTOS6/openmpi/1.8.4/lib:$LD_LIBRARY_PATH

# amber18
source /share/apps/CENTOS7/amber/amber18/amber.sh

# cd to working directory
cd /mindstore/home33ext/kscopino/mR146/5JUP/COD1_A2/EMIN

# call Amber18
n37.openmpi.wrapper pmemd.cuda -O -i inputs.in -p prmtop_wat.prmtop -c coordinates_in.rst -r coordinates_out.rst -ref restraint_reference.rst -o energy_info.out -x trajectory.mdcrd
```

## Useful Troubleshooting Commands

```
# tells you the status of your jobs and which node they are running on
bjobs

# tells you all of the jobs on a given queue or for a given user
bjobs -u all | grep queuename_or_username

# tells you the status of all queues, useful for finding open queues
bqueues

# gives the GPU identity (in the form 00000000:3B:00.0) for each job on node 85; want 1 job per GPU
ssh n85 gpu-process

# gives information on % usage of each GPU (0 to 3) on node 85
ssh n85 gpu-info

# gives information on % usage of CPU for node 85
lsload n85
```

## Suspending and Un-suspending Jobs

```
# note that JOBPID can be found with the bjobs command
# to pause a job
bstop JOBPID

# to continue a paused job
bresume JOBPID
```

## Additional Resources
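The batch-submission loop shown above assumes sequentially named scripts `job_1.sh`, `job_2.sh`, ... already exist. A minimal sketch of generating such scripts from the exx96 CPU header (the generator itself is not part of the site workflow; `WORKDIR` and the job names are placeholders):

```shell
#!/bin/bash
# Generate job_1.sh .. job_3.sh, each carrying the exx96 CPU header
# with a unique job name and separate err/out files.
# WORKDIR is a hypothetical output directory; adjust as needed.
WORKDIR=.
for i in 1 2 3; do
cat > "$WORKDIR/job_$i.sh" <<EOF
#!/bin/bash
#BSUB -e err_$i
#BSUB -o out_$i
#BSUB -q exx96
#BSUB -J "jobname_${i}_CPU"
#BSUB -n 1
EOF
done
```

Each generated file can then be submitted with `bsub < job_$i.sh` inside the sleep loop above.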