When a job consumes more memory than it requested, it fails with an out-of-memory error. The `seff` utility shows the job's state, its memory request, and how much memory it actually used.
Example:

```
$ seff <jobid>
Job ID: <jobid>
Cluster: starlight
User/Group: <user>/<group>
State: OUT_OF_MEMORY (exit code 0)
Memory Utilized: 0.00 MB (estimated maximum)
Memory Efficiency: 0.00% of 12.00 GB (2.00 GB/core)
```
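If a job ends in the OUT_OF_MEMORY state, the usual fix is to resubmit with a larger memory request. The sketch below is only an illustration: the memory value is a placeholder to set above your job's observed usage, and the partition name simply mirrors the examples on this page.

```
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --mem=24G          # placeholder: request more memory than the job was observed to use
#SBATCH --partition=Orion

# your application command here
```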
When a job runs longer than the time requested in the submit file, a time limit error occurs. Consider the following submit script, which requests one minute of walltime but then sleeps for five minutes (300 seconds), so Slurm cancels the job when the limit is reached.

Submit job script:

```
#!/bin/bash
# The job name
#SBATCH --time=1:00
#SBATCH --ntasks=1
#SBATCH --mem=1G
#SBATCH --partition=Orion

sleep 300
```

Job output file:

```
$ more /users/slurm-jobs/examples/date/slurm-1019828.out
slurmstepd: error: *** JOB 1019828 ON str-c139 CANCELLED AT 2021-08-02T09:26:38 DUE TO TIME LIMIT ***
```
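The fix is to request enough walltime for the work. A minimal sketch of the same script with a ten-minute limit (the value is an example, not a recommendation; size it to your own job):

```
#!/bin/bash
#SBATCH --time=10:00       # ten minutes, enough for the 300-second sleep
#SBATCH --ntasks=1
#SBATCH --mem=1G
#SBATCH --partition=Orion

sleep 300
```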
Slurm jobs may also fail when there is no free space in the working directory. Please ensure that any directories the job reads from or writes to have enough free disk space; the commands below show one way to check.
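Before submitting, you can check the space available to the working directory with standard tools; this is a generic sketch, not a cluster-specific quota command.

```
# free space on the filesystem that holds the working directory
df -h .

# total size of the working directory itself
du -sh .
```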
If you would like to know how much RAM your compute job is using on the cluster, consider adding the prologue/epilogue to your submit script, which will write additional job information to your job output log.
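The prologue/epilogue referred to above is a site-provided script; as a rough sketch of the same idea, you can query Slurm's accounting from inside the script yourself. `sstat` reports the peak memory (MaxRSS) of the batch step while the job is still running, and the output lands in the job log:

```
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --mem=1G
#SBATCH --partition=Orion

# ... your application command here ...

# report peak memory used by the batch step so far (written to the job output log)
sstat --format=JobID,MaxRSS,AveRSS -j ${SLURM_JOB_ID}.batch
```

Alternatively, running `seff <jobid>` after the job finishes summarizes memory efficiency, as shown in the out-of-memory example above.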