The same storage system that gave us problems on Wednesday hung again Friday evening. This time we did not lose any nodes. We now know what is causing the issue and are working with the vendor on a software fix. Running jobs had time added to compensate for the hang. Last Updated Friday, Jan 17 06:50 pm 2020

STAR-CCM+

The license for running STAR-CCM+ is provided through the College of Engineering. To request a starccm+ license as part of your SLURM job, use the --licenses option of sbatch and salloc:

sbatch --licenses=starccm_ccmpsuite:1 …
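
salloc accepts the same option for interactive jobs, for example:

salloc --licenses=starccm_ccmpsuite:1 …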

Alternatively, use an #SBATCH directive in your job script:

#SBATCH --licenses=starccm_ccmpsuite:1

module load starccm+
# To load a specific version: module load starccm+/9.02.007
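
To confirm which version ended up loaded, the module command can list currently loaded modules:

module list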

To see all available versions:

module avail starccm+

Putting it together in an example:

#!/bin/bash
#SBATCH --licenses=starccm_ccmpsuite:1

module load starccm+/9.02.007

echo "Machinefile:"
machinefile=$(/fslapps/fslutils/generate_mpich_machinefile)
cat "$machinefile"
echo "----"

starccm+ \
    -mpi intel \
    -mpiflags "-bootstrap slurm" \
    -np $SLURM_NTASKS \
    -machinefile "$machinefile" …

			rm "$machinefile"

Please do not use the Platform MPI or OpenMPI drivers. In the past we recommended the OpenMPI driver, but we had issues with it binding to invalid processor IDs.