partition/routing update, authored Oct 14, 2024 by Lech Nieroda
Documentation.md @ 70d08394
...
...
@@ -143,8 +143,9 @@ copy your files yourself.
 There are several partitions/queues in slurm intended for general usage:
-- "smp": default partition, with 136 smp nodes
-- "bigsmp": partition with 8 bigsmp nodes
+- "smp": default partition, with 136 smp nodes, for single node jobs
+- "bigsmp": partition with 8 bigsmp nodes, for large single node jobs
+- "mpi": same nodeset as "smp" but for MPI jobs
 - "interactive": partition with 8 interactive nodes, dedicated for interactive usage
 - "gpu": partition with 10 gpu nodes with the following gpu types: h100:38, h100_1g.12gb:1, h100_2g.24gb:3, h100_3g.47gb:1, h100_4g.47gb:1
 - "ft-instinct": a partition with a single node that contains two AMD Instinct MI210 GPUs
...
...
@@ -156,8 +157,12 @@ The corresponding node types are:
 - interactive: 192 cores, 1500G RAM
 - gpu: 96 cores, 1500G RAM
-Without specifying a partition explicitly with the "-p" parameter, the "smp"
-partition will be chosen automatically.
+When a partition isn't explicitly specified with the "-p" parameter, the automatic routing mechanism determines the right partition for the job:
+- "mpi" partition:
+  - when the memory specification is core-oriented (--mem-per-cpu) and multiple tasks are specified
+  - when multiple nodes are specified
+- "bigsmp": when the requested memory exceeds 750GB per node
+- "smp": in all other cases
 In order to get access to GPU cards, make sure to specify the "gpu" partition
 as well as the type and number of GPU cards with the "-G" parameter, e.g.
...
...
...
...
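The routing rules introduced in this change can be sketched as a small decision function. This is a minimal illustration only: the function name, argument names, and job summaries below are placeholders and not part of Slurm; only the partition names and the 750GB threshold come from this page.

```python
def route_partition(nodes=1, ntasks=1, mem_per_cpu_gb=None, mem_per_node_gb=None):
    """Sketch of the automatic partition routing described on this page."""
    # "mpi": core-oriented memory (--mem-per-cpu) with multiple tasks,
    # or more than one node requested
    if (mem_per_cpu_gb is not None and ntasks > 1) or nodes > 1:
        return "mpi"
    # "bigsmp": requested memory exceeds 750GB per node
    mem_gb = (mem_per_node_gb if mem_per_node_gb is not None
              else (mem_per_cpu_gb or 0) * ntasks)
    if mem_gb > 750:
        return "bigsmp"
    # "smp": all other cases
    return "smp"

print(route_partition(nodes=2))                     # multi-node job
print(route_partition(ntasks=8, mem_per_cpu_gb=4))  # core-oriented memory, many tasks
print(route_partition(mem_per_node_gb=800))         # large single-node job
print(route_partition())                            # default case
```

Note that routing does not cover GPU jobs: as described above, those must name the "gpu" partition explicitly and request cards with "-G" (for instance, `sbatch -p gpu -G h100:1 job.sh` is one illustrative invocation).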