SLURM is a cluster management tool that assigns computing resources on a request basis. You can use VS Code on a SLURM cluster in three simple steps:
- Apply for the GPU resources first (optional)
- Setup VS code proxy strategy
- Connect and play
Ask for GPU resources (optional)
First, we request GPU resources. (This is optional, because without this step we could still hack into the target node and steal its GPUs. However, I DO recommend doing this step. Be considerate to other users.)
Start a tmux session on the remote machine and ask for GPU resources using srun:
[user@rootnode ~]$ srun $COMMANDS -w $SOMENODE --pty bash -i
# Define $COMMANDS and $SOMENODE as needed.
# For example, to request 1 GPU on node31:
[user@rootnode ~]$ srun --gres=gpu:1 --kill-on-bad-exit=1 -w SG-IDC1-10-51-2-31 --pty bash -i
Once the GPU on $SOMENODE is assigned to you, the terminal prompt will look like:
[user@SG-IDC1-10-51-2-31 ~]$
Don’t close this session: keeping it open keeps the resources allocated, and we can attach to it later.
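The tmux workflow can be sketched as follows (the session name `gpu` is my own placeholder, not anything cluster-specific):

```shell
# On the login node, start a named tmux session:
tmux new -s gpu

# Inside it, request the GPU as shown above:
srun --gres=gpu:1 --kill-on-bad-exit=1 -w SG-IDC1-10-51-2-31 --pty bash -i

# Detach with Ctrl+b then d; the srun allocation keeps running.
# Reattach later from any login-node shell:
tmux attach -t gpu
```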
Configure VS Code
Open VS Code on your local PC and open your SSH configuration file:
Press Ctrl+Shift+P, type Remote-SSH: Open SSH Configuration File, and select your config file.

Edit it as follows. This config connects to the target node `NODE75` through the main machine `SLAB` as a proxy.
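A minimal config might look like this (the hostnames, IP, and username below are placeholders I made up following the node-naming pattern above; replace them with your cluster's actual values):

```
Host SLAB
    HostName 10.51.0.1              # login-node address (placeholder)
    User user

Host NODE75
    HostName SG-IDC1-10-51-2-75     # compute-node address (placeholder)
    User user
    ProxyJump SLAB                  # tunnel through the login node
```

`ProxyJump` requires OpenSSH 7.3+; on older clients the equivalent line is `ProxyCommand ssh -W %h:%p SLAB`.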

Now connect to NODE75 in VS Code.
Check
Verify the connection status in the VS Code integrated terminal:

Now you can debug with the assigned GPU resources!
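For example, you can confirm in the integrated terminal that you landed on the compute node and can see the assigned GPU (exact output varies per cluster):

```shell
# Should print the compute node's hostname, not the login node's:
hostname

# Should list the GPU(s) SLURM assigned to your job:
nvidia-smi

# SLURM typically restricts visible GPUs via this variable when --gres is used:
echo $CUDA_VISIBLE_DEVICES
```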
NOTICE: this way you can connect to NODE75 without applying for GPU resources at all. HOWEVER, it's not recommended; it is cheating.
Other SSH software (PyCharm/Xshell/PuTTY)
The proxy method described above can be applied to any SSH-based software, bypassing the SLURM management system. For example, in Xshell:
Set Proxy as below:

And set IP as:

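For clients without a dedicated proxy UI, the plain OpenSSH equivalent is a jump-host flag (addresses and username are placeholders):

```shell
# -J routes the connection through the login node (OpenSSH 7.3+):
ssh -J user@<login-node-address> user@<compute-node-address>
```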