
RStudio

Overview

Running RStudio Server on the wi-hpc cluster is supported via Apptainer (Singularity) containers using Rocker images. This page shows a simple workflow: launch RStudio Server inside an Apptainer/Rocker container as a Slurm batch job, then connect to it from your workstation over an SSH tunnel.
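
In brief, the workflow looks like this; each step is covered in detail below:

module load R/rocker_4.5.1            # load the Rocker module
cp "$ROCKER/rocker.sh" ~/rocker.sh    # copy (and edit) the example job script
sbatch ~/rocker.sh                    # submit it to Slurm

# On your workstation, once the job output prints its node and port:
ssh -N -L 8787:<node>:8787 <your-username>@wi-hpc
# then open http://localhost:8787 in your browser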

Still in development

This setup is new and not fully tested. If you encounter problems, please report them to the IT Help Center and include your job output file and job id.

Getting started

Prerequisites

  • SSH access to the cluster and a working account
  • apptainer (or singularity) available on the compute nodes (a quick check is shown after this list)
  • The Rocker module or image configured for your cluster
  • A web browser on your local machine
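
To confirm the container runtime is actually available, run a quick check from a compute node (for example inside an interactive job); the exact version string will differ on your cluster:

command -v apptainer || command -v singularity
apptainer --version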

Load the Rocker/R module (example):

module load R/rocker_4.5.1

The Rocker module on this cluster typically sets an environment variable named ROCKER that points to the helper scripts or image directory. You can confirm what the module provides:

echo "$ROCKER"
module show R/rocker_4.5.1
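
To see exactly what the module provides (for example, the template job script and the container image), you can list the directory it points to. The file names in the comments are illustrative and may differ on your cluster:

ls -l "$ROCKER"
# rocker.sh  - example Slurm submission script (copied in the next step)
# rocker.sif - Apptainer image the script runs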

Copy the example submission script to a location you can edit.

cp "$ROCKER/rocker.sh" ~/rocker.sh

Edit the submission script with your preferred editor:

nano ~/rocker.sh
# or
vim  ~/rocker.sh
# or
emacs ~/rocker.sh
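
For orientation, here is a minimal sketch of what such a submission script typically contains. It is not the exact script shipped by the module: the resource requests, image path, and rserver options are assumptions, so treat it as an outline and keep the module's version where they differ:

#!/bin/bash
#SBATCH --job-name=rstudio
#SBATCH --cpus-per-task=2
#SBATCH --mem=8G
#SBATCH --time=08:00:00
#SBATCH --output=rstudio-%j.out

# Report how to reach the session (assumes RStudio's default port 8787)
PORT=8787
NODE=$(hostname -s)
echo "On your LOCAL machine, run:"
echo "    ssh -N -L ${PORT}:${NODE}:${PORT} ${USER}@wi-hpc"
echo "Then open http://localhost:${PORT}"

# Launch RStudio Server inside the container.
# The image path and rserver options below are placeholders; the module's
# own script may use different ones.
apptainer exec --cleanenv "$ROCKER/rocker.sif" \
    rserver --www-port "${PORT}" --server-user "${USER}"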

Submit the script with Slurm:

sbatch ~/rocker.sh
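
If you want to capture the job ID for later use (cancelling the job or locating its output file), sbatch's --parsable flag prints just the ID:

JOBID=$(sbatch --parsable ~/rocker.sh)
echo "Submitted job ${JOBID}"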

Example: enabling GPUs

If you need GPUs, request them in the SBATCH header and ensure the container is run with the --nv flag. Example SBATCH lines and a run command:

# In your rocker.sh SBATCH header
#SBATCH --partition=gpu
#SBATCH --gres=gpu:1   # request 1 GPU

# Then when invoking the container (inside the job script):
apptainer exec --cleanenv --nv /path/to/rocker.sif R

Adjust the image path and flags for your setup. If the module provides a wrapper script, check the module show output or ask the cluster admins.
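
To confirm the GPU is visible inside the container, you can run nvidia-smi through the same image from within the job (same image-path assumption as above):

apptainer exec --cleanenv --nv /path/to/rocker.sif nvidia-smi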

What to expect (job output)

The job's output file (the one set with --output / --error in the script) will contain connection instructions, including the compute node and port to use. Example (do not share any printed passwords or tokens):

===========================================
RStudio Server is running!

Compute node: node042
Port:         8787

On your LOCAL machine, run:

    ssh -N -L 8787:node042:8787 <your-username>@wi-hpc

Then open in your browser:

    http://localhost:8787

Log in with your cluster credentials (or the temporary token printed by the job). Do NOT publish printed tokens.

To stop the session, cancel the job:

    scancel <job-id>
===========================================
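
If you prefer to pull the node and port out of the output file rather than copying them by hand, a small awk one-liner on the login node works, assuming the output format shown above:

awk '/Compute node:/ {node=$3} /Port:/ {port=$2} END {print node ":" port}' my-job-output-file.out
# prints e.g. node042:8787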

Verification and troubleshooting

  • Check job status and find the job ID:

        squeue -u $USER
        # or for a specific job
        squeue -j <job-id>

  • Tail the job output to see the printed node and port:

        tail -n 200 my-job-output-file.out

  • On the compute node (for example, from an interactive shell) you can check that the service is listening:

        # run on the compute node
        ss -ltnp | grep 8787   # or: lsof -i :8787

  • If the SSH tunnel fails, verify you can SSH to the cluster and that the compute node name and port match those in the job output; a quick local check is shown below.
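
Once the tunnel is up, a quick check from your local machine confirms that something is answering on the forwarded port (an HTTP status line from RStudio's login page is a good sign):

curl -sI http://localhost:8787 | head -n 1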

Security notes

  • Always use an SSH tunnel to access RStudio; do not expose the service on public interfaces.
  • If the job prints a temporary password or token, treat it as secret and do not share it.
  • If you need longer-term remote access for multiple users, discuss a managed solution with IT.

Example connection and shutdown

  1. Start the SSH tunnel on your local machine (replace placeholders):

        ssh -N -L 8787:node042:8787 <your-username>@wi-hpc

  2. Open your browser to http://localhost:8787 and log in.

  3. When finished, cancel the Slurm job to free resources:

        scancel <job-id>

  4. Stop the SSH tunnel by pressing Ctrl-C in the terminal where you started it.

