A Vite + React front-end and a Go (Gin) back-end that execute Python code inside isolated Docker containers (CPU or optional GPU).
Write code cells, add Markdown comments and images, upload data files, and choose a runtime: Python (3.11), Base, ML, Deep (PyTorch), or Deep (TensorFlow/Keras), each with a matching resource preset.
Run a single cell or Run All; kernels stay alive between runs, output streams back in real time via WebSockets, and plots are captured automatically and shown inline.
New UI features include a light/dark/Neon theme switcher, system stats panel (CPU/RAM/GPU activity), an improved runtime dropdown with deep-learning profiles, and open/download actions for working with existing notebooks.
Frontend/UI Demo: the live deployment currently includes only the UI — backend runtimes and Docker-based execution are not active in this environment.
You can explore the interface, create/edit notebooks, and see simulated execution outputs (Demo Mode), but actual code execution is disabled.
🌐 Live UI: python-notebook-compiler
- Highlights
- Architecture
- Tech stack
- Running the project
- How it works (execution flow)
- Front-end overview
- Back-end overview
- Runtime images
- API
- Keyboard shortcuts
- Export / download
- Environment variables
- Security notes
- Troubleshooting
- Cleaning up / Removing the project (Windows, Docker Desktop + WSL2)
- Previews
- **Notebook-style blocks**: `code`, `comment` (Markdown), and `image` blocks in a linear notebook flow.
- **Drag & drop layout**: Reorder code, markdown, and image blocks with drag & drop; duplicate or delete blocks quickly.
- **Execute**: Run a single cell or Run All; cancel a running cell from the UI.
- **Real-time output (WebSockets)**: Code execution streams back line-by-line over WebSockets, so long-running cells show live logs instead of waiting for the end.
- **Kernels & state**: Later cells see the state defined in earlier ones (imports, variables, functions), giving a notebook-like workflow rather than isolated “scripts”.
- **Plot capture**: In the `Base`, `ML`, and `Deep` runtimes, `matplotlib` is patched so figures are saved server-side and returned to the UI as inline PNGs.
- **Runtime selection**:
  - Python (3.11 slim) — minimal runtime for quick scripts.
  - Base — numpy, pandas, scipy, scikit-learn, matplotlib, seaborn, pillow, requests, bs4, lxml, pyarrow, openpyxl.
  - ML — Base + xgboost, lightgbm.
  - Deep (PyTorch) — Base + PyTorch stack for deep learning workflows.
  - Deep (TensorFlow/Keras) — Base + TensorFlow/Keras stack.
- **CPU/GPU resources**: Small/Medium/Large presets map to Docker `--memory`/`--cpus`, and deep-learning runtimes expose an optional GPU-enabled profile when available.
- **Files panel**: Drag & drop or file picker, per-file and total quota indicators, toast notifications, and one-click file removal.
- **Open & export notebooks**: Work in the browser and download notebooks as `.ipynb` (Jupyter-compatible) or `.py` scripts; PDF export infrastructure is present (button currently disabled).
- **Line numbers & highlighting**: Custom tokenizer for Python syntax highlighting with line numbers, matching the notebook feel.
- **Markdown comments**: Toolbar for heading/bold/italic/code/list/link, live preview, undo/redo, and keyboard shortcuts for a smooth note-taking/documentation experience.
- **Themes & stats**: Light/Dark/Neon themes and a system stats panel to monitor CPU/RAM/GPU activity while cells are running.
Client (Vite/React)
├─ Notebook
│ ├─ Block flow (code / markdown / image)
│ ├─ Run / Run All / Cancel
│ ├─ Runtime & resource presets (CPU / RAM / GPU flag)
│ └─ Theme switcher (Light / Dark / Neon)
├─ CodeEditor
│ ├─ Line numbers, custom syntax highlighting
│ ├─ Tab indent, Ctrl/Cmd+Enter to run
│ └─ Real-time output area (streamed via WebSockets)
├─ CommentBlock (Markdown + toolbar, live preview, undo/redo)
├─ FilesPanel (uploads, quotas, drag & drop, toasts)
├─ DownloadMenu (.ipynb / .py exports, PDF infra)
├─ SystemStats (CPU / RAM / GPU activity during runs)
└─ KernelReconnectModal (shown when the kernel has been idle or stopped for a while)
Server (Go + Gin)
├─ REST: POST /execute
│ ├─ Validates size limits from base64 payloads
│ ├─ Creates temp workspace (user code + uploaded files)
│ ├─ Generates runner.py (matplotlib/deep-learning patch by runtime)
│ └─ Runs isolated Docker container (no network)
├─ WebSockets: streaming execution endpoint
│ ├─ Starts container / kernel for the selected runtime
│ ├─ Streams stdout/stderr chunks back to the client in real-time
│ └─ Emits execution status (started / completed / error / cancelled)
├─ Kernel / container manager
│ ├─ Maps notebooks to short-lived containers (stateful across cells)
│ ├─ Enforces CPU/RAM caps and optional GPU profile
│ └─ Cleans up idle/finished kernels
└─ Response builder
├─ Collects combined output + duration
└─ Reads _plots/*.png, encodes as data URIs for the UI
Runtimes (Docker)
├─ python:3.11-slim # minimal Python runtime
├─ py-sandbox:base # scientific stack (numpy/pandas/plotting/IO)
├─ py-sandbox:ml # Base + xgboost, lightgbm
├─ Deep (PyTorch) runtime # Base + PyTorch stack (CPU/GPU profile)
└─ Deep (TensorFlow/Keras) runtime # Base + TF/Keras stack (CPU/GPU profile)
Data flow:
UI → select runtime & resources → REST /execute or WebSocket execution channel → temp dir on server → runner.py → docker run (isolated container) → stream stdout/stderr + read _plots/*.png → send text + inline PNGs back to the UI.
Front-end
- React 18 + Vite
- Drag & drop: `@hello-pangea/dnd`
- Markdown: `react-markdown` + custom toolbar (headings, bold/italic, code, lists, links)
- Icons: `lucide-react`, `react-icons`
- PDF export infrastructure: `jspdf` (PDF button currently disabled)
- Custom syntax highlighter (in-repo tokenizer, `highlight.js`-style) — rule-based, no external highlighter library
- Real-time output: browser WebSocket client that streams logs line-by-line while a cell is running
- Themes & layout: Light / Dark / Neon theme switcher and a block-based notebook layout
- System stats: `SystemStats` panel showing CPU / RAM / GPU activity during execution
Back-end
- Go 1.23
- Gin web framework + CORS middleware
- REST API: `POST /execute` to submit code execution requests
- WebSockets: execution streaming endpoint for real-time stdout/stderr and status (started / completed / error / cancelled)
- Docker integration: uses the Docker CLI to spawn short-lived containers per request
  - `--network none` (no internet access)
  - `--cpus`, `--memory`, `--pids-limit` for resource caps
  - Optional GPU support via the `nvidia` runtime when GPU-enabled images are available
Runtimes (Docker images)
- `python:3.11-slim`
  Minimal Python runtime for quick scripts (no matplotlib; use Base/ML/Deep for plotting).
- `py-sandbox:base`
  Scientific stack: `numpy pandas scipy scikit-learn matplotlib seaborn pillow requests beautifulsoup4 lxml pyarrow openpyxl`.
- `py-sandbox:ml`
  Base + `xgboost lightgbm` for classical ML workflows.
- Deep (PyTorch) image
  Base stack + PyTorch for deep learning notebooks (CPU, with optional GPU variant depending on your Docker setup).
- Deep (TensorFlow/Keras) image
  Base stack + TensorFlow/Keras for deep learning notebooks (CPU, with optional GPU variant).
Prereqs: Docker Desktop running (Windows/macOS) or Docker daemon (Linux).
Optional for local dev: Node 18+, Go 1.23+.
Use this when you have no images/containers/volumes yet (e.g., after a full clean).
- Go to the project folder
  `cd "C:\Users\<USERNAME>\OneDrive\Desktop\python-notebook-compiler"`
- Build the Python Base image
  `docker compose build pybase`
  Contains Python 3.11 + scientific libs (numpy, pandas, matplotlib, …).
- Build the Python ML image
  `docker compose build pyml`
  Builds on top of Base (adds xgboost, lightgbm).
- Build the Deep Learning images (CPU + GPU)
  `docker compose build pytorchcpu tflowcpu pytorchgpu tflowgpu`
  Builds the CPU and GPU images for the deep-learning runtimes (pytorchcpu, tflowcpu, pytorchgpu, tflowgpu).
- Build the remaining services
  `docker compose build`
  Builds the server and client (and nginx base) images.
- Start everything (detached)
  `docker compose up -d`
- (Optional) Verify containers are up
  `docker ps`
  You should see all services in the Up state.
Default endpoints
- Frontend → http://localhost:5173
- Backend → http://localhost:8080
- Go to the project folder
  `cd "C:\Users\<USERNAME>\OneDrive\Desktop\python-notebook-compiler"`
- Start services
  `docker compose up -d`

If you changed code and need a rebuild for specific services:
# Frontend / backend
docker compose build client
docker compose build server
# Deep-learning runtimes (if you changed their Dockerfiles)
docker compose build pytorchcpu tflowcpu pytorchgpu tflowgpu
docker compose up -d

- `docker compose build client` → rebuilds the client image after client-side changes.
- `docker compose build server` → rebuilds the server image after server-side changes.
- Tip: `docker compose up --build -d` also works to rebuild whatever is needed automatically.
When the Dockerfile or dependencies have changed (Full clean rebuild)
docker compose down --volumes --remove-orphans
docker compose build --no-cache
docker compose up -d

Use this when:
- You made changes to a Dockerfile (e.g., added new `RUN`, `COPY`, or `ENV` instructions).
- The service’s dependencies have changed (`pip install`, `npm install`, system packages).
- You want to clear the Docker build cache (`--no-cache` → rebuild everything from scratch).
- You want to remove any old dependencies or data stored in volumes (`--volumes`).
- You want to clean up containers from the compose file that are no longer defined (`--remove-orphans`).
💡 In short:
- Code changed → partial rebuild (`docker compose build <serviceName>` or `docker compose up --build -d`)
- Dockerfile / dependencies changed → full rebuild + volume cleanup (`docker compose down --volumes --remove-orphans && docker compose build --no-cache && docker compose up -d`)
Important — build Docker runtimes at least once
- If you plan to use the Base/ML/Deep runtimes, you must first build the Docker images (see 1) Run with Docker → First-time setup) so those images exist.
- Local mode does not include those Python environments unless you install the dependencies yourself; use the plain Python runtime locally otherwise.
- Server (Go/Gin):
  cd server
  go mod tidy
  go run main.go
- Client (Vite/React) — open a new terminal:
  cd client
  npm install
  npm run dev

Make sure `client/.env` contains:

VITE_BACKEND_URL=http://localhost:8080
VITE_TOTAL_UPLOAD_LIMIT=52428800
VITE_SINGLE_FILE_LIMIT=5242880

For subsequent runs:

- Server:
  cd server
  go run main.go
- Client:
  cd client
  npm run dev
- Docker Desktop must be running before any `docker compose` commands.
- If `docker compose build pyml` fails, you likely haven’t built pybase first. Run:
  docker compose build pybase
  docker compose build pyml
- Deep-learning images may also depend on Base/ML layers. If a DL build fails, rebuild in order:
  docker compose build pybase pyml
  docker compose build pytorchcpu tflowcpu pytorchgpu tflowgpu
- GPU availability is checked from inside Docker, not on the bare host. Deep runtimes (Torch/TF with GPU) only enable the GPU preset if Docker can see an NVIDIA runtime and `nvidia-smi` works inside the container. It is therefore possible that your local Python scripts can use the GPU while PyLab reports “GPU not available on host, falling back to CPU”, because Docker itself has no access to the GPU.
- If you deleted images, you must rebuild Base, ML, and Deep again.
- If you delete volumes, any data stored there will be lost. (This project doesn’t ship a DB by default, but the warning applies if you add one.)
- Windows/macOS vs Linux: commands are the same; on Linux, ensure your user has access to `/var/run/docker.sock` (add your user to the `docker` group and re-login).
- Ports: 5173 (frontend), 8080 (backend). Adjust if they clash with other services on your machine.
Before any cell runs, the client starts a kernel:
- The UI calls a “start kernel” endpoint with:
  - `runtime` → `"python" | "base" | "ml" | "torch" | "tf"`
  - `mem` → `"256m" | "512m" | "1g" | "2g"`
  - `cpu` → `"0.25" | "0.5" | "1.0" | "2.0"`
  - `accelerator` → `"none"` or `"gpu"`
On the server:
- A temp directory is created (e.g. `kernel-XXXX`), with subfolders:
  - `_plots/` for figures
  - `requests/` for incoming cell requests
- A fresh copy of `kernel_server.py` is written into this temp folder.
- Based on the requested runtime + accelerator:
  - `python` → `python:3.11-slim`
  - `base` → `py-sandbox:base`
  - `ml` → `py-sandbox:ml`
  - `torch` → `py-sandbox:torch-cpu` / `py-sandbox:torch-gpu`
  - `tf` → `py-sandbox:tf-cpu` / `py-sandbox:tf-gpu`
- GPU support is detected with `docker info` (runtimes) and `nvidia-smi`; if GPU is requested but unavailable, the server falls back to CPU and returns a small note (a rough sketch of this check follows below).
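The actual detection lives in the Go backend; the following is only a rough Python sketch of the same idea, built from the commands named above.

```python
# Rough sketch of the GPU-availability check described above (the real check is
# in the Go backend; command names come from the text, the rest is illustrative).
import json
import subprocess

def gpu_available() -> bool:
    try:
        # 1) Does Docker expose an "nvidia" runtime?
        out = subprocess.run(
            ["docker", "info", "--format", "{{json .Runtimes}}"],
            capture_output=True, text=True, check=True,
        ).stdout
        if "nvidia" not in json.loads(out):
            return False
        # 2) Does nvidia-smi actually work?
        subprocess.run(["nvidia-smi"], capture_output=True, check=True)
        return True
    except (OSError, subprocess.CalledProcessError, json.JSONDecodeError):
        return False

accelerator = "gpu" if gpu_available() else "none"  # fall back to CPU with a note
```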
A long-lived Docker container is then started:
- `docker run -d`
- `-v <tempDir>:/code` and `-w /code`
- `--network none`
- `--memory <mem>` and `--cpus <cpu>`
- `--pids-limit 256` (CPU) or `512` (GPU)
- `--gpus 1` for GPU kernels
- multiple `-e` envs for thread limits and TF/Torch behavior:
  - `OMP_NUM_THREADS`, `MKL_NUM_THREADS`, `OPENBLAS_NUM_THREADS`, `NUMEXPR_NUM_THREADS`
  - `TF_NUM_INTRAOP_THREADS`, `TF_NUM_INTEROP_THREADS`, `TF_FORCE_GPU_ALLOW_GROWTH`
  - `TORCH_NUM_THREADS`, `TORCH_NUM_INTEROP_THREADS`
The container runs `python -u kernel_server.py`.

The backend keeps a `KernelInfo` record in memory:
- `KernelID`, `ContainerName`, `TempDir`, `Runtime`, `Mem`, `CPU`, `Accelerator`
- `LastActivity` and `BusyRuns` (how many cells are currently running)
The KernelID is returned to the client and used for later cell executions.
When you hit Run on a cell, the client sends a JSON payload to the ExecuteInKernel endpoint:
{
"kernelId": "<KernelID>",
"code": "print('hello')",
"files": [{ "name": "data.csv", "data": "data:text/csv;base64,AAAA..." }],
"cellId": "<optional-cell-id>"
}

On the server:

- The handler looks up the kernel by `kernelId`. If it’s missing, it returns 404.
- `beginRun(kernelId)` marks the kernel as busy and prevents it from being treated as idle.
- All uploaded files are decoded and written into the kernel’s temp directory, preserving their relative paths.
- A `cellId` is chosen (either from the request or auto-generated).
- A small request JSON is written into the kernel temp dir:

  // <tempDir>/requests/<cellId>.json
  { "cell_id": "<cellId>", "code": "<code>" }

  This file is what the Python kernel loop will pick up and execute.
As soon as a request is queued, the server starts a log watcher goroutine for that cell:

- It creates a `live_<cellId>.flag` file to indicate that live streaming is active.
- It tails `live_output.log` inside the kernel temp dir. `kernel_server.py` writes to this file using a `[cellId]` tag prefix for each active cell.
- The watcher:
  - Finds segments tagged with the current `[cellId]`.
  - Handles `\r` (carriage return) based progress lines so that only the latest “in-place” progress is shown.
  - Strips ANSI escape codes.
  - Sends each logical line into the global `Broadcast` channel as JSON, e.g.:
    `{ "event":"kernel_log", "id":"<KernelID>", "cell_id":"<cellId>", "line":"...", "overwrite":false }`
- The `/ws` WebSocket hub reads from `Broadcast` and fans out these messages to all connected clients.
- Once a `result_<cellId>.json` file appears and the log file stays quiet for a short “linger” window, the watcher:
  - flushes any pending progress line,
  - writes a `delivered_<cellId>.log` copy,
  - removes `live_<cellId>.flag`,
  - and exits.
On the frontend, this gives real-time, line-by-line output while the cell is running.
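The real watcher is a Go goroutine, but the idea is simple enough to sketch in a few lines of Python; the file names follow the text above, and the `[cellId]` tag format is as described.

```python
# Minimal sketch of the log-watcher idea: tail live_output.log, keep only lines
# tagged for this cell, strip ANSI codes, and stop once the result file exists.
import os
import re
import time

ANSI = re.compile(r"\x1b\[[0-9;]*[A-Za-z]")  # strip ANSI escape sequences

def tail_cell_log(log_path, cell_id, result_path):
    tag = f"[{cell_id}]"
    with open(log_path, encoding="utf-8", errors="replace") as f:
        while True:
            line = f.readline()
            if not line:
                if os.path.exists(result_path):   # result written and log quiet -> stop
                    break
                time.sleep(0.05)
                continue
            if line.startswith(tag):
                # In the real server this becomes a kernel_log WebSocket event.
                print(ANSI.sub("", line[len(tag):]).strip())
```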
Inside the container, kernel_server.py runs a simple loop over the requests/ directory:
- It scans for `*.json` files in `requests/`.
- For each file it loads `{cell_id, code}`, then calls:
`run_cell(cell_id, code_str)`

`run_cell` does the following:

- Sets `RUNNING_CELL_ID` so that `TeeStream` can tag lines in `live_output.log` as `[cellId] ...`.
- Patches `sys.stdout` and `sys.stderr` to duplicate output:
  - one copy to the base stream + `live_output.log` (for WebSocket streaming),
  - one copy to in-memory buffers (`stdout_buf`, `stderr_buf`) used in the final result.
- Executes the user code in a shared namespace:

  GLOBAL_NS = {"__name__": "__main__"}
  LOCAL_NS = GLOBAL_NS
  exec(code_str, GLOBAL_NS, LOCAL_NS)

  This means variables, imports, functions, and classes persist across cells, giving true notebook-style state.
- Uses a patched `matplotlib` backend (Agg) and `save_all_figs()` to:
  - save each open figure to `_plots/plot_*.png`,
  - base64-encode them as `"data:image/png;base64,..."`.
- Catches errors:
  - `KeyboardInterrupt` (coming from `SIGINT`) is turned into a friendly `"KeyboardInterrupt: Execution interrupted by user."`.
  - Other exceptions are converted to a full Python traceback via `traceback.format_exc()`.
- After execution it:
  - flushes streams and `live_log`,
  - applies a short adaptive linger (up to ~0.6 seconds) to capture slightly delayed prints/plots,
  - re-runs `save_all_figs()` to catch any late figures,
  - writes a result JSON into `result_<cellId>.json`:

    { "stdout": "<captured stdout>", "stderr": "<captured stderr>", "images": ["data:image/png;base64,..."], "error": "<traceback-or-empty>", "duration": <seconds>, "interrupted": <true|false> }
- Restores `sys.stdout` / `sys.stderr`, resets `RUNNING_CELL_ID`, and re-installs a no-op SIGINT handler until the next cell.

A condensed Python sketch of this loop is shown below.
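This is not the real `kernel_server.py`; it is a condensed sketch of the request loop and `run_cell` shape described above, with the file layout taken from the text and everything else simplified.

```python
# Condensed sketch of the kernel request loop: poll requests/, exec the code in a
# shared namespace, capture output and plots, and write result_<cellId>.json.
import base64, contextlib, glob, io, json, os, time, traceback

GLOBAL_NS = {"__name__": "__main__"}          # shared namespace -> state persists across cells

def save_all_figs(plots_dir="_plots"):
    """Save open matplotlib figures and return them as data URIs (if matplotlib is present)."""
    images = []
    try:
        import matplotlib.pyplot as plt
        os.makedirs(plots_dir, exist_ok=True)
        for i, num in enumerate(plt.get_fignums()):
            path = os.path.join(plots_dir, f"plot_{i}.png")
            plt.figure(num).savefig(path)
            with open(path, "rb") as f:
                images.append("data:image/png;base64," + base64.b64encode(f.read()).decode())
        plt.close("all")
    except ImportError:
        pass
    return images

def run_cell(cell_id, code_str):
    start, out, err, error = time.time(), io.StringIO(), io.StringIO(), ""
    try:
        with contextlib.redirect_stdout(out), contextlib.redirect_stderr(err):
            exec(code_str, GLOBAL_NS, GLOBAL_NS)
    except KeyboardInterrupt:
        error = "KeyboardInterrupt: Execution interrupted by user."
    except Exception:
        error = traceback.format_exc()
    result = {"stdout": out.getvalue(), "stderr": err.getvalue(),
              "images": save_all_figs(), "error": error,
              "duration": time.time() - start,
              "interrupted": error.startswith("KeyboardInterrupt")}
    with open(f"result_{cell_id}.json", "w") as f:
        json.dump(result, f)

while True:                                    # poll requests/ for new cells
    for path in sorted(glob.glob("requests/*.json")):
        with open(path) as f:
            req = json.load(f)
        os.remove(path)
        run_cell(req["cell_id"], req["code"])
    time.sleep(0.1)
```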
Back in ExecuteInKernel:

- The handler enters a loop:
  - It periodically checks if the Docker container is still running (`docker ps --filter name=...`). If the container vanished (e.g., the user stopped it), a `499` JSON (`"kernel stopped by user"`) is returned.
  - It checks for `result_<cellId>.json` in the temp dir.
  - It respects a global timeout: `EXECUTE_TIMEOUT_SECONDS` (from env).
- If the timeout is reached, the server:
  - sends `docker kill --signal=SIGINT <containerName>`,
  - writes a synthetic result JSON with `error = "timeout waiting kernel"`,
  - returns a `504` response:
{
"error": "timeout waiting kernel",
"kernelTimeout": true,
"cellId": "<cellId>",
"timeoutSeconds": <EXECUTE_TIMEOUT_SECONDS>
}

If a real result is found:

- It is loaded into a Go struct `{Stdout, Stderr, Images, Error, Duration, ...}` and the `result_*.json` file is removed.
- Both `Stdout` and `Stderr` are passed through `collapseCR` (see the sketch below):
  - `\r\n` → `\n`,
  - progress lines with `\r` are compacted to their final form,
  - ANSI escape sequences are stripped.
- If the `live_<cellId>.flag` still exists, buffered `Stdout` is discarded so text that was already streamed over WebSockets is not rendered twice.
- Oversized outputs are trimmed to the last 256 KB to avoid huge responses.
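The real `collapseCR` helper is written in Go; the following is only a rough Python equivalent of the normalization described above, with illustrative regexes.

```python
# Rough Python sketch of the collapseCR normalization described above.
import re

ANSI = re.compile(r"\x1b\[[0-9;]*[A-Za-z]")

def collapse_cr(text: str) -> str:
    text = ANSI.sub("", text)                 # strip ANSI escape sequences
    text = text.replace("\r\n", "\n")         # normalize Windows line endings
    lines = []
    for line in text.split("\n"):
        # "progress: 10%\rprogress: 100%" -> keep only the final in-place update
        lines.append(line.rsplit("\r", 1)[-1])
    return "\n".join(lines)

print(collapse_cr("progress: 10%\rprogress: 100%\r\ndone\n"))
# prints:
# progress: 100%
# done
```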
Finally, a human-readable Output string is assembled:
<Stdout>
STDERR:
<Stderr>
Cell ran in <Duration> sec
ERROR:
<Error-if-any>
This, together with Images and CellID, is returned as the JSON response.

- State across cells: because all cells run inside the same Python process and share `GLOBAL_NS`, later cells can freely reuse earlier definitions (very similar to Jupyter notebooks).
- Interrupting a cell: the dedicated Interrupt endpoint sends `SIGINT` to the kernel container:
  `docker kill --signal=SIGINT <containerName>`
  This triggers a `KeyboardInterrupt` inside `run_cell`, which is turned into a clean error message while keeping the kernel alive for the next requests (a minimal sketch of this signal handling follows below).
- Idle cleanup: each `KernelInfo` tracks `LastActivity` and `BusyRuns`. A background sweeper (configured via `KERNEL_IDLE_MINUTES` and `IDLE_SWEEP_SECONDS`) can periodically:
  - detect kernels that have been idle for too long,
  - stop their Docker containers,
  - and remove their temp directories to reclaim resources.
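A minimal sketch of the per-cell SIGINT handling described above (assumed shape, not the exact `kernel_server.py` implementation):

```python
# SIGINT only interrupts the running cell; when idle, the kernel stays alive.
import signal

RUNNING_CELL_ID = None  # set by run_cell while a cell is executing

def _sigint_handler(signum, frame):
    if RUNNING_CELL_ID is not None:
        # A cell is executing: surface a KeyboardInterrupt inside exec(...)
        raise KeyboardInterrupt
    # No cell running: ignore the signal so the kernel process keeps polling.

signal.signal(signal.SIGINT, _sigint_handler)
```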
Notebook.jsx
- Owns the notebook state: list of blocks (code / markdown / image), drag & drop ordering, notebook title.
- Handles Run / Run All / Stop:
- ensures a kernel is started for the selected runtime + resources + accelerator (CPU/GPU),
  - sends per-cell execution requests (`kernelId`, `cellId`, `code`, `files`),
  - subscribes to WebSocket `kernel_log` events and merges streamed logs with the final result.
- Tracks per-cell status flags (`isRunning`, `isPending`, `isExecuted`, timeout/interrupted state) and updates outputs in real time.
- Exposes runtime controls: runtime dropdown (`Python`, `Base`, `ML`, `Torch`, `TF`), mem/CPU presets, and an optional GPU toggle.
- Integrates the theme switcher (Light / Dark / Neon) and toggles the SystemStats panel.
-
CodeEditor
  - Textarea + overlaid custom highlighter with global line numbers (via a `startLine` offset).
  - Keyboard shortcuts: `Tab` inserts two spaces; Ctrl/Cmd + Enter runs the current cell.
  - Renders the cell’s live output panel, which shows WebSocket-streamed logs while the cell is running and the final combined output when it completes.
  - Supports scrollable output, execution duration footer, and basic error styling.
-
CommentBlock
- Markdown editor with undo/redo (Ctrl+Z / Ctrl+Y or Ctrl+Shift+Z), Tab indent, and Ctrl/Cmd + Enter to “commit” the comment.
  - Rich toolbar: headings, bold/italic, inline/code blocks, ordered/unordered lists, and smart link insertion with `https://` normalization.
  - Live preview area so documentation and notes can be edited and previewed in-place between code cells.
-
FilesPanel
- Drag & drop upload area plus multi-select file input.
  - Shows per-file status (`reading` → `done`/`error`) and uses toast notifications on success/failure.
  - Mirrors backend limits using `VITE_TOTAL_UPLOAD_LIMIT` and `VITE_SINGLE_FILE_LIMIT`, and shows a global quota bar for total uploaded size.
  - Allows removing individual files so the next cell run uses a clean workspace.
-
DownloadMenu
  - Jupyter Notebook (`.ipynb`): code blocks become Jupyter code cells, markdown blocks become markdown cells, and the latest text output is attached as a `stream` output.
  - Python script (`.py`): comment blocks are converted to `#` comment lines; blocks are separated by blank lines in execution order.
  - PDF: the groundwork is wired via `jspdf`; the button is visible but disabled until the export flow is finalized.
-
SystemStats & kernel UI
- SystemStats component uses a dedicated WebSocket endpoint to stream CPU / RAM / disk / GPU usage (including VRAM and temperature) every few seconds.
- A small kernel/status area surfaces GPU availability notes, current runtime/profile, and reconnect information.
- A KernelReconnectModal (or similar UI) is shown when the backend reports that a kernel was stopped or lost, offering to start a fresh kernel and continue working from the current notebook state.
-
Kernel management (Go + Gin)
-
Maintains an in-memory registry of live kernels:
type KernelInfo struct {
    KernelID      string
    ContainerName string
    TempDir       string
    Runtime       string
    Mem           string
    CPU           string
    Accelerator   string // "none" | "gpu"
    LastActivity  time.Time
    BusyRuns      int
}
-
Each kernel corresponds to a dedicated Docker container running
`kernel_server.py` in a private temp workspace.
Background sweepers use env variables like
`KERNEL_IDLE_MINUTES` and `IDLE_SWEEP_SECONDS` to detect idle kernels, stop their containers, and remove temp directories.
-
-
Execution endpoints
-
Start kernel
-
Accepts
{runtime, mem, cpu, accelerator}. -
Creates a temp dir, writes
`kernel_server.py`, prepares `_plots/` and `requests/` folders.
Selects the correct runtime image (Python / Base / ML / Torch / TF, CPU or GPU).
-
Starts a detached Docker container running:
python -u kernel_server.py
-
Registers the kernel in
`Kernels` and returns a `kernelId` (plus an optional GPU fallback note).
-
-
Execute in kernel
-
Accepts
{kernelId, code, files[], cellId}. -
Writes uploaded files into the kernel’s temp dir.
-
Drops a JSON request file into
requests/<cellId>.jsonfor the Python kernel loop. -
Spawns a log watcher goroutine that tails
live_output.logand streams tagged lines (per cell) over WebSockets. -
Waits for
result_<cellId>.json, enforcingEXECUTE_TIMEOUT_SECONDS. -
On timeout, sends
SIGINTto the container and returns a 504 JSON withkernelTimeout: true. -
On success, returns:
{ "output": "combined stdout/stderr + duration", "images": ["data:image/png;base64,..."], "error": "", "cellId": "<cellId>" }
-
-
Interrupt kernel
- Sends
`docker kill --signal=SIGINT <containerName>` to stop the current cell only.
- The kernel process stays alive; `kernel_server.py` turns this into a clean `KeyboardInterrupt` for that cell.
- Sends
-
-
Real-time streaming (WebSockets)
-
A central hub reads JSON strings from a buffered
`Broadcast` channel and fans out messages to all connected WebSocket clients.
The
`ExecuteInKernel` watcher writes messages such as: `{ "event": "kernel_log", "id": "<kernelId>", "cell_id": "<cellId>", "line": "printed line...", "overwrite": false }`
Clients subscribe once and receive live logs for all running cells, including “overwrite” lines for progress-style outputs.
-
-
Python kernel process (
kernel_server.py)-
Runs an infinite loop over the
requests/directory; for each*.json:- Reads
{cell_id, code}. - Executes the code in a shared namespace so state persists across cells.
- Uses a
`TeeStream` to mirror stdout/stderr both to the console and `live_output.log` with a `[cellId]` tag.
- Captures plots into
`_plots/` and encodes them as `data:image/png;base64,...`.
- Writes
`result_<cellId>.json` with `stdout`, `stderr`, `images`, `error`, `duration`, and `interrupted` flags.
- Reads
-
Enforces conservative thread limits via env vars (OMP/MKL/OPENBLAS/NUMEXPR, TF intra/inter op, Torch threads) and can enable TF GPU memory growth when requested.
-
-
System stats & monitoring
-
HTTP endpoint returns a JSON snapshot of:
- CPU usage and core count (via
`gopsutil/cpu` and `host.Info()`),
- RAM and disk usage (via
`gopsutil/mem`, `gopsutil/disk`),
- GPU name, load, VRAM usage/total, and temperature (parsed from
nvidia-smiwhen available).
- CPU usage and core count (via
-
A dedicated SystemStats WebSocket pushes the same metrics every few seconds, powering the live stats panel in the UI.
-
-
Docker integration & resource caps
-
Every kernel container is started with strict isolation:
- `--network none`
- `--memory <mem>` and `--cpus <cpu>` (clamped via `DEFAULT_MEM`, `DEFAULT_CPU`, `MAX_MEM`, `MAX_CPU`)
- `--pids-limit 256` for CPU kernels and `512` for GPU kernels
- `--gpus 1` for GPU-enabled Torch/TF runtimes
- environment variables controlling thread counts and TF/Torch behavior
-
This keeps each notebook execution sandboxed, resource-bounded, and independent of other kernels running on the same host.
-
-
- `python:3.11-slim`
  Minimal Python 3.11 runtime used for the “Python” profile.
  Good for quick scripts and experiments; does not ship matplotlib by default (use Base/ML/Deep for plotting).
- `py-sandbox:base`
  General-purpose scientific stack for the “Base” runtime:
  `numpy`, `pandas`, `scipy`, `scikit-learn`, `matplotlib`, `seaborn`, `pillow`, `requests`, `beautifulsoup4`, `lxml`, `pyarrow`, `openpyxl`.
- `py-sandbox:ml`
  Extends Base with classic ML libraries for the “ML” runtime: `xgboost`, `lightgbm`.
- `py-sandbox:torch-cpu`
  Deep-learning image for the “Torch (CPU)” runtime:
  Base stack + PyTorch CPU build, configured with conservative thread limits (`TORCH_NUM_THREADS`, `TORCH_NUM_INTEROP_THREADS`).
- `py-sandbox:torch-gpu`
  GPU-enabled deep-learning image for the “Torch (GPU)” runtime:
  Base stack + PyTorch with CUDA support, intended to be run with `--gpus 1` and controlled via the same Torch thread env vars.
- `py-sandbox:tf-cpu`
  Deep-learning image for the “TF (CPU)” runtime:
  Base stack + TensorFlow/Keras CPU build, with intra/inter-op threads bounded by `TF_NUM_INTRAOP_THREADS` and `TF_NUM_INTEROP_THREADS`.
- `py-sandbox:tf-gpu`
  GPU-enabled deep-learning image for the “TF (GPU)” runtime:
  Base stack + TensorFlow/Keras with CUDA, honoring `TF_FORCE_GPU_ALLOW_GROWTH` so GPU memory growth can be enabled when requested.
All images are built via docker-compose.yml using builder services such as:
`docker compose build pybase pyml pytorchcpu tflowcpu pytorchgpu tflowgpu`

and then used by the backend based on the selected runtime and accelerator (CPU/GPU).
POST /api/kernels/start
Starts a new isolated kernel (backed by a Docker container running kernel_server.py) for a given runtime + resource preset.
Request body
{
"runtime": "python | base | ml | torch | tf",
"mem": "256m | 512m | 1g | 2g",
"cpu": "0.25 | 0.5 | 1.0 | 2.0",
"accelerator": "none | gpu"
}

Response
{
"kernelId": "k-1730918470123456-123456",
"note": "⚠ GPU not available on host. Falling back to CPU."
`note` is optional and typically used to report GPU fallbacks (e.g., GPU requested but not available, or runtime not GPU-capable).
POST /api/kernels/execute
Executes a single cell inside an already running kernel. The kernel keeps state across calls.
Request body
{
"kernelId": "k-1730918470123456-123456",
"cellId": "optional-cell-id", // optional; autogenerated if omitted
"code": "print('hello from PyLab')",
"files": [
{
"name": "data/example.csv",
"data": "data:text/csv;base64,AAAA..."
}
]
}

Response (HTTP 200 on success)
{
"output": "hello from PyLab\n\nCell ran in 0.42 sec\n",
"images": ["data:image/png;base64,..."],
"error": "",
"cellId": "resolved-cell-id"
}

Timeout response (HTTP 504)
{
"error": "timeout waiting kernel",
"kernelTimeout": true,
"cellId": "resolved-cell-id",
"timeoutSeconds": 60
}

If the kernel container disappears (e.g. manually stopped), the handler returns a 4xx/5xx JSON with `"error": "kernel stopped by user"`.
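A short end-to-end example of calling the two endpoints above with the Python `requests` library; the base URL and payload values are illustrative and assume the default local setup.

```python
# Start a kernel, run one cell in it, and print the combined output.
import requests

BASE = "http://localhost:8080"

# 1) Start a kernel for the Base runtime
kernel = requests.post(f"{BASE}/api/kernels/start", json={
    "runtime": "base",
    "mem": "512m",
    "cpu": "0.5",
    "accelerator": "none",
}).json()
kernel_id = kernel["kernelId"]

# 2) Execute a cell inside that kernel (state persists across calls)
result = requests.post(f"{BASE}/api/kernels/execute", json={
    "kernelId": kernel_id,
    "code": "import pandas as pd\nprint(pd.__version__)",
    "files": [],
}).json()

print(result["output"])        # combined stdout/stderr + duration
print(len(result["images"]))   # inline PNG data URIs, if any plots were produced
```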
### 3. Interrupt a running cell
POST /api/kernels/:id/interrupt
Sends a SIGINT into the kernel container to interrupt the currently running cell (the kernel process stays alive).
Response
{
"status": "interrupted"
}

If the kernel does not exist, a 404 JSON is returned:
{
"error": "kernel not found"
}

GET /ws
Upgrades to a WebSocket connection and subscribes to kernel events broadcast by the backend.
The Go server pushes JSON strings, for example:
{
"event": "kernel_log",
"id": "k-1730918470123456-123456",
"cell_id": "cell-1",
"line": "Epoch 1/10 - loss=0.123",
"overwrite": false
}-
- `overwrite: true` is used for progress-style lines that are meant to be redrawn in-place (handled by the frontend).
- The client typically opens this WebSocket once and listens for logs for all cells in all kernels it cares about.
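A minimal listener for this endpoint, sketched with the third-party `websockets` package (`pip install websockets`); the URL and the handling of `overwrite` lines are illustrative.

```python
# Subscribe to /ws and print kernel_log events as they arrive.
import asyncio
import json
import websockets

async def listen(url="ws://localhost:8080/ws"):
    async with websockets.connect(url) as ws:
        async for message in ws:
            event = json.loads(message)
            if event.get("event") == "kernel_log":
                end = "\r" if event.get("overwrite") else "\n"
                print(f'[{event["cell_id"]}] {event["line"]}', end=end)

asyncio.run(listen())
```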
GET /api/system-stats
Returns a single JSON snapshot of system-level metrics (CPU, RAM, disk, GPU, OS):
{
"cpu": {
"usage": 23.5,
"cores": 16
},
"ram": {
"used": 12.3,
"total": 31.9
},
"disk": {
"used": 250.1,
"total": 476.9
},
"gpu": {
"name": "NVIDIA GeForce RTX ...",
"load": 12.0,
"vram_used": 1024.0,
"vram_total": 8192.0,
"temp": 52.0
},
"system": {
"os": "windows" | "linux" | "darwin",
"uptimeHours": 1234
}
}

Values/units are approximate and depend on the host OS and `nvidia-smi` availability.
GET /api/system-stats/ws
Upgrades to WebSocket and pushes the same structure as GET /api/system-stats every few seconds, e.g.:
{
"cpu": { "...": "..." },
"ram": { "...": "..." },
"disk": { "...": "..." },
"gpu": { "...": "..." },
"system": { "...": "..." }
}

This powers the live SystemStats panel in the UI.
For simpler setups, a legacy one-shot endpoint can be exposed:
POST /execute
Runs code in a fresh throwaway container using the PY_IMAGE env (defaults to python:3.11-slim) and returns combined output + plots, without starting a reusable kernel.
The kernel-based endpoints above are the recommended path for full notebook-style workflows.
- Code cells
  - `Ctrl/Cmd + Enter` → run the current cell.
  - `Tab` → insert two spaces (soft indent inside the editor).
- Comment (Markdown) cells
  - `Ctrl/Cmd + Enter` → commit the comment and update the preview.
  - `Ctrl + Z / Ctrl + Y` or `Ctrl + Shift + Z` → undo / redo.
  - `Tab` → indent the current line or selection.
Shortcuts are designed to feel like a minimal notebook: run-on-enter for code, and quick commit/preview for markdown.
- Jupyter Notebook (`.ipynb`)
  - Code blocks are exported as Jupyter code cells.
  - Comment blocks become markdown cells.
  - The latest text output for each code cell is attached as a `stream` output (a sketch of this structure follows below).
- Python script (`.py`)
  - Comment blocks are converted into `#`-prefixed comment lines.
  - Blocks are written in notebook order with blank lines between them.
- PDF
  - The PDF export pipeline is wired via `jspdf`, but the button is currently disabled.
  - Once finalized, it will render the notebook (code + outputs) into a static PDF snapshot.
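The actual export runs in the React DownloadMenu; this Python sketch only illustrates the nbformat-4 structure described above, with a made-up block list and output text.

```python
# Convert a simple block list into an .ipynb-style JSON document.
import json

blocks = [
    {"type": "comment", "text": "# Analysis notes"},
    {"type": "code", "code": "print('hello')", "output": "hello\n"},
]

cells = []
for b in blocks:
    if b["type"] == "comment":
        cells.append({"cell_type": "markdown", "metadata": {}, "source": b["text"]})
    elif b["type"] == "code":
        cells.append({
            "cell_type": "code",
            "metadata": {},
            "execution_count": None,
            "source": b["code"],
            # latest text output attached as a "stream" output
            "outputs": [{"output_type": "stream", "name": "stdout", "text": b["output"]}],
        })

notebook = {"nbformat": 4, "nbformat_minor": 5, "metadata": {}, "cells": cells}
with open("notebook.ipynb", "w") as f:
    json.dump(notebook, f, indent=1)
```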
Front-end (client/.env)
VITE_BACKEND_URL=http://localhost:8080
# === FILE UPLOAD LIMITS ===
VITE_TOTAL_UPLOAD_LIMIT=52428800 # 50 MB
VITE_SINGLE_FILE_LIMIT=5242880   # 5 MB
- `VITE_BACKEND_URL` — frontend → backend base URL.
  - Local dev: `http://localhost:8080`
  - In Docker Compose you would typically point this to `http://server:8080`.
- `VITE_TOTAL_UPLOAD_LIMIT` / `VITE_SINGLE_FILE_LIMIT` — UI-side mirrors of the backend upload limits (bytes).
  These only control what the UI shows and allows; the real limits are enforced on the server.
Back-end (server/.env)
# === LIMITS ===
TOTAL_UPLOAD_LIMIT=52428800
SINGLE_FILE_LIMIT=5242880
# === PYTHON IMAGE (legacy) ===
PY_IMAGE=python:3.11-slim
# === DEFAULT PRESET (used if the UI sends an empty value) ===
DEFAULT_MEM=512m
DEFAULT_CPU=0.5
# === CAPS (clamped server-side) ===
MAX_MEM=2g
MAX_CPU=2.0
# === IDLE / EXECUTION ===
KERNEL_IDLE_MINUTES=5
IDLE_SWEEP_SECONDS=5
IDLE_DEBUG=1
EXECUTE_TIMEOUT_SECONDS=60
# === THREAD DEFAULTS (passed to the child container via docker run -e) ===
OMP_NUM_THREADS_DEFAULT=1
MKL_NUM_THREADS_DEFAULT=1
OPENBLAS_NUM_THREADS_DEFAULT=1
NUMEXPR_NUM_THREADS_DEFAULT=1
TF_NUM_INTRAOP_THREADS_DEFAULT=1
TF_NUM_INTEROP_THREADS_DEFAULT=1
TF_FORCE_GPU_ALLOW_GROWTH_DEFAULT=true
# Torch defaults
TORCH_NUM_THREADS_DEFAULT=1
TORCH_NUM_INTEROP_THREADS_DEFAULT=1

- Upload limits
  - `TOTAL_UPLOAD_LIMIT` — maximum total size of all files in a single request (bytes). Default in this repo: `50 * 1024 * 1024` (50 MB).
  - `SINGLE_FILE_LIMIT` — maximum size per file (bytes). Default in this repo: `5 * 1024 * 1024` (5 MB).
- Legacy single-shot runtime
  - `PY_IMAGE` — default Docker image for the legacy single-shot `/execute` flow (without kernels). Defaults to `python:3.11-slim`. Modern flows use the kernel-based images (`python`, `base`, `ml`, `torch`, `tf`) instead.
- Resource defaults & caps
  - `DEFAULT_MEM` → default memory preset if the UI sends an empty value (`256m | 512m | 1g | 2g`, default: `512m`).
  - `DEFAULT_CPU` → default CPU preset if the UI sends an empty value (`0.25 | 0.5 | 1.0 | 2.0`, default: `0.5`).
  - `MAX_MEM` / `MAX_CPU` → hard caps applied server-side. If the client sends a higher value, it is clamped to these limits.
- Idle / execution behavior
  - `KERNEL_IDLE_MINUTES` — how long a kernel can sit idle (no active runs) before it becomes a candidate for cleanup.
  - `IDLE_SWEEP_SECONDS` — how often the sweeper checks for idle kernels.
  - `IDLE_DEBUG` — optional debug flag for logging idle cleanup decisions.
  - `EXECUTE_TIMEOUT_SECONDS` — global per-cell execution timeout. If a cell runs longer than this, the server sends `SIGINT` to the kernel and returns a timeout response.
- Thread defaults (propagated into containers)
  These env vars are passed into each kernel container as `-e` flags and used by BLAS / TF / Torch to limit CPU usage inside the container:
  `OMP_NUM_THREADS_DEFAULT`, `MKL_NUM_THREADS_DEFAULT`, `OPENBLAS_NUM_THREADS_DEFAULT`, `NUMEXPR_NUM_THREADS_DEFAULT`,
  `TF_NUM_INTRAOP_THREADS_DEFAULT`, `TF_NUM_INTEROP_THREADS_DEFAULT`, `TORCH_NUM_THREADS_DEFAULT`, `TORCH_NUM_INTEROP_THREADS_DEFAULT`
- TensorFlow GPU memory behavior
  - `TF_FORCE_GPU_ALLOW_GROWTH_DEFAULT` — if set to `true`/`1`, TF kernels enable GPU memory growth, so TensorFlow gradually allocates GPU memory instead of grabbing all available VRAM up front.
- The server validates resource-related values and clamps them to safe defaults when they are out of range or missing.
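For context, this is how such limits are typically consumed inside a kernel container; it is an illustrative sketch, not project code. BLAS/OpenMP read the `OMP_NUM_THREADS`-style variables automatically, while Torch and TensorFlow also expose explicit APIs.

```python
# Apply thread limits from environment variables inside a kernel process.
import os

n = int(os.environ.get("OMP_NUM_THREADS", "1"))

try:
    import torch
    torch.set_num_threads(int(os.environ.get("TORCH_NUM_THREADS", n)))
    torch.set_num_interop_threads(int(os.environ.get("TORCH_NUM_INTEROP_THREADS", "1")))
except ImportError:
    pass

try:
    import tensorflow as tf
    tf.config.threading.set_intra_op_parallelism_threads(
        int(os.environ.get("TF_NUM_INTRAOP_THREADS", n)))
    tf.config.threading.set_inter_op_parallelism_threads(
        int(os.environ.get("TF_NUM_INTEROP_THREADS", "1")))
except ImportError:
    pass
```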
-
Docker-isolated kernels
- Each notebook session runs inside its own Docker container (one container per kernel/runtime).
- Containers are started with:
    - `--network none` → no outbound or inbound network access from user code,
    - `--memory <mem>` and `--cpus <cpu>` → hard RAM / CPU limits,
    - `--pids-limit 256` for CPU kernels and `--pids-limit 512` for GPU kernels.
-
Filesystem isolation
  - Every kernel gets a private temp directory (mounted as `/code` in the container).
  - Uploaded files, request JSONs, logs, and plot images live only inside this workspace.
  - When a kernel is stopped/cleaned up, its container and temp directory are removed.
-
GPU usage
  - GPU-accelerated runtimes (`torch-gpu`, `tf-gpu`) are started with `--gpus 1` and a constrained set of TF/Torch env vars.
  - TensorFlow kernels can enable memory growth (instead of grabbing all VRAM) via `TF_FORCE_GPU_ALLOW_GROWTH`.
-
Execution limits
  - A global `EXECUTE_TIMEOUT_SECONDS` protects against very long-running cells:
    - on timeout, the server sends `SIGINT` to the kernel container,
    - the running cell is interrupted with a clean `KeyboardInterrupt` and a 504 timeout response.
  - Thread-count env vars (OMP/MKL/OPENBLAS/NUMEXPR/TF/Torch) are set to conservative defaults to avoid oversubscribing CPU cores.
-
Idle cleanup
  - Kernels track `LastActivity` and `BusyRuns`.
  - A periodic sweeper (controlled by `KERNEL_IDLE_MINUTES` and `IDLE_SWEEP_SECONDS`) can stop idle containers and delete their temp directories to reclaim resources.
-
Deployment considerations
- The Go backend communicates with Docker via the Docker daemon (e.g. a mounted Docker socket in Compose).
- As with any system that can start containers on demand, this should be deployed only in trusted environments (e.g. your own dev machine, lab server, or controlled infra), not as a multi-tenant untrusted code execution service on the open internet.
-
“Docker is not running. Please start Docker Desktop.”
  Make sure Docker Desktop (Windows/macOS) or the Docker daemon (Linux) is running before calling any `docker compose` commands or starting kernels.
Permission errors on Linux
  If you see `permission denied` for `docker` or the socket:
  - Ensure your user has access to `/var/run/docker.sock`:
    - add your user to the `docker` group,
    - log out and log back in.
  - Test with: `docker ps`.
-
WSL2 / Windows
- Use Docker Desktop with the WSL2 backend enabled.
- Keep an eye on available disk space inside the WSL distribution — large images (especially deep-learning ones) can fill it quickly.
-
No plots displayed
- Make sure you are using a plotting-capable runtime: Base, ML, Torch, or TF.
  - The plain Python runtime does not ship `matplotlib` by default.
  - If using Torch/TF, confirm that `matplotlib` is installed in the corresponding image (it is in the base stack).
-
Upload limit exceeded
  - The backend enforces `SINGLE_FILE_LIMIT` and `TOTAL_UPLOAD_LIMIT`.
  - Increase the values in `server/.env` if needed, and update the UI mirrors in `client/.env`: `VITE_SINGLE_FILE_LIMIT`, `VITE_TOTAL_UPLOAD_LIMIT`.
-
GPU runtime fails to start / “Falling back to CPU” note
  - The `StartKernel` endpoint may return a `note` like:
    - `⚠ GPU not available on host. Falling back to CPU.`
    - `⚠ GPU preset is available only for Torch/TF runtimes. Falling back to CPU.`
  - Check that:
    - `nvidia-smi` works on the host,
    - `docker info --format '{{json .Runtimes}}'` lists an `nvidia` runtime,
    - you selected a GPU-capable runtime: Torch or TF.
-
GPU stats always show “None”
  - The system stats endpoints rely on `nvidia-smi`:
    - if it’s not installed or no NVIDIA GPU is present, the backend reports `gpu.name = "None"`.
  - On laptops with hybrid graphics, make sure the app is running on the NVIDIA GPU.
-
No live output in the UI (only final result)
  - The frontend listens to `/ws` for `kernel_log` events.
  - If you see only the final output:
    - check that your browser/dev environment allows WebSocket connections to the backend,
    - verify the backend logs for WebSocket upgrade errors,
    - ensure any reverse proxy or dev setup forwards WebSocket traffic correctly.
-
Cells get stuck in “Running”
- If a cell appears stuck:
    - click Stop / interrupt; this sends `SIGINT` to the kernel container,
    - check the backend logs to see if `result_<cellId>.json` is being written.
  - Very long-running cells are cut off by `EXECUTE_TIMEOUT_SECONDS`; when that happens, the API returns a 504 with `kernelTimeout: true`.
⚠️ Warning: This will delete unused images, containers, volumes, and can free up significant disk space.
Do this only if you no longer need the project or want to reclaim space.
Note: Deep-learning images (Torch / TF, CPU+GPU variants) can be quite large, so running
`docker system prune` from time to time is especially useful if you rebuild images frequently.
💡 Tip: You can also remove containers, images, and volumes directly from Docker Desktop by right-clicking and selecting Delete (trash icon).
However, this does not shrink the WSL `.vhdx` file; to reclaim disk space, follow the manual `prune` + `compact vdisk` steps below.
Make sure no containers are running, then run (in PowerShell or a terminal at the project root):
docker system prune -a --volumes -f
docker builder prune -a -f

- `--volumes` will also delete unused named volumes.
- This cleans up inside Docker’s storage, but does not shrink the physical `.vhdx` file used by WSL.
Windows does not automatically shrink the .vhdx file when space is freed.
You need to compact it manually.
Steps:
-
Close Docker Desktop completely (Quit from system tray).
-
Shut down WSL:
Open PowerShell, then run:
wsl --shutdown -
Find your
`ext4.vhdx` path; it’s usually one of these:
%LOCALAPPDATA%\Docker\wsl\data\ext4.vhdx %LOCALAPPDATA%\Docker\wsl\distro\docker-desktop-data\data\ext4.vhdx %LOCALAPPDATA%\Docker\wsl\main\ext4.vhdx %LOCALAPPDATA%\Docker\wsl\disk\ext4.vhdx
📍 Tip: You can quickly open the `AppData\Local` folder by typing `%AppData%` into the Windows search bar, then going one folder up and into `Local\Docker\wsl...` to find the `.vhdx` file.
Enable “Hidden items” in Explorer if you can’t see the folders.
-
Open PowerShell as Administrator and run:
diskpart
Inside
`diskpart`:

select vdisk file="C:\Users\<USERNAME>\AppData\Local\Docker\wsl\main\ext4.vhdx"
attach vdisk readonly
compact vdisk
detach vdisk
exit
-
Replace `<USERNAME>` with your Windows username.
-
If your `ext4.vhdx` is in the `disk` or `data` folder instead of `main`, adjust the path accordingly.
After compacting, start Docker Desktop again. WSL will start automatically, and Docker will recreate any missing images the next time you run:
docker compose up --build

- `docker system prune` → frees up space inside Docker’s virtual disk.
- `compact vdisk` → shrinks the `.vhdx` file itself, giving the freed space back to Windows.
The process is safe: it won’t remove active containers, images, or volumes you haven’t already deleted.
| Light theme | Dracula theme | Matrix theme | Neon theme |
|---|---|---|---|
| ![Light theme](./images/light.png) | ![Dracula theme](./images/dracula.png) | ![Matrix theme](./images/matrix.png) | ![Neon theme](./images/neon.png) |

| Nord theme | Solarized Light theme | One Dark Pro theme | Cyberpunk 2077 theme |
|---|---|---|---|
| ![Nord theme](./images/nord.png) | ![Solarized Light theme](./images/solarized.png) | ![One Dark Pro theme](./images/onedark.png) | ![Cyberpunk 2077 theme](./images/cyberpunk.png) |














