40 changes: 40 additions & 0 deletions apps/backend/supabase/migrations/0006_rls.sql
@@ -1,3 +1,36 @@
-- Audit logging function for security events
CREATE TABLE IF NOT EXISTS audit_logs (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    event_type TEXT NOT NULL,
    user_id UUID,
    resource_type TEXT,
    resource_id UUID,
    action TEXT NOT NULL,
    details JSONB,
    ip_address INET,
    created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
    created_by UUID DEFAULT auth.uid()
);

-- Create index on audit_logs for efficient querying
CREATE INDEX IF NOT EXISTS idx_audit_logs_user_id ON audit_logs(user_id);
CREATE INDEX IF NOT EXISTS idx_audit_logs_created_at ON audit_logs(created_at);
CREATE INDEX IF NOT EXISTS idx_audit_logs_event_type ON audit_logs(event_type);

-- Helper function to log security events
CREATE OR REPLACE FUNCTION log_security_event(
    event_type TEXT,
    action TEXT,
    resource_type TEXT DEFAULT NULL,
    resource_id UUID DEFAULT NULL,
    details JSONB DEFAULT NULL
) RETURNS VOID AS $$
BEGIN
    INSERT INTO audit_logs (event_type, user_id, resource_type, resource_id, action, details, created_by)
    VALUES (event_type, auth.uid(), resource_type, resource_id, action, details, auth.uid());
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

Comment on lines +1 to +33
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, find and read the migration file to verify its actual content
find . -type f -name "*0006_rls.sql" -o -name "*migrations*" | head -20

Repository: onlook-dev/onlook

Length of output: 174


🏁 Script executed:

# Read the migration file
cat -n apps/backend/supabase/migrations/0006_rls.sql 2>/dev/null || find . -type f -name "*0006*.sql" | head -5

Repository: onlook-dev/onlook

Length of output: 14845


🏁 Script executed:

# Search for all migration files to understand the structure
fd -e sql . apps/backend/supabase/migrations/ 2>/dev/null | head -20

Repository: onlook-dev/onlook

Length of output: 1202


🏁 Script executed:

# Look for canvas table policies mentioned in the scratchpad
rg "canvas" apps/backend/supabase/migrations/ -A 5 -B 2 --type sql 2>/dev/null

Repository: onlook-dev/onlook

Length of output: 18337


🏁 Script executed:

# Check for any other audit_logs references or RLS patterns
rg "audit_logs|SECURITY DEFINER" apps/backend/supabase/ -A 3 -B 2 --type sql 2>/dev/null | head -50

Repository: onlook-dev/onlook

Length of output: 3381


🌐 Web query:

PostgreSQL SECURITY DEFINER search_path security best practice

💡 Result:

Short best-practice checklist for SECURITY DEFINER + search_path (concise):

  1. Avoid SECURITY DEFINER unless needed; prefer SECURITY INVOKER or least-privilege roles. [1][6]
  2. Always set a safe search_path in the function definition (e.g. SET search_path = pg_catalog, pg_temp or trusted_schema, pg_temp) so writable/untrusted schemas are excluded/pg_temp is last. Prefer schema-qualified names for critical objects. [3][2][7]
  3. Do not rely on global/session search_path settings — enforce it per-function (CREATE FUNCTION ... SECURITY DEFINER SET search_path = ...). [3][2]
  4. Minimize the function owner's privileges; avoid making the owner a superuser. Use a dedicated low‑privilege role that owns only what the function needs. [6][5]
  5. Avoid or carefully sanitize dynamic SQL (EXECUTE, format()) inside SECURITY DEFINER functions. Validate inputs strictly. [1][6]
  6. Restrict EXECUTE: revoke PUBLIC then grant EXECUTE only to required roles (do create+privilege change in one transaction to avoid exposure window). [3]
  7. Revoke CREATE on public (or remove public from default search_path) and restrict who can create schemas/roles to reduce attack surface. Review database/user-level search_path settings. [4][7]
  8. Keep PostgreSQL and any vendor distributions patched for known search_path-related CVEs (e.g., CVE-2018-1058 family, CVE-2020-14349/14350, CVE-2023-2454/CVE-2023-41117) and audit existing SECURITY DEFINER functions after upgrades. [0][4][5][8]
  9. Periodically list and review SECURITY DEFINER functions and their owners/privileges (pg_proc / pg_roles). Monitor for unexpected writable schemas. [2][7]

Key references:

  • PostgreSQL docs: Writing SECURITY DEFINER functions safely. [3]
  • PostgreSQL search_path/security advisories & CVEs. [0][4][5][8]
  • Practical guides and analyses (abuse examples, mitigation patterns). [2][6]

If you want, I can:

  • scan your database (query list of SECURITY DEFINER functions + their SET search_path and owners) and show risky ones; or
  • produce a one‑line CREATE FUNCTION template that implements the recommended settings.

🌐 Web query:

Supabase audit logging table RLS security recommendations

💡 Result:

Short checklist — RLS for Supabase audit-log tables

  • Always enable RLS on any audit table. Create explicit policies before allowing anon/anon-like access. [1]. (supabase.com)
  • Make audit tables read-restricted: allow SELECT only to admin/service principals (check role via profiles/roles table or auth.uid() existence). Deny UPDATE/DELETE from non-admins (or use policies that only allow INSERT from app roles). [2][3]. (supadmin.dev)
  • Treat audit logs as write-once/append-only: disallow UPDATE/DELETE via RLS + DB-level constraints/triggers; store old/new JSON in the row on INSERT. [3]. (supabase.com)
  • Use Postgres roles with BYPASSRLS only for trusted backend/service accounts; never expose service keys to clients. Rotate & scope service keys. [4]. (supabase.com)
  • If policies need cross-table checks, use security-definer functions in a restricted schema (so policies don’t leak data) and test performance. [3]. (supabase.com)
  • Don’t rely on RLS alone for client-side filtering — add explicit filters in queries to avoid large scans and reduce policy evaluation cost. Index common columns (user_id, created_at, table_name). [3]. (supabase.com)
  • Auto-enforce RLS for tables created outside the dashboard (use an event trigger to enable RLS on CREATE TABLE) and alert on tables with RLS disabled. (Useful for CI/migrations.) [5]. (supaexplorer.com)
  • Keep audit/log storage and retention policies separate from DB when needed (Supabase platform/auth audit options); restrict who can export/drain logs. Review retention and plan-level limitations. [6]. (supabase.com)

If you want, I can:

  • generate example RLS policies (SELECT/INSERT/deny UPDATE/DELETE) for a concrete audit_log schema, or
  • produce event-trigger SQL to auto-enable RLS on new tables.

Pin search_path in all three SECURITY DEFINER functions and explicitly lock down audit_logs table access.

Three functions in this migration are marked SECURITY DEFINER without a pinned search_path: log_security_event() (line 32), user_has_project_access() (line 47), and user_has_canvas_access() (line 63). This is a known vulnerability; an unbound search_path in SECURITY DEFINER functions can be exploited via search_path hijacking attacks. Additionally, the audit_logs table has no RLS enabled and no explicit privilege controls, so it will inherit default PUBLIC grants and could expose sensitive audit data. The ip_address column is defined but never populated by the helper function.

Add SET search_path = pg_catalog, public; to all three function definitions:

$$ LANGUAGE plpgsql SECURITY DEFINER
SET search_path = pg_catalog, public;

For audit_logs, enable RLS and lock down access:

REVOKE ALL ON TABLE audit_logs FROM PUBLIC;
ALTER TABLE audit_logs ENABLE ROW LEVEL SECURITY;
CREATE POLICY "audit_logs_deny_all" ON audit_logs FOR ALL USING (false) WITH CHECK (false);
GRANT SELECT ON audit_logs TO service_role;  -- adjust role as needed

Schema-qualify the table reference in log_security_event(): change line 29 from INSERT INTO audit_logs to INSERT INTO public.audit_logs.

🤖 Prompt for AI Agents
In `@apps/backend/supabase/migrations/0006_rls.sql` around lines 1 - 33, The
SECURITY DEFINER functions log_security_event, user_has_project_access, and
user_has_canvas_access lack a pinned search_path and must be hardened: add "SET
search_path = pg_catalog, public;" to each function definition, schema-qualify
internal table references (e.g., change INSERT INTO audit_logs to INSERT INTO
public.audit_logs in log_security_event), and ensure the log_security_event
function populates ip_address (add an ip_address INET parameter or capture
inet_client_addr() and insert it into the audit record). Lock down the
audit_logs table by revoking public rights, enabling row level security,
creating a deny-all policy (FOR ALL USING (false) WITH CHECK (false)), and then
explicitly grant minimal required privileges (e.g., GRANT SELECT to the
service_role or appropriate role).
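
For illustration only, here is a minimal caller-side sketch of the "add an ip_address parameter" option mentioned above. It assumes the function is later extended with an ip_address parameter and that the application invokes it through supabase-js; both the parameter name and the calling client are assumptions, not part of this migration.

import { createClient } from '@supabase/supabase-js';

// Hypothetical usage sketch: assumes log_security_event() has been extended with
// an ip_address INET parameter as recommended above. Names are illustrative only.
const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!);

export async function recordFailedLogin(clientIp: string): Promise<void> {
    const { error } = await supabase.rpc('log_security_event', {
        event_type: 'auth',
        action: 'login_failed',
        details: { reason: 'invalid_password' },
        ip_address: clientIp, // assumed new parameter; the current function does not accept it
    });
    if (error) {
        console.error('Failed to write audit log:', error.message);
    }
}

If the inet_client_addr() approach is chosen instead, no caller changes are needed, since the function would capture the connection's address on its own.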

-- Helper function to check if user has specific roles for a project
CREATE OR REPLACE FUNCTION user_has_project_access(
    project_id_param UUID,
@@ -78,6 +111,13 @@ FOR SELECT
TO authenticated
USING (user_has_project_access(canvas.project_id, ARRAY['owner', 'admin']));

DROP POLICY IF EXISTS "canvas_deny_select_policy" ON canvas;
-- Deny SELECT access to unauthenticated users
CREATE POLICY "canvas_deny_select_policy" ON canvas
FOR SELECT
TO public
USING (false);

Comment on lines +114 to +120
⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

cd apps/backend/supabase/migrations && wc -l 0006_rls.sql

Repository: onlook-dev/onlook

Length of output: 77


🏁 Script executed:

head -n 150 apps/backend/supabase/migrations/0006_rls.sql | tail -n +100

Repository: onlook-dev/onlook

Length of output: 2057


🏁 Script executed:

cat -n apps/backend/supabase/migrations/0006_rls.sql | grep -A 50 "canvas"

Repository: onlook-dev/onlook

Length of output: 9750


🏁 Script executed:

find . -type f \( -name "*.ts" -o -name "*.js" -o -name "*.py" \) -not -path "*/node_modules/*" -not -path "*/.next/*" | head -20

Repository: onlook-dev/onlook

Length of output: 1127


🏁 Script executed:

rg "canvas" --type ts --type js -i -l | grep -E "(server|backend|api)" | head -10

Repository: onlook-dev/onlook

Length of output: 540


🏁 Script executed:

rg "from.*canvas|select.*canvas" --type ts --type js -i -A 2 | head -50

Repository: onlook-dev/onlook

Length of output: 3242


🏁 Script executed:

cat -n apps/web/client/src/server/api/routers/project/frame.ts | head -100

Repository: onlook-dev/onlook

Length of output: 3096


🏁 Script executed:

cat -n apps/web/client/src/server/api/routers/project/project.ts | head -80

Repository: onlook-dev/onlook

Length of output: 3487


🏁 Script executed:

rg "supabase|createClient" --type ts --type js apps/web/client/src/server -A 3 | head -60

Repository: onlook-dev/onlook

Length of output: 2555


🏁 Script executed:

find . -type f -name "client.ts" -path "*/db/*" | head -5

Repository: onlook-dev/onlook

Length of output: 88


🏁 Script executed:

cat -n packages/db/src/client.ts

Repository: onlook-dev/onlook

Length of output: 767


🏁 Script executed:

rg "createAdminClient|createClient" --type ts apps/web/client/src/utils/supabase -A 10

Repository: onlook-dev/onlook

Length of output: 6509


🏁 Script executed:

cat -n apps/web/client/src/server/api/trpc.ts | head -50

Repository: onlook-dev/onlook

Length of output: 1944


Consider using TO anon instead of TO public for clarity; however, note that the current policy does not affect internal backend operations.

The concern about privileged roles breaking is not applicable here—the backend accesses the database directly via a postgres connection (not RLS-aware), completely bypassing this policy. However, since the comment states the intent is to "Deny SELECT access to unauthenticated users," TO anon is more precise than TO public (which includes all roles). Use TO anon for clearer policy intent and better defense-in-depth practices.

Suggested adjustment
 CREATE POLICY "canvas_deny_select_policy" ON canvas
 FOR SELECT
-TO public
+TO anon
 USING (false);
🤖 Prompt for AI Agents
In `@apps/backend/supabase/migrations/0006_rls.sql` around lines 114 - 120, The
policy "canvas_deny_select_policy" currently targets TO public which is broader
than the intent; update the CREATE POLICY for "canvas_deny_select_policy" to
target TO anon instead of TO public so the rule clearly applies only to
unauthenticated users and matches the stated intent of denying SELECT to anon
users while preserving backend (postgres) access.

DROP POLICY IF EXISTS "canvas_update_policy" ON canvas;
-- 3. UPDATE: Allow users with 'owner' or 'admin' role in user_projects for the canvas's project
CREATE POLICY "canvas_update_policy" ON canvas
92 changes: 77 additions & 15 deletions apps/web/client/src/app/api/email-capture/route.ts
@@ -1,18 +1,69 @@
import { env } from '@/env';
import { z } from 'zod';

// Simple HTML sanitization function to prevent XSS
function sanitizeHtml(input: string): string {
    return input
        .replace(/&/g, '&amp;')
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;')
        .replace(/"/g, '&quot;')
        .replace(/'/g, '&#x27;')
        .replace(/\//g, '&#x2F;');
}

// Rate limiter: simple in-memory store (in production, use Redis or similar)
const requestCounts = new Map<string, number[]>();
const RATE_LIMIT_WINDOW_MS = 60000; // 1 minute
const RATE_LIMIT_MAX_REQUESTS = 5; // 5 requests per minute

function isRateLimited(clientIp: string): boolean {
    const now = Date.now();
    const windowStart = now - RATE_LIMIT_WINDOW_MS;

    if (!requestCounts.has(clientIp)) {
        requestCounts.set(clientIp, []);
    }

    const timestamps = requestCounts.get(clientIp)!;
    // Remove old timestamps outside the window
    const recentTimestamps = timestamps.filter(t => t > windowStart);

    if (recentTimestamps.length >= RATE_LIMIT_MAX_REQUESTS) {
        return true;
    }

    recentTimestamps.push(now);
    requestCounts.set(clientIp, recentTimestamps);
    return false;
}
Comment on lines +15 to +39
⚠️ Potential issue | 🟡 Minor

In-memory rate limiter has limitations in serverless/multi-instance deployments.

The comment acknowledges this, but there are additional concerns:

  1. Memory leak: Old IP entries are never fully removed from the Map. While timestamps are filtered per-request, IPs that stop making requests remain in the Map forever.

  2. Serverless ineffectiveness: In Next.js on Vercel or similar, each invocation may be a fresh instance, making this rate limiter ineffective.

Add periodic cleanup to prevent memory growth
+// Cleanup old entries periodically (every 100 requests)
+let requestCounter = 0;
+const CLEANUP_INTERVAL = 100;
+
+function cleanupOldEntries(): void {
+    const now = Date.now();
+    const windowStart = now - RATE_LIMIT_WINDOW_MS;
+    for (const [ip, timestamps] of requestCounts.entries()) {
+        const recent = timestamps.filter(t => t > windowStart);
+        if (recent.length === 0) {
+            requestCounts.delete(ip);
+        } else {
+            requestCounts.set(ip, recent);
+        }
+    }
+}
+
 function isRateLimited(clientIp: string): boolean {
+    if (++requestCounter % CLEANUP_INTERVAL === 0) {
+        cleanupOldEntries();
+    }
     const now = Date.now();
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// Rate limiter: simple in-memory store (in production, use Redis or similar)
const requestCounts = new Map<string, number[]>();
const RATE_LIMIT_WINDOW_MS = 60000; // 1 minute
const RATE_LIMIT_MAX_REQUESTS = 5; // 5 requests per minute

// Cleanup old entries periodically (every 100 requests)
let requestCounter = 0;
const CLEANUP_INTERVAL = 100;

function cleanupOldEntries(): void {
    const now = Date.now();
    const windowStart = now - RATE_LIMIT_WINDOW_MS;
    for (const [ip, timestamps] of requestCounts.entries()) {
        const recent = timestamps.filter(t => t > windowStart);
        if (recent.length === 0) {
            requestCounts.delete(ip);
        } else {
            requestCounts.set(ip, recent);
        }
    }
}

function isRateLimited(clientIp: string): boolean {
    if (++requestCounter % CLEANUP_INTERVAL === 0) {
        cleanupOldEntries();
    }
    const now = Date.now();
    const windowStart = now - RATE_LIMIT_WINDOW_MS;

    if (!requestCounts.has(clientIp)) {
        requestCounts.set(clientIp, []);
    }

    const timestamps = requestCounts.get(clientIp)!;
    // Remove old timestamps outside the window
    const recentTimestamps = timestamps.filter(t => t > windowStart);

    if (recentTimestamps.length >= RATE_LIMIT_MAX_REQUESTS) {
        return true;
    }

    recentTimestamps.push(now);
    requestCounts.set(clientIp, recentTimestamps);
    return false;
}
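
For the serverless/multi-instance concern, a shared store is the usual fix rather than in-process cleanup. Below is a hedged sketch using Upstash's rate-limit package; the @upstash/ratelimit and @upstash/redis dependencies and their environment variables are assumptions, not something this PR introduces.

import { Ratelimit } from '@upstash/ratelimit';
import { Redis } from '@upstash/redis';

// Shared sliding-window limiter backed by Redis, so limits hold across instances.
// Mirrors the in-memory policy of 5 requests per minute.
const ratelimit = new Ratelimit({
    redis: Redis.fromEnv(), // reads UPSTASH_REDIS_REST_URL / UPSTASH_REDIS_REST_TOKEN
    limiter: Ratelimit.slidingWindow(5, '1 m'),
});

async function isRateLimited(clientIp: string): Promise<boolean> {
    const { success } = await ratelimit.limit(clientIp);
    return !success;
}

Note that this version is async, so the POST handler would need to await it.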


export async function POST(request: Request) {
    try {
        // Extract client IP for rate limiting
        const clientIp = request.headers.get('x-forwarded-for')?.split(',')[0].trim() ||
            request.headers.get('x-real-ip') ||
            'unknown';

        // Check rate limit
        if (isRateLimited(clientIp)) {
            return new Response(JSON.stringify({ error: 'Too many requests' }), {
                status: 429,
                headers: { 'Content-Type': 'application/json' }
            });
        }

        const { name, email, utm_source, utm_medium, utm_campaign, utm_term, utm_content } = await request.json();

        // Create Zod schema for validation
        const emailCaptureSchema = z.object({
            name: z.string().trim().min(1, 'Name is required'),
            email: z.string().trim().email('Invalid email format'),
            utm_source: z.string().optional(),
            utm_medium: z.string().optional(),
            utm_campaign: z.string().optional(),
            utm_term: z.string().optional(),
            utm_content: z.string().optional(),
            name: z.string().trim().min(1, 'Name is required').max(255, 'Name is too long'),
            email: z.string().trim().email('Invalid email format').max(255, 'Email is too long'),
            utm_source: z.string().optional().max(255, 'utm_source is too long'),
            utm_medium: z.string().optional().max(255, 'utm_medium is too long'),
            utm_campaign: z.string().optional().max(255, 'utm_campaign is too long'),
            utm_term: z.string().optional().max(255, 'utm_term is too long'),
            utm_content: z.string().optional().max(255, 'utm_content is too long'),
        });

// Validate input data with Zod
@@ -36,6 +87,17 @@ export async function POST(request: Request) {

        const validatedData = validationResult.data;

        // Sanitize string fields to prevent XSS
        const sanitizedData = {
            name: sanitizeHtml(validatedData.name),
            email: validatedData.email, // Email is already validated and safe
            utm_source: validatedData.utm_source ? sanitizeHtml(validatedData.utm_source) : undefined,
            utm_medium: validatedData.utm_medium ? sanitizeHtml(validatedData.utm_medium) : undefined,
            utm_campaign: validatedData.utm_campaign ? sanitizeHtml(validatedData.utm_campaign) : undefined,
            utm_term: validatedData.utm_term ? sanitizeHtml(validatedData.utm_term) : undefined,
            utm_content: validatedData.utm_content ? sanitizeHtml(validatedData.utm_content) : undefined,
        };

        const headerName = env.N8N_LANDING_FORM_HEADER_NAME;
        const headerValue = env.N8N_LANDING_FORM_HEADER_VALUE;
        const landingFormUrl = env.N8N_LANDING_FORM_URL;
@@ -49,14 +111,14 @@ export async function POST(request: Request) {
}

        const url = new URL(landingFormUrl);
        url.searchParams.append('name', validatedData.name);
        url.searchParams.append('email', validatedData.email);

        if (validatedData.utm_source) url.searchParams.append('utm_source', validatedData.utm_source);
        if (validatedData.utm_medium) url.searchParams.append('utm_medium', validatedData.utm_medium);
        if (validatedData.utm_campaign) url.searchParams.append('utm_campaign', validatedData.utm_campaign);
        if (validatedData.utm_term) url.searchParams.append('utm_term', validatedData.utm_term);
        if (validatedData.utm_content) url.searchParams.append('utm_content', validatedData.utm_content);
        url.searchParams.append('name', sanitizedData.name);
        url.searchParams.append('email', sanitizedData.email);

        if (sanitizedData.utm_source) url.searchParams.append('utm_source', sanitizedData.utm_source);
        if (sanitizedData.utm_medium) url.searchParams.append('utm_medium', sanitizedData.utm_medium);
        if (sanitizedData.utm_campaign) url.searchParams.append('utm_campaign', sanitizedData.utm_campaign);
        if (sanitizedData.utm_term) url.searchParams.append('utm_term', sanitizedData.utm_term);
        if (sanitizedData.utm_content) url.searchParams.append('utm_content', sanitizedData.utm_content);

        // Build auth headers: use custom header if provided
        const authHeaders: Record<string, string> = {};
apps/web/client/src/server/api/routers/domain/adapters/freestyle.ts
@@ -6,8 +6,60 @@ import type {
    DeploymentResponse
} from '@onlook/models';

// File upload validation constants
const MAX_FILE_SIZE = 50 * 1024 * 1024; // 50MB per file
const MAX_TOTAL_SIZE = 500 * 1024 * 1024; // 500MB total
const MAX_FILES = 1000;
const ALLOWED_FILE_TYPES = [
    'text/html',
    'text/css',
    'application/javascript',
    'text/javascript',
    'application/json',
    'image/jpeg',
    'image/png',
    'image/gif',
    'image/svg+xml',
    'application/pdf',
];

export class FreestyleAdapter implements HostingProviderAdapter {
    private validateFileUploads(files: Record<string, Blob>): void {
        // Validate number of files
        const fileCount = Object.keys(files).length;
        if (fileCount === 0) {
            throw new Error('No files provided');
        }
        if (fileCount > MAX_FILES) {
            throw new Error(`Too many files. Maximum allowed: ${MAX_FILES}`);
        }

        // Validate individual files and calculate total size
        let totalSize = 0;
        for (const [filename, file] of Object.entries(files)) {
            // Validate file size
            if (file.size > MAX_FILE_SIZE) {
                throw new Error(`File "${filename}" exceeds maximum size of ${MAX_FILE_SIZE / 1024 / 1024}MB`);
            }

            // Validate file type
            if (file.type && !ALLOWED_FILE_TYPES.includes(file.type)) {

In validateFileUploads, files with missing or empty MIME types bypass type checks. Consider explicitly handling empty file.type if that’s undesired.

Suggested change
if (file.type && !ALLOWED_FILE_TYPES.includes(file.type)) {
if (!file.type || !ALLOWED_FILE_TYPES.includes(file.type)) {

                throw new Error(`File type "${file.type}" for "${filename}" is not allowed`);
            }

            totalSize += file.size;
        }

        // Validate total size
        if (totalSize > MAX_TOTAL_SIZE) {
            throw new Error(`Total file size exceeds maximum of ${MAX_TOTAL_SIZE / 1024 / 1024}MB`);
        }
    }
Comment on lines +27 to +57

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Find the file and check its structure
cd apps/web/client/src/server/api/routers/domain/adapters && pwd && ls -la | head -20

Repository: onlook-dev/onlook

Length of output: 307


🏁 Script executed:

#!/bin/bash
# Look for DeploymentFile interface/type definition
rg -n 'interface DeploymentFile|type DeploymentFile' --type=ts -A 5

Repository: onlook-dev/onlook

Length of output: 429


🏁 Script executed:

#!/bin/bash
# Check the freestyle.ts file for the method signature and imports
cat -n apps/web/client/src/server/api/routers/domain/adapters/freestyle.ts | head -80

Repository: onlook-dev/onlook

Length of output: 3172


🏁 Script executed:

#!/bin/bash
# Find DeploymentRequest interface
rg -n 'interface DeploymentRequest|type DeploymentRequest' --type=ts -A 10

Repository: onlook-dev/onlook

Length of output: 724


🏁 Script executed:

#!/bin/bash
# Also check the freestyle-sandboxes SDK to understand what deployWeb expects
rg -n 'deployWeb' --type=ts -B 2 -A 2

Repository: onlook-dev/onlook

Length of output: 563


Critical type mismatch breaks entire file validation logic.

The method signature expects Record<string, Blob> but receives Record<string, DeploymentFile> (from DeploymentRequest.files). These types are incompatible:

  • DeploymentFile has content: string and optional encoding
  • Blob has size and type properties

As a result, the validation logic fails at runtime:

  • Line 41: file.size is undefined for DeploymentFile objects
  • Line 46: file.type is undefined, so the truthiness guard short-circuits and the type check is skipped, allowing all files through

This breaks both file size validation and file type allowlisting. The method must either accept and validate DeploymentFile objects directly (checking content length and encoding), or convert them to Blob objects before validation.

🤖 Prompt for AI Agents
In `@apps/web/client/src/server/api/routers/domain/adapters/freestyle.ts` around
lines 27 - 57, validateFileUploads currently types files as Record<string, Blob>
but DeploymentRequest.files supplies Record<string, DeploymentFile>, so change
validateFileUploads to accept Record<string, DeploymentFile> (or add an
overload) and perform validation on DeploymentFile fields: compute byte size
from file.content (if file.encoding === 'base64' compute decoded byte length via
base64 length formula or decode to bytes; otherwise use UTF-8 byte length),
enforce MAX_FILE_SIZE and MAX_TOTAL_SIZE using those computed sizes, and
validate file type/allowed list by either using a mime field on DeploymentFile
or by mapping known encodings to allowed types; alternatively, if you prefer
keeping Blob logic, convert each DeploymentFile to a Blob (new
Blob([decodedBytes], { type: inferredMime })) before running the existing
checks. Ensure you update the function signature and references to
validateFileUploads accordingly.
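
As one possible shape of that fix, here is a sketch that validates DeploymentFile entries directly by computing byte sizes from content and encoding. The DeploymentFile type below is inferred from the description above (content: string plus an optional encoding), and the helper names are assumptions; the real type lives in @onlook/models.

// Inferred shape for illustration; the actual type is defined in @onlook/models.
type DeploymentFile = { content: string; encoding?: 'base64' | 'utf-8' };

function fileByteSize(file: DeploymentFile): number {
    if (file.encoding === 'base64') {
        // Decoded length: 3 bytes per 4 base64 characters, minus padding.
        const padding = (file.content.match(/=+$/)?.[0] ?? '').length;
        return Math.floor((file.content.length * 3) / 4) - padding;
    }
    return Buffer.byteLength(file.content, 'utf8');
}

// Uses the MAX_FILES / MAX_FILE_SIZE / MAX_TOTAL_SIZE constants defined earlier in the module.
function validateDeploymentFiles(files: Record<string, DeploymentFile>): void {
    const entries = Object.entries(files);
    if (entries.length === 0) throw new Error('No files provided');
    if (entries.length > MAX_FILES) throw new Error(`Too many files. Maximum allowed: ${MAX_FILES}`);

    let totalSize = 0;
    for (const [filename, file] of entries) {
        const size = fileByteSize(file);
        if (size > MAX_FILE_SIZE) {
            throw new Error(`File "${filename}" exceeds maximum size of ${MAX_FILE_SIZE / 1024 / 1024}MB`);
        }
        totalSize += size;
    }
    if (totalSize > MAX_TOTAL_SIZE) {
        throw new Error(`Total file size exceeds maximum of ${MAX_TOTAL_SIZE / 1024 / 1024}MB`);
    }
}

Type allowlisting would need a separate source of truth (for example, a mime field or the filename extension), since DeploymentFile as described carries no MIME type.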


    async deploy(request: DeploymentRequest): Promise<DeploymentResponse> {
        // Validate file uploads before processing
        this.validateFileUploads(request.files);

        const sdk = initializeFreestyleSdk();

        const res = await sdk.deployWeb(