- Node v20.16.0
- pnpm 10.6.2
The following utility classes are used for responsive headings throughout the project:
| Class | Typical Pixel Size (max) | Usage Example |
|---|---|---|
| `.text-heading-h1` | 64px | Hero title |
| `.text-heading-h2` | 24px | Section titles |
| `.text-heading-h3` | 18px | Subtitles (e.g. "Latest job posts") |
| `.text-heading-h4` | 16px | Used for "Entry level software developer jobs (Remote & Hybrid)" |
| `.text-heading-h5` | 15px | FAQ titles |
| `.text-paragraph-section` | 18px (Regular) | Section lead paragraphs (Hero TitleWrapper paragraph) |
| `.text-paragraph` | 15px (Regular) | Inner wrapper text |
- These sizes are the maximum values defined by the `clamp()` function in the CSS.
- The actual size is responsive and may scale down on smaller screens.
- See `src/styles/globals.css` for the exact clamp values and responsive behavior.
| Color Name | Hex Value |
|---|---|
| Muted | `#f2f2f4` |
| Secondary | `#f9f9f9` |
| Foreground | `#1A1B25` |
| Accent | `#fff0ce` |
InternshipsHQ fetches job data from RapidAPI Active Jobs DB and stores it in PostgreSQL. The system is designed to work within a monthly budget of 20,000 requests and 20,000 jobs.
```
Cron Schedule → API Endpoints → Job Sync Service → Components → Database
                                       ↓
                     ┌─────────────────┼─────────────────┐
                     ↓                 ↓                 ↓
              RapidAPI Client    Job Processor     Usage Tracker
                     ↓                 ↓                 ↓
                Fetch Jobs    Transform & Upsert   Track Budget
                                       ↓
                                    LogSnag
                                  (Monitoring)
```
RapidAPI Client:
- Handles all HTTP requests to RapidAPI
- 4 endpoints: Hourly (1h), Daily (1d), Modified, Expired
- 30-second timeout with abort handling
- Parses rate limit headers from API responses
- Returns: Job data + real-time usage info
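The rate-limit parsing step could be sketched as below. Note that the exact header names (`x-ratelimit-requests-limit`, `x-ratelimit-requests-remaining`) are assumptions for illustration, not confirmed from the client code:

```typescript
// Hypothetical sketch: derive real-time usage info from rate-limit headers.
// Header names are assumptions; adjust to whatever RapidAPI actually returns.
interface UsageInfo {
  limit: number;
  remaining: number;
  used: number;
}

function parseRateLimitHeaders(headers: Record<string, string>): UsageInfo | null {
  const limit = Number(headers["x-ratelimit-requests-limit"]);
  const remaining = Number(headers["x-ratelimit-requests-remaining"]);
  // Missing or malformed headers: report "unknown" rather than bad numbers
  if (Number.isNaN(limit) || Number.isNaN(remaining)) return null;
  return { limit, remaining, used: limit - remaining };
}

const usage = parseRateLimitHeaders({
  "x-ratelimit-requests-limit": "20000",
  "x-ratelimit-requests-remaining": "19752",
});
```

Returning `null` on missing headers lets the caller fall back to database-tracked usage instead of trusting a bad parse.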
Job Processor:
- Transforms API format → database format (68 fields)
- Upsert strategy: `ON CONFLICT DO UPDATE` prevents duplicates
- Handles: NULL values, JSONB arrays, type conversions
- Soft deletes: marks jobs as expired instead of deleting
Usage Tracker:
- Manages monthly budget (20k requests, 20k jobs)
- Before API call: Checks if budget allows request
- After success: Records usage to database
- On failure: Logs error (doesn't count against budget)
- Important: Modified/Expired endpoints count towards Requests ONLY
LogSnag (Monitoring):
- Sends events to LogSnag for monitoring
- Channels: jobs, budget, sync, errors
- Budget warnings at 80% usage
- Never throws - fails silently to avoid breaking sync
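The "never throws" behavior can be sketched like this (names are illustrative, not the project's actual module):

```typescript
// Sketch of a fail-silent notifier: any failure is swallowed and logged to the
// console so a monitoring outage can never break a sync run.
type SendFn = (channel: string, event: string) => Promise<void>;

async function notifySafe(send: SendFn, channel: string, event: string): Promise<boolean> {
  try {
    await send(channel, event);
    return true;
  } catch (err) {
    // Deliberately swallow the error; monitoring is best-effort only.
    console.error("LogSnag notify failed (ignored):", err);
    return false;
  }
}

// Even a throwing sender resolves normally instead of propagating the error.
const ok = await notifySafe(
  async () => { throw new Error("LogSnag down"); },
  "errors",
  "sync_failed",
);
```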
Job Sync Service:
- Main orchestrator that coordinates all components
- Flow for each sync:
- Check budget (blocks if exceeded)
- Fetch from RapidAPI
- Process jobs (upsert)
- Record usage
- Log to LogSnag
- Return detailed result
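The six-step flow above can be sketched as a small orchestrator with injected dependencies (all names here are illustrative, not the project's real API):

```typescript
// Minimal sketch of the sync orchestration order; every component is stubbed.
interface SyncDeps {
  checkBudget: () => boolean;
  fetchJobs: () => Promise<unknown[]>;
  upsertJobs: (jobs: unknown[]) => Promise<{ inserted: number; updated: number }>;
  recordUsage: (requests: number, jobs: number) => Promise<void>;
  log: (event: string) => Promise<void>;
}

async function runSync(deps: SyncDeps) {
  // 1. Check budget — block the whole sync if exceeded
  if (!deps.checkBudget()) {
    await deps.log("budget_exceeded");
    return { ok: false as const, reason: "budget" };
  }
  const jobs = await deps.fetchJobs();          // 2. Fetch from RapidAPI
  const result = await deps.upsertJobs(jobs);   // 3. Process jobs (upsert)
  await deps.recordUsage(1, jobs.length);       // 4. Record usage
  await deps.log("sync_complete");              // 5. Log to LogSnag
  return { ok: true as const, ...result };      // 6. Return detailed result
}
```

Injecting the components makes the ordering (budget check before any API call, usage recorded only after success) easy to unit-test without hitting the network.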
All endpoints require authentication: `Authorization: Bearer <CRON_SECRET>`
| Endpoint | Purpose | Schedule | Budget Impact |
|---|---|---|---|
| `/api/cron/firehose` | Hourly jobs (1h) | Every hour | Requests + Jobs |
| `/api/cron/daily` | Daily jobs (24h) | Once daily | Requests + Jobs |
| `/api/cron/modified` | Modified jobs | Twice weekly | Requests ONLY |
| `/api/cron/expired` | Mark expired jobs | Daily at 2 AM | Requests ONLY |
| `/api/admin/backfill` | Manual sync (admin) | On-demand | Depends on endpoint |
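A hypothetical guard for the `CRON_SECRET` bearer check might look like this (the project's actual implementation may differ; the constant-time comparison is a suggested hardening, not confirmed from the code):

```typescript
// Sketch of an auth guard for the cron endpoints: compares the Authorization
// header against CRON_SECRET, using a constant-time check to avoid timing leaks.
import { timingSafeEqual } from "node:crypto";

function isAuthorized(authHeader: string | null, secret: string): boolean {
  if (!authHeader?.startsWith("Bearer ")) return false;
  const token = Buffer.from(authHeader.slice("Bearer ".length));
  const expected = Buffer.from(secret);
  // timingSafeEqual throws on length mismatch, so guard first
  return token.length === expected.length && timingSafeEqual(token, expected);
}
```

In a Next.js route handler this would be called at the top with `request.headers.get("authorization")` and `process.env.CRON_SECRET`, returning a 401 response when it fails.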
Recommended Production Schedule:
| Endpoint | Schedule (UTC) | Frequency | Jobs/Call | Total Jobs/Month | Rationale |
|---|---|---|---|---|---|
| `/api/cron/firehose` | `0 */4 * * *` | Every 4h | 100 | ~18,000 | Fresh jobs every 4 hours, no pagination needed |
| `/api/cron/daily` | `0 6 * * *` | Daily 6 AM | 200 (paginated) | ~6,000 | Catch-all for missed jobs, runs during low traffic |
| `/api/cron/modified` | `0 2 * * 1,4` | Mon & Thu | 500 | 0 | Update existing jobs twice weekly (doesn't count jobs) |
| `/api/cron/expired` | `0 3 * * *` | Daily 3 AM | 0 (IDs only) | 0 | Clean up expired jobs daily (doesn't count jobs) |
Budget Impact Analysis:
- Hourly: 6 calls/day × 30 days = 180 requests, ~18,000 jobs
- Daily: 30 requests, ~6,000 jobs (200 jobs/call × 30 days)
- Modified: 8 requests/month (2/week × 4 weeks), 0 jobs
- Expired: 30 requests/month, 0 jobs
- Total: ~248 requests/month, ~24,000 jobs/month
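The arithmetic above can be re-derived with a quick calculation (assuming a 30-day month):

```typescript
// Re-deriving the budget figures for the recommended schedule (30-day month).
const hourlyRequests = 6 * 30;            // firehose: 6 calls/day × 30 days
const hourlyJobs = hourlyRequests * 100;  // 100 jobs per call
const dailyRequests = 30;                 // one daily sync per day
const dailyJobs = dailyRequests * 200;    // 200 jobs per sync
const modifiedRequests = 2 * 4;           // twice weekly, ~4 weeks
const expiredRequests = 30;               // daily cleanup

const totalRequests =
  hourlyRequests + dailyRequests + modifiedRequests + expiredRequests;
const totalJobs = hourlyJobs + dailyJobs;

console.log(totalRequests, totalJobs); // 248 requests, 24000 jobs
```

The 24,000-job total is what pushes this schedule over the 20k jobs cap, motivating the adjusted schedule below.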
Since the above exceeds the 20k jobs limit, use this adjusted schedule:
| Endpoint | Schedule (UTC) | Frequency | Jobs/Call | Total Jobs/Month | Notes |
|---|---|---|---|---|---|
| `/api/cron/firehose` | `0 */6 * * *` | Every 6h | 100 | ~12,000 | 4 calls/day = fresh jobs without exceeding budget |
| `/api/cron/daily` | `0 6 * * *` | Daily 6 AM | 200 (paginated) | ~6,000 | 2 API calls per sync (offset pagination) |
| `/api/cron/modified` | `0 2 * * 1,4` | Mon & Thu | 500 | 0 | Doesn't consume jobs budget - updates only |
| `/api/cron/expired` | `0 3 * * *` | Daily 3 AM | 0 (IDs only) | 0 | Doesn't consume jobs budget - IDs only |
Final Budget: ~218 requests/month (120 firehose + 60 daily + 8 modified + 30 expired), ~18,000 jobs/month ✅ (within both 20k limits)
Pagination Strategy:
- Hourly: Single call (limit=100, no pagination - API hard limit)
- Daily: Automatic pagination to fetch 200 jobs (2 calls with offset: 0, 100)
- Modified: Single call (limit=500, usually sufficient for 24h changes)
- Expired: Single call (returns all expired job IDs, no pagination supported)
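The daily sync's offset pagination can be sketched as follows (function names are illustrative; `fetchPage` stands in for the real RapidAPI call):

```typescript
// Sketch of offset pagination: fetch `total` jobs in pages of `pageSize`,
// stopping early if the API returns a short page.
async function fetchPaginated(
  fetchPage: (offset: number, limit: number) => Promise<unknown[]>,
  total = 200,
  pageSize = 100,
): Promise<unknown[]> {
  const all: unknown[] = [];
  for (let offset = 0; offset < total; offset += pageSize) {
    const page = await fetchPage(offset, pageSize);
    all.push(...page);
    if (page.length < pageSize) break; // fewer results than requested: done
  }
  return all;
}
```

With the defaults this makes exactly two calls (offsets 0 and 100), matching the "2 API calls per sync" figure in the schedule above.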
Why This Works:
- Hourly firehose catches fresh jobs throughout the day (every 6 hours)
- Daily sync fills gaps with pagination (2 API calls for 200 jobs)
- Modified endpoint is budget-friendly (doesn't count towards jobs)
- Expired endpoint keeps database clean (doesn't count towards jobs)
- Total usage stays comfortably under 20k requests + 20k jobs limit
Monthly Limits: 20,000 requests | 20,000 jobs
Key Rules:
- Hourly/Daily endpoints: Count towards BOTH Requests AND Jobs
- Modified/Expired endpoints: Count towards Requests ONLY (NOT Jobs)
- Failed API calls: Don't count against budget
- Upsert: Running sync twice doesn't create duplicates
Why This Matters:
- Can make 20,000 Modified calls/month without burning Jobs budget
- Expired endpoint can run daily without affecting Jobs count
- Budget resets automatically each month (query-based, no manual reset)
| Table | Purpose |
|---|---|
| `jobs` | Job listings (68 fields) |
| `api_usage_logs` | Tracks all API calls + usage |
| `pending_payments` | Payment sync (guest → user) |
| `email_subscribers` | Daily digest subscribers |
| `alert_preferences` | Custom job alerts (pro users) |
Query these views directly from a database manager:
- `api_usage_monthly` - Monthly usage summary
- `api_usage_daily` - Daily breakdown by endpoint
- `job_stats_daily` - Daily job statistics
- `job_stats_by_source` - Stats grouped by source
- `budget_status_current` - Real-time budget at a glance
Problem: Running sync twice would create duplicate jobs.
Solution: `ON CONFLICT DO UPDATE`

```typescript
await db.insert(jobs).values(job).onConflictDoUpdate({
  target: jobs.id, // Conflict on job ID
  set: {
    /* update all 68 fields */
  },
});
```

Result: If the job exists, it's updated; if it's new, it's inserted. No duplicates!
- Budget exceeded: Blocks API call, sends alert to LogSnag
- API timeout: Aborts after 30 seconds, logs error
- Failed requests: Recorded but don't count against budget
- LogSnag failures: Logged to console, doesn't break sync
- Budget check: `GET /api/admin/backfill` (no API calls)
- Small test: `POST /api/admin/backfill` with `limit: 10`
- Upsert test: Run the same request twice, verify `inserted: 0, updated: 10`
- Monitor: Check database views and LogSnag dashboard
- Set `CRON_SECRET` in the environment
- Configure Railway cron jobs with schedules
- Monitor via database views and LogSnag
- Budget warnings trigger at 80% usage
- Types: `src/lib/types.ts`
- Schema: `src/server/db/schema/business.ts`
- Migrations: `src/server/db/migrations/`
- Cron Endpoints: `src/app/api/cron/`
- Admin API: `src/app/api/admin/backfill/route.ts`
For a detailed implementation guide, see `mds/MILESTONE_2_COMPLETE.md`
InternshipsHQ uses NextAuth.js v5 with three authentication providers:
- Email/Password (Credentials) - For traditional signup/login
- Google OAuth - For Google sign-in
- Discord OAuth - For Discord sign-in
Signup Flow:
- User calls `/api/auth/signup` with email + password (min 6 characters)
- System checks if the email already exists → error if yes
- Password is hashed using bcrypt (10 salt rounds)
- User record is created with `emailVerified` set to the current timestamp (no verification email sent)
- Database trigger checks for completed payments and auto-grants Pro status
- User then signs in via the credentials provider
Login Flow:
- User provides email + password via NextAuth credentials provider
- System checks if the user exists → error if not
- If the user has no password (OAuth-only user), an error message is returned
- Verifies password using bcrypt
- Creates session on success
Key Files:
- `/src/app/api/auth/signup/route.ts` - Signup API endpoint
- `/src/server/auth/config.ts` - NextAuth configuration (login only)
- `/src/lib/password.ts` - Password hashing utilities (bcrypt)
Setup:
- Requires `AUTH_GOOGLE_ID` and `AUTH_GOOGLE_SECRET` environment variables
- OAuth flow handled by NextAuth GoogleProvider
- User profile data auto-populated from Google
To Get Google OAuth Credentials:
- Go to Google Cloud Console
- Create new project or select existing
- Configure the OAuth consent screen (the legacy Google+ API has been shut down and is no longer required)
- Create OAuth 2.0 credentials
- Set the authorized redirect URI: `{YOUR_DOMAIN}/api/auth/callback/google`
Discord OAuth Setup:
- Already configured with `AUTH_DISCORD_ID` and `AUTH_DISCORD_SECRET`
- OAuth flow handled by NextAuth DiscordProvider
Problem: User can purchase Pro plan BEFORE creating an account (guest checkout).
Solution: Database trigger automatically links completed payments to new accounts.
Flow:
- Guest purchases Pro plan → payment marked as `completed` in the `pending_payments` table
- User signs up with the same email → trigger fires
- Trigger finds the matching `completed` payment
- Sets `hasPro = true` and `proPurchasedAt` on the user record
- Marks the payment as linked (`syncedAt` timestamp updated)
Migration File: `/src/server/db/migrations/update_payment_link_trigger.sql`
Trigger Details:
- Function: `sync_pending_payment_on_user_insert()`
- Fires: AFTER INSERT on the `users` table
- Looks for: payments with `status = 'completed'` (case-insensitive email match)
- Race condition protection: uses a `FOR UPDATE` lock
- Order: most recent payment first (`ORDER BY created_at DESC`)
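Based on the details above, a hypothetical reconstruction of the trigger function might look like the following; column casing and the exact SQL are assumptions, so treat the migration file as the authoritative definition:

```sql
-- Hypothetical sketch only; see update_payment_link_trigger.sql for the real trigger.
CREATE OR REPLACE FUNCTION sync_pending_payment_on_user_insert()
RETURNS trigger AS $$
DECLARE
  payment pending_payments%ROWTYPE;
BEGIN
  -- Most recent completed payment for this email, locked against concurrent linking
  SELECT * INTO payment
  FROM pending_payments
  WHERE lower(email) = lower(NEW.email)
    AND status = 'completed'
  ORDER BY created_at DESC
  LIMIT 1
  FOR UPDATE;

  IF FOUND THEN
    UPDATE users
    SET "hasPro" = true, "proPurchasedAt" = now()
    WHERE id = NEW.id;

    UPDATE pending_payments
    SET "syncedAt" = now()
    WHERE id = payment.id;
  END IF;

  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```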
Users Table Fields:
- `id` - UUID primary key
- `email` - Unique, not null
- `password` - VARCHAR(255), nullable (null for OAuth users)
- `name`, `image` - Profile data (from OAuth or manual)
- `emailVerified` - Timestamp (set immediately, no verification email)
- `hasPro` - Boolean (upgraded via payment linking)
- `proPurchasedAt` - Timestamp of Pro upgrade
- `stripeCustomerId`, `stripeSubscriptionId` - For subscription management
- `subscriptionStatus`, `subscriptionPeriodEnd` - Alert subscription tracking
Migration File: `/src/server/db/migrations/add_password_field_to_users.sql`
Session Callback: Enriches session with full user data from database.
Session Object Includes:
- `user.id` - User UUID
- `user.email`, `user.name`, `user.image` - Profile
- `user.hasPro` - Pro plan status (boolean)
- `user.proPurchasedAt` - When Pro was purchased
- `user.stripeCustomerId` - For customer portal access
- `user.subscriptionStatus` - Alert subscription status
- All other user table fields
- Password Hashing: Bcrypt with 10 salt rounds (industry standard)
- Minimum Password Length: 6 characters (enforced by Zod schema)
- OAuth Separation: OAuth users cannot use email/password login
- Case-Insensitive Email: Database trigger and lookups handle case variations
- Race Condition Protection: Payment linking uses a `FOR UPDATE` lock
```bash
# NextAuth
AUTH_SECRET=<random-secret-string>

# Google OAuth
AUTH_GOOGLE_ID=<google-client-id>
AUTH_GOOGLE_SECRET=<google-client-secret>

# Discord OAuth
AUTH_DISCORD_ID=<discord-client-id>
AUTH_DISCORD_SECRET=<discord-client-secret>
```

Key Files:
- Signup API: `/src/app/api/auth/signup/route.ts`
- Auth Config: `/src/server/auth/config.ts` (login only)
- Password Utils: `/src/lib/password.ts`
- User Schema: `/src/server/db/schema/users.ts`
- Migrations:
  - `/src/server/db/migrations/add_password_field_to_users.sql`
  - `/src/server/db/migrations/update_payment_link_trigger.sql`
  - `/src/server/db/migrations/sync_pending_payment_trigger.sql` (original trigger - replaced by update)
Test Email/Password Signup:
- Make sure `bcryptjs` is installed: `pnpm add bcryptjs @types/bcryptjs`
- Run migrations to add the password field
- Call `/api/auth/signup` with email + password
- Verify the password is hashed in the database (should start with `$2a$` or `$2b$`)
- Sign in with the same credentials using NextAuth
Test Payment Linking:
- Create a payment with status `completed` for a test email
- Sign up with the same email
- Verify the user has `hasPro = true` and `proPurchasedAt` set
- Verify the payment has a `syncedAt` timestamp
Test OAuth:
- Ensure Google/Discord OAuth credentials are set
- Test OAuth login flow
- Verify user profile data populated from provider
For detailed authentication implementation, see `mds/MILESTONE_5_AUTHENTICATION.md`