📝 Walkthrough

Adds a large set of backend features (AI chat with streaming and persistence, analytics/profile/session refactor, revenue integrations, experiments, feature flags, goals), replaces CAPTCHA with a PoW flow, adds pinning/OG improvements, many DTO/entity/controller changes, and removes marketplace modules. Adds request-metadata propagation and telemetry across controllers.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    rect rgba(0,128,128,0.06)
    participant Client
    participant AiController
    participant AiService
    participant OpenRouter
    participant AiChatService
    participant DB
    end
    Client->>AiController: POST /ai/:pid/chat (messages, timezone, chatId?)
    AiController->>AiController: validate access, rate-limit, billing, check OPENROUTER_API_KEY
    AiController->>AiService: chat(project, messages, timezone)
    AiService->>OpenRouter: stream request (system prompt + tools)
    OpenRouter-->>AiService: stream events (deltas, tool_calls, results, errors)
    AiService->>AiChatService: persist/update chat record
    AiChatService->>DB: save AiChat
    AiService-->>AiController: stream events
    AiController-->>Client: SSE chunks (deltas, tool_calls, errors, finish)
```
```mermaid
sequenceDiagram
    rect rgba(128,64,64,0.06)
    participant Client
    participant HeartbeatGateway
    participant AnalyticsService
    participant SaltService
    participant Redis
    participant ClickHouse
    participant DB
    end
    Client->>HeartbeatGateway: WebSocket connect (pid, headers)
    HeartbeatGateway->>SaltService: getSaltForSession()
    SaltService->>Redis: GET salt:daily
    alt cached
        Redis-->>SaltService: salt
    else
        SaltService->>DB: read/upsert Salt
        SaltService->>Redis: SET salt:daily
    end
    HeartbeatGateway->>AnalyticsService: getSessionId / generateProfileId
    AnalyticsService->>Redis: NX-SET psid TTL / extend TTL
    AnalyticsService->>ClickHouse: INSERT session activity (psid, profileId, pid, ts)
    ClickHouse-->>AnalyticsService: OK
    HeartbeatGateway-->>Client: connected + periodic heartbeat acks
```
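The NX-SET / TTL-extension step in the diagram can be modeled as a small pure function. This is a hedged illustration only: `setNxWithTtl` and the in-memory `Map` stand in for the actual Redis calls, which are not shown in this review.

```typescript
// Hedged sketch: session-id allocation with Redis SET NX + TTL semantics.
// The Map is an in-memory stand-in for Redis; names are illustrative.
const store = new Map<string, { value: string; expiresAt: number }>()

function setNxWithTtl(
  key: string,
  value: string,
  ttlMs: number,
  now: number,
): boolean {
  const entry = store.get(key)
  if (entry && entry.expiresAt > now) {
    entry.expiresAt = now + ttlMs // session alive: only extend TTL
    return false // NX: key already exists, value untouched
  }
  store.set(key, { value, expiresAt: now + ttlMs }) // new session
  return true
}
```

A returned `true` corresponds to a fresh session id being allocated; `false` means an existing session was extended, mirroring the "NX-SET psid TTL / extend TTL" arrow above.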
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs
Pre-merge checks and finishing touches

❌ Failed checks (2 inconclusive)
✅ Passed checks (1 passed)
Feature/adjustable salt setting
UI updates
Actionable comments posted: 5
🧹 Nitpick comments (10)
backend/apps/cloud/src/captcha/captcha.controller.ts (1)
25-38: Consider externalizing rate limit constants to configuration.

The rate limit thresholds are hardcoded at the module level. While functional, consider moving these to a centralized configuration file or environment variables for easier tuning across environments without code changes.
💡 Example configuration approach

You could define these in a config service or environment-based configuration:

```ts
// captcha.config.ts
export const captchaConfig = {
  rateLimit: {
    generate: {
      ip: { requests: 30, timeout: 60 },
      pid: { requests: 100, timeout: 60 },
    },
    verify: {
      ip: { requests: 60, timeout: 60 },
    },
    validate: {
      ip: { requests: 120, timeout: 60 },
    },
  },
}
```

backend/apps/cloud/src/ai/ai-chat.service.ts (4)
21-38: Consider throwing an exception instead of silently returning empty array.

Lines 26-28 return an empty array when `userId` is null. This silent failure mode may mask issues where a caller expects data but receives an empty result. Consider either:

- Throwing a `BadRequestException` or `UnauthorizedException` to make the failure explicit
- Documenting this behavior clearly in JSDoc comments

The same applies to `findAllByProject` (lines 46-48).

🔎 Proposed fix: throw explicit exception

```diff
 async findRecentByProject(
   projectId: string,
   userId: string | null,
   limit: number = 5,
 ): Promise<AiChat[]> {
   if (!userId) {
-    return []
+    throw new UnauthorizedException(
+      'Authentication required to access chat history',
+    )
   }
   const queryBuilder = this.aiChatRepository
```
84-93: Add comment to clarify auto-naming preservation logic.

Lines 84-89 implement logic to preserve user-customized chat names while auto-regenerating default names. This is good design but the intent isn't immediately clear. Consider adding a comment:

```ts
// Only auto-regenerate name if it was previously auto-generated
// (preserves user-customized names)
```
103-110: Extract magic number to named constant.

Line 107 uses the magic number `100` for max chat name length. Consider extracting this to a named constant at the class level for better maintainability:

```ts
private readonly MAX_CHAT_NAME_LENGTH = 100
```
112-129: Add JSDoc to clarify authenticated-user-only behavior.

This method returns `null` for anonymous users (line 117), meaning only authenticated users can use it to access chats. The method name `verifyAccess` doesn't communicate this restriction clearly. Consider adding JSDoc:

```ts
/**
 * Verifies authenticated user access to a chat.
 * Returns null for anonymous users (use verifyPublicChatAccess instead).
 * Allows access to user-owned chats or shared (NULL owner) chats.
 */
```

backend/apps/cloud/src/ai/ai.controller.ts (5)
41-46: Consider separate timeout for read operations.

Lines 45-46 define read rate limits but reuse `AI_CHAT_RATE_LIMIT_TIMEOUT` from line 43. Read operations typically have different timeout characteristics than write operations. Consider defining:

```ts
const AI_READ_RATE_LIMIT_TIMEOUT = 3600
```

This makes the rate limit configuration more explicit and easier to tune independently.
122-134: Extract message transformation to a private helper method.

Lines 122-134 contain message filtering, mapping, and validation logic that could be reused. Consider extracting to a private method:

```ts
private prepareMessages(chatDto: ChatDto): ChatMessage[] {
  const messages = chatDto.messages
    .filter(m => m.content && m.content.trim().length > 0)
    .map(m => ({
      role: m.role as 'user' | 'assistant',
      content: m.content,
    }))

  if (messages.length === 0) {
    throw new HttpException(
      'At least one message with content is required',
      HttpStatus.BAD_REQUEST,
    )
  }

  return messages
}
```
151-276: Extract stream processing to a private method.

The stream processing loop (lines 151-276) is over 100 lines and handles multiple event types. This makes the `chat` method difficult to read and test. Consider extracting to a private method:

```ts
private async processAiStream(
  result: any,
  res: Response,
  pid: string,
  uid: string | null,
  ip: string,
  headers: Record<string, string>,
): Promise<void>
```

Additionally, the type casting at lines 224, 232-235, 242-244 suggests incomplete type definitions for stream parts. Consider defining proper types for the `finish` and `step-finish` events.
457-465: Access control logic is correct but complex.

Lines 457-465 implement conditional access checks based on project visibility and authentication status:

- Public projects with no auth: `verifyPublicChatAccess` (NULL owner chats only)
- Public projects with auth: `verifyAccess` (user-owned or NULL owner chats)
- Private projects: `verifyOwnerAccess` (strict ownership)

This logic is correct but the nested conditionals make it hard to verify at a glance. Consider extracting to a helper method or adding a comment explaining the access matrix.
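One way to make the access matrix explicit is a small selector helper. A minimal sketch, assuming hypothetical names (`resolveChatAccessCheck` is not in the codebase; the three returned labels correspond to the `verify*` methods discussed above):

```typescript
// Hedged sketch: pick the right access check from the project/auth matrix.
type AccessCheck =
  | 'verifyPublicChatAccess'
  | 'verifyAccess'
  | 'verifyOwnerAccess'

function resolveChatAccessCheck(
  isProjectPublic: boolean,
  isAuthenticated: boolean,
): AccessCheck {
  if (!isProjectPublic) return 'verifyOwnerAccess' // private: strict ownership
  return isAuthenticated ? 'verifyAccess' : 'verifyPublicChatAccess'
}
```

A helper like this turns the nested conditionals into a single table-like function that can be unit-tested against all four matrix cells.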
587-594: Simplify redundant null check.

Lines 587-594 call `update` and then check if the result is null. However, lines 81-82 in `ai-chat.service.ts` already return `null` if the chat isn't found. The check at lines 577-585 already verifies existence, so the update should never fail to find the chat unless there's a race condition.

Consider handling this more explicitly:

```ts
const chat = await this.aiChatService.update(chatId, {
  messages: updateChatDto.messages,
  name: updateChatDto.name,
})

// Should not happen since we verified access above
if (!chat) {
  throw new HttpException(
    'Chat was deleted during update',
    HttpStatus.CONFLICT,
  )
}
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- backend/apps/cloud/src/ai/ai-chat.service.ts
- backend/apps/cloud/src/ai/ai.controller.ts
- backend/apps/cloud/src/analytics/analytics.service.ts
- backend/apps/cloud/src/captcha/captcha.controller.ts
🧰 Additional context used
🧬 Code graph analysis (1)
backend/apps/cloud/src/ai/ai-chat.service.ts (1)
backend/apps/cloud/src/ai/entity/ai-chat.entity.ts (1)
- `ChatMessage` (14-17)
🔇 Additional comments (5)
backend/apps/cloud/src/captcha/captcha.controller.ts (1)
109-122: No security concern here.

The dummy PIDs (`AP00000000000` and `FAIL000000000`) are 13 characters, while the `PID_REGEX` requires exactly 12 characters (`/^(?!.*--)[a-zA-Z0-9-]{12}$/`). Legitimate projects cannot be created with PIDs matching these values, so collision is impossible.

backend/apps/cloud/src/ai/ai-chat.service.ts (4)
131-148: LGTM - Clear ownership verification with good documentation.

The JSDoc comment clearly explains the purpose and the implementation correctly enforces strict ownership checks for mutating operations.

150-164: LGTM - Good privacy-preserving design.

The JSDoc clearly explains the share-by-id pattern, and the implementation correctly prevents enumeration while allowing direct access to anonymous chats.

166-180: LGTM - Project-scoped access check is correct.

Implementation matches the documented intent for project-level shared access.
62-75: Validation of project and user references is already performed at the controller level.

The controller validates that the project exists (throwing `NotFoundException` if not) before calling this service method. Additionally, the authenticated `userId` is guaranteed to exist. Database-level foreign key constraints via `@ManyToOne` relationships will enforce referential integrity if an edge case occurs (e.g., project deleted between validation and save), so the current implementation is safe.

No changes needed.
Actionable comments posted: 2
🧹 Nitpick comments (2)
backend/apps/cloud/src/experiment/experiment.service.ts (1)
99-108: Consider improving return type specificity.

The `update` method returns `Promise<any>`, which loses type safety. TypeORM's `update` returns `UpdateResult`, so the return type could be `Promise<UpdateResult>` for better type checking and IDE support.

🔎 Proposed refinement

```diff
+import { UpdateResult } from 'typeorm'

 async update(
   id: string,
   experimentData: Partial<Experiment>,
   manager?: EntityManager,
-): Promise<any> {
+): Promise<UpdateResult> {
   const repository = manager
     ? manager.getRepository(Experiment)
     : this.experimentRepository
   return repository.update(id, experimentData)
 }
```

Apply similar changes to `updateVariant` and other update methods.

backend/apps/cloud/src/experiment/experiment.controller.ts (1)
766-774: Consider wrapping flag and status updates in a transaction.

The feature flag update (line 767) and experiment status update (line 772) are separate operations. If the experiment update fails after the flag is disabled, the system will be in an inconsistent state (flag disabled, experiment still marked as running).

🔎 Suggested transactional wrapping

```diff
-if (experiment.featureFlag) {
-  await this.featureFlagService.update(experiment.featureFlag.id, {
-    enabled: false,
-  })
-}
-
-await this.experimentService.update(id, {
-  status: ExperimentStatus.PAUSED,
-})
+await this.dataSource.transaction(async transactionalEntityManager => {
+  if (experiment.featureFlag) {
+    await this.featureFlagService.update(
+      experiment.featureFlag.id,
+      { enabled: false },
+      transactionalEntityManager,
+    )
+  }
+
+  await this.experimentService.update(
+    id,
+    { status: ExperimentStatus.PAUSED },
+    transactionalEntityManager,
+  )
+})
```

Apply the same pattern to `completeExperiment` at lines 825-834.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (14)
- backend/apps/cloud/src/experiment/experiment.controller.ts
- backend/apps/cloud/src/experiment/experiment.service.ts
- backend/apps/cloud/src/project/dto/project.dto.ts
- backend/apps/cloud/src/project/entity/project.entity.ts
- backend/apps/cloud/src/project/project.controller.ts
- backend/apps/community/src/project/dto/project.dto.ts
- backend/apps/community/src/project/entity/project.entity.ts
- backend/apps/community/src/project/project.controller.ts
- backend/migrations/clickhouse/initialise_selfhosted.js
- backend/migrations/clickhouse/selfhosted_2025_12_24_website_url.js
- backend/migrations/mysql/2025_12_24_website_url.sql
- web/app/api/index.ts
- web/app/lib/models/Project.ts
- web/app/pages/Dashboard/ProjectCard.tsx
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (26)
backend/apps/cloud/src/project/entity/project.entity.ts (3)
63-72: CAPTCHA configuration looks good.

The type mismatch from the previous review has been resolved: `captchaSecretKey` is now correctly typed as `string | null` to match the nullable column definition. The `captchaDifficulty` field with a tinyint type and default of 4 is appropriate for proof-of-work difficulty levels.

110-120: New entity relations are well-defined.

The OneToMany relationships for goals, feature flags, and experiments are properly configured with appropriate decorators and types.

160-181: Revenue integration and websiteUrl fields are correctly defined.

All new fields are properly typed as nullable and have appropriate column definitions. The `revenueCurrency` field with `varchar(3)` aligns with ISO 4217 currency code standards, and the encrypted API key fields follow a clear naming convention.

backend/apps/cloud/src/project/project.controller.ts (10)
188-198: Pagination logic simplified effectively.

The removal of mode-based branching in favor of a unified pagination path improves maintainability while preserving the sort parameter.

228-260: Pin/unpin endpoints are well-implemented.

The new pin/unpin functionality properly validates project IDs, checks permissions, and tracks analytics events. Using `allowedToView` for pin permissions is appropriate since pinning is a user-specific preference rather than a project management action.

387-389: Analytics tracking for project creation is correctly implemented.

The `PROJECT_CREATED` event is tracked after successful project creation with proper IP and user-agent metadata.

450-455: Funnel analytics includes valuable metadata.

Tracking the number of steps in funnel creation/deletion events provides useful insights for understanding feature usage patterns.

981-989: JSON parsing error handling is appropriate.

The try-catch block around `JSON.parse` with a clear error message prevents unhandled exceptions from malformed filter input.

1185-1195: Critical security validation added for share acceptance.

The new check ensures that only the user for whom the share token was created can accept it, preventing unauthorized access through stolen or intercepted tokens. This is an essential security improvement.
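The shape of the validation being praised can be sketched as a small guard. This is a hedged illustration: `assertShareBelongsToUser` and the `ProjectShare` interface are hypothetical names, not the controller's actual code.

```typescript
// Hedged sketch: reject share-token acceptance when the invite was issued
// for a different user. Names are illustrative, not from the codebase.
interface ProjectShare {
  userId: string // user the share token was created for
}

function assertShareBelongsToUser(
  share: ProjectShare,
  currentUserId: string,
): void {
  if (share.userId !== currentUserId) {
    throw new Error('This invitation was issued for a different account')
  }
}
```

The key property is that a stolen or forwarded token is useless to anyone other than the invited user, because acceptance is tied to the authenticated user id rather than possession of the token alone.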
1605-1607: Project deletion analytics provides useful insights.

Tracking project deletions helps understand user churn and retention patterns.

1835-1839: WebsiteUrl update logic is correct.

The conditional logic properly handles trimming and null conversion for the `websiteUrl` field, with the `!== undefined` check ensuring the field is only updated when explicitly provided.
1925-1934: CAPTCHA data existence check is well-integrated.

The new `isCaptchaDataExists` check follows the same pattern as the existing analytics and error data checks, maintaining consistency across the codebase.

2042-2049: Project view analytics captures useful context.

The segment creation event includes valuable metadata about view type, filters, and custom events, enabling detailed analysis of how users organize their data.
backend/apps/cloud/src/experiment/experiment.service.ts (4)

1-20: LGTM!

The service setup follows standard NestJS patterns with proper TypeORM repository injections. Import formatting has been addressed per previous review feedback.

22-87: LGTM!

The read operations are well-implemented with proper query parameterization, relation loading, and search functionality. The use of QueryBuilder with left joins ensures efficient data retrieval.

114-141: LGTM!

Variant CRUD operations are straightforward and follow standard repository patterns. The `deleteVariantsByExperiment` method properly queries by relation.

143-174: Excellent transactional implementation!

The `recreateVariants` method properly handles atomicity by wrapping the delete-and-recreate operation in a transaction. The dual-mode support (accepting an external manager or creating its own transaction) enables both standalone usage and composition within larger transactional operations.
1-88: LGTM!

Controller setup is comprehensive with proper dependency injection including DataSource for transaction management. The use of multiple services reflects the complex domain logic around experiments.

90-140: LGTM!

The pagination endpoint properly validates and sanitizes input parameters, enforces permissions, and cleanly maps internal entities to DTOs. The take/skip bounds prevent excessive data retrieval.

173-372: Well-implemented with proper transactional safety!

The create endpoint demonstrates excellent practices:

- Comprehensive validation (variants, percentages, goal, feature flag)
- Rate limiting and billing checks
- Lines 337-356: Transactional wrapping ensures experiment creation and feature flag update are atomic

This addresses the transactional safety concerns from previous reviews.

374-561: Excellent transactional implementation!

The update endpoint properly wraps all related operations in a transaction:

- Variant recreation via `recreateVariants` (lines 455-459)
- Feature flag mode switching (lines 500-516)
- Experiment property updates

This ensures atomicity and addresses all previous transactional safety concerns.

563-608: LGTM!

The delete operation properly handles feature flag cleanup based on the link mode. While feature flag cleanup happens before experiment deletion (potential inconsistency if deletion fails), this is acceptable since an orphaned flag with no experiment is relatively harmless.

610-731: Excellent transactional safety in start operation!

The start endpoint properly addresses previous concerns:

- Lines 654-711: Transaction ensures feature flag creation/enablement and experiment status update are atomic
- Validates goal existence before starting
- Handles both CREATE and LINK feature flag modes
- Prevents invalid state transitions

847-1105: LGTM! Previous concerns addressed.

The results endpoint is well-implemented:

- Lines 1067-1072: Guard against empty `variantResults` prevents the runtime error from `reduce` on empty arrays
- ClickHouse queries are properly parameterized
- Bayesian probability calculations are integrated
- Error handling for query failures

The empty array guard addresses the concern from previous reviews.

1110-1322: LGTM!

The chart generation logic is sophisticated and correct:

- Uses subqueries with `min(created)` to ensure unique profile counting across time buckets
- Cumulative exposure/conversion tracking preserves data integrity
- Properly handles sparse data with safe defaults

The comment at lines 1126-1131 clearly explains the rationale for the subquery approach.

1324-1403: LGTM!

Helper methods for time bucketing and date formatting are clean and maintainable. The switch statements provide clear mappings for each time bucket type, and the date formatting includes proper zero-padding.
Actionable comments posted: 3
♻️ Duplicate comments (2)
backend/apps/cloud/src/goal/goal.controller.ts (2)
219-224: This issue was previously flagged: Incorrect HTTP status code for goal limit.

As noted in the previous review, `PAYMENT_REQUIRED` (402) is inappropriate for a resource limit. Use `HttpStatus.FORBIDDEN` (403) instead.
331-364: This issue was previously flagged: Missing REGEX match type handling.

As noted in the previous review, the `GoalMatchType.REGEX` case is not explicitly handled and will fall through to `CONTAINS` logic, causing incorrect behavior.
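The explicit handling being asked for can be sketched as an exhaustive switch. A hedged illustration, assuming simplified names (`matchesGoal` and the string-literal `GoalMatchType` are stand-ins for the controller's actual types):

```typescript
// Hedged sketch of explicit per-match-type handling, including REGEX.
type GoalMatchType = 'EXACT' | 'CONTAINS' | 'REGEX'

function matchesGoal(
  value: string,
  matchType: GoalMatchType,
  pattern: string,
): boolean {
  switch (matchType) {
    case 'EXACT':
      return value === pattern
    case 'CONTAINS':
      return value.includes(pattern)
    case 'REGEX':
      try {
        return new RegExp(pattern).test(value)
      } catch {
        return false // invalid user-supplied pattern: no match
      }
  }
}
```

An exhaustive switch (with no default fall-through) lets the compiler flag any `GoalMatchType` member that lacks an explicit branch, which is exactly the bug class flagged here.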
🧹 Nitpick comments (17)
backend/apps/cloud/src/organisation/organisation.controller.ts (3)

167-168: Inconsistent type annotations for `headers` and `reqIP` parameters.

Other methods in this controller (`create`, `delete`) use explicit types `Record<string, string>` and `string` for these parameters. For consistency and type safety, consider adding the same annotations here.

Suggested fix

```diff
 async inviteMember(
   @Param('orgId') orgId: string,
   @Body() inviteDTO: InviteMemberDTO,
   @CurrentUserId() userId: string,
-  @Headers() headers,
-  @Ip() reqIP,
+  @Headers() headers: Record<string, string>,
+  @Ip() reqIP: string,
 ): Promise<Organisation> {
```
394-397: Inconsistent error logging - uses `console.error` instead of `this.logger.error`.

The `inviteMember` method was updated to use structured logging with `this.logger.error`, but this error handler still uses `console.error`. For consistency and better observability, consider updating this to use the structured logger.

Suggested fix

```diff
 } catch (reason) {
-  console.error('[ERROR] Failed to delete organisation:', reason)
+  this.logger.error(
+    { orgId, userId, reason },
+    'Failed to delete organisation',
+  )
   throw new BadRequestException('Failed to delete organisation')
 }
```
430-433: Same inconsistency - uses `console.error` instead of structured logging.

Similar to the delete method, this should use `this.logger.error` for consistency with the improved logging pattern in `inviteMember`.

Suggested fix

```diff
 } catch (reason) {
-  console.error('[ERROR] Failed to update organisation:', reason)
+  this.logger.error(
+    { orgId, userId, reason },
+    'Failed to update organisation',
+  )
   throw new BadRequestException('Failed to update organisation')
 }
```

backend/apps/cloud/src/alert/alert.controller.ts (1)
136-136: Consider skipping analytics tracking when IP is unavailable.

The IP fallback chain defaults to an empty string when both `getIPFromHeaders` and `requestIp` return null/undefined. While `trackCustom` won't crash (it's wrapped in try-catch), tracking events with an empty IP may produce low-quality analytics data or be rejected by the analytics backend.

Suggested approach

```diff
 const ip = getIPFromHeaders(headers) || requestIp || ''

+if (ip) {
   await trackCustom(ip, headers['user-agent'], {
     ev: 'ALERT_CREATED',
     meta: { ... },
   })
+} else {
+  this.logger.warn('Skipping analytics tracking: IP unavailable')
+}
```

Apply similar logic to the delete path (lines 352, 368-370).

Also applies to: 239-247, 352-352, 368-370
backend/apps/cloud/src/goal/goal.controller.ts (6)
152-156: Add explanation for type error suppression.

The `@ts-expect-error` directive on line 152 lacks an explanation. Document why the type assertion is necessary (e.g., "Type mismatch after transforming results to omit project field").

🔎 Suggested improvement

```diff
-// @ts-expect-error
+// @ts-expect-error - Transforming results shape to replace nested project with pid
 result.results = _map(result.results, goal => ({
   ..._omit(goal, ['project']),
   pid: goal.project.id,
 }))
```
484-485: Consider handling zero sessions explicitly.

Defaulting `totalSessions` to 1 prevents division by zero but will produce a misleading conversion rate (e.g., 1/1 = 100%) when there are actually zero sessions. Consider returning a rate of 0 or null when `totalSessions` is 0.

🔎 Suggested improvement

```diff
 const totalSessions = totalData[0]?.totalSessions || 1
-const conversionRate = _round((uniqueSessions / totalSessions) * 100, 2)
+const conversionRate = totalSessions > 0
+  ? _round((uniqueSessions / totalSessions) * 100, 2)
+  : 0
```
509-517: Simplify trend calculation for clarity.

The nested ternary for trend calculation is complex. When `previousConversions` is 0 but `conversions` is greater than 0, returning 100% might be confusing to users. Consider documenting this behavior or simplifying the logic.
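One way to flatten the nested ternary is a named helper with an explicit no-baseline branch. A hedged sketch; `computeTrendPercent` is an illustrative name, not the controller's actual helper:

```typescript
// Hedged sketch of a flattened trend calculation. The no-baseline case
// (previousConversions === 0) is made explicit and documented in place.
function computeTrendPercent(
  conversions: number,
  previousConversions: number,
): number {
  if (previousConversions === 0) {
    // No baseline: report 100% growth when there are any conversions, else 0.
    return conversions > 0 ? 100 : 0
  }
  return ((conversions - previousConversions) / previousConversions) * 100
}
```

Naming the branch makes the "100% when the previous period had nothing" convention a documented decision rather than an accident of ternary nesting.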
528-561: Consider using a map for time bucket SQL fragments.

The if-else chain in `getGroupSubquery` could be replaced with a lookup map for better maintainability and clarity.

🔎 Suggested refactor

```ts
private getGroupSubquery(
  timeBucket: string,
): [selector: string, groupBy: string] {
  const bucketConfig: Record<string, [string, string]> = {
    minute: [
      'toYear(tz_created) as year, toMonth(tz_created) as month, toDayOfMonth(tz_created) as day, toHour(tz_created) as hour, toMinute(tz_created) as minute',
      'year, month, day, hour, minute',
    ],
    hour: [
      'toYear(tz_created) as year, toMonth(tz_created) as month, toDayOfMonth(tz_created) as day, toHour(tz_created) as hour',
      'year, month, day, hour',
    ],
    day: [
      'toYear(tz_created) as year, toMonth(tz_created) as month, toDayOfMonth(tz_created) as day',
      'year, month, day',
    ],
    month: [
      'toYear(tz_created) as year, toMonth(tz_created) as month',
      'year, month',
    ],
    year: ['toYear(tz_created) as year', 'year'],
  }

  return bucketConfig[timeBucket] || bucketConfig['year']
}
```
635-642: Avoid type assertions with `as any`.

Lines 635 and 642 use `as any` to bypass type checking. Consider defining a proper type for `resolvedTimeBucket` or adjusting the method signatures to accept string types for time buckets.
649-654: Simplify hasOwnProperty check.

The verbose `Object.prototype.hasOwnProperty.call()` can be simplified since `timeBucketConversion` is a known object.

🔎 Suggested improvement

```diff
-const timeBucketFunc = Object.prototype.hasOwnProperty.call(
-  timeBucketConversion,
-  resolvedTimeBucket,
-)
-  ? timeBucketConversion[resolvedTimeBucket]
-  : 'toStartOfDay'
+const timeBucketFunc =
+  timeBucketConversion[resolvedTimeBucket] || 'toStartOfDay'
```

backend/apps/cloud/src/auth/auth.service.ts (2)
304-311: LGTM! Consistent TTL-based token expiration.

The TTL checks are consistently implemented across all token types. Expired tokens are properly cleaned up by deletion.

The type cast `as unknown as Date` suggests a mismatch between TypeORM entity types and runtime values. This could be improved by ensuring the entity definition properly types the `created` field, but it's not critical for this change.

Also applies to: 350-357, 436-443
745-763: LGTM! Good security validation for SSO provider consistency.

The helper correctly validates that the provider parameter matches the session hash, preventing potential provider confusion attacks.

The method returns `SSOProviders` but callers don't use the return value (lines 653, 795). Consider making the return type `void` for clarity, unless you plan to use the return value in the future.

🔎 Proposed simplification

```diff
-private assertSSOProviderMatchesHash(
-  provider: SSOProviders,
-  ssoHash: string,
-): SSOProviders {
+private assertSSOProviderMatchesHash(
+  provider: SSOProviders,
+  ssoHash: string,
+): void {
   const derivedProvider = getSSOSessionProvider(ssoHash)

   if (
     derivedProvider !== SSOProviders.GOOGLE &&
     derivedProvider !== SSOProviders.GITHUB
   ) {
     throw new BadRequestException('Invalid SSO session identifier')
   }

   if (provider !== derivedProvider) {
     throw new BadRequestException('SSO provider does not match session hash')
   }
-
-  return derivedProvider
 }
```

backend/apps/cloud/src/experiment/experiment.controller.ts (2)
733-785: Consider wrapping feature flag and experiment updates in a transaction.

The pause operation updates the feature flag and experiment status separately without a transaction. If the experiment update fails after disabling the feature flag, the system will be in an inconsistent state (flag disabled but status unchanged).

🔎 Suggested transactional wrapper

```diff
+await this.dataSource.transaction(async transactionalEntityManager => {
   if (experiment.featureFlag) {
-    await this.featureFlagService.update(experiment.featureFlag.id, {
-      enabled: false,
-    })
+    await this.featureFlagService.update(
+      experiment.featureFlag.id,
+      { enabled: false },
+      transactionalEntityManager,
+    )
   }

-  await this.experimentService.update(id, {
-    status: ExperimentStatus.PAUSED,
-  })
+  await this.experimentService.update(
+    id,
+    { status: ExperimentStatus.PAUSED },
+    transactionalEntityManager,
+  )
+})
```
787-845: Consider wrapping feature flag and experiment updates in a transaction.

The complete operation updates the feature flag and experiment status separately without a transaction. If the experiment update fails after disabling the feature flag, the system will be in an inconsistent state (flag disabled but status unchanged).

🔎 Suggested transactional wrapper

```diff
+await this.dataSource.transaction(async transactionalEntityManager => {
   if (experiment.featureFlag) {
-    await this.featureFlagService.update(experiment.featureFlag.id, {
-      enabled: false,
-    })
+    await this.featureFlagService.update(
+      experiment.featureFlag.id,
+      { enabled: false },
+      transactionalEntityManager,
+    )
   }

-  await this.experimentService.update(id, {
-    status: ExperimentStatus.COMPLETED,
-    endedAt: new Date(),
-  })
+  await this.experimentService.update(
+    id,
+    {
+      status: ExperimentStatus.COMPLETED,
+      endedAt: new Date(),
+    },
+    transactionalEntityManager,
+  )
+})
```
backend/apps/cloud/src/feature-flag/feature-flag.controller.ts (3)

116-120: Address the type suppression and properly type the result transformation.

The `@ts-expect-error` directive suppresses TypeScript checking without explanation. This transformation should be properly typed to maintain type safety.

🔎 Suggested approach

Explicitly type the transformation or create a mapped type:

```diff
-// @ts-expect-error
-result.results = _map(result.results, flag => ({
+const transformedResults = _map(result.results, flag => ({
   ..._omit(flag, ['project']),
   pid: projectId,
-}))
+})) as Array<Omit<FeatureFlag, 'project'> & { pid: string }>

-return result
+return {
+  ...result,
+  results: transformedResults,
+}
```
337-340: Explicitly check for boolean `true` in the filter condition.

The filter `evaluatedFlags[f.key]` will be falsy when the flag evaluates to `false`, which might not be the intended behavior. If you want to include only flags that evaluated to `true` and are linked to experiments, this is correct. However, the condition should be explicit for clarity.

🔎 Proposed clarification

```diff
 const flagsWithExperiments = flags.filter(
-  f => f.experimentId && evaluatedFlags[f.key],
+  f => f.experimentId && evaluatedFlags[f.key] === true,
 )
```
694-694: Extract hardcoded pagination defaults to named constants.

The hardcoded values `15` and `50` should be extracted to module-level constants for maintainability, similar to `FEATURE_FLAGS_MAXIMUM` and `FEATURE_FLAGS_PAGINATION_MAX_TAKE`.

🔎 Proposed refactor

At the top of the file with other constants:

```diff
 const FEATURE_FLAGS_MAXIMUM = 50 // Maximum feature flags per project
 const FEATURE_FLAGS_PAGINATION_MAX_TAKE = 100
+const FEATURE_FLAG_PROFILES_DEFAULT_TAKE = 15
+const FEATURE_FLAG_PROFILES_MAX_TAKE = 50
```

Then use in the method:

```diff
-const safeTake = Math.min(take || 15, 50)
+const safeTake = Math.min(
+  take || FEATURE_FLAG_PROFILES_DEFAULT_TAKE,
+  FEATURE_FLAG_PROFILES_MAX_TAKE,
+)
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- `backend/package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (15)
- backend/apps/cloud/src/ai/ai.controller.ts
- backend/apps/cloud/src/alert/alert.controller.ts
- backend/apps/cloud/src/auth/auth.controller.ts
- backend/apps/cloud/src/auth/auth.service.ts
- backend/apps/cloud/src/common/analytics.ts
- backend/apps/cloud/src/experiment/experiment.controller.ts
- backend/apps/cloud/src/feature-flag/feature-flag.controller.ts
- backend/apps/cloud/src/goal/goal.controller.ts
- backend/apps/cloud/src/organisation/organisation.controller.ts
- backend/apps/cloud/src/project/gsc.controller.ts
- backend/apps/cloud/src/project/project.controller.ts
- backend/apps/cloud/src/revenue/revenue.controller.ts
- backend/apps/cloud/src/user/user.controller.ts
- backend/apps/cloud/src/user/user.service.ts
- backend/package.json
🚧 Files skipped from review as they are similar to previous changes (4)
- backend/apps/cloud/src/common/analytics.ts
- backend/apps/cloud/src/ai/ai.controller.ts
- backend/apps/cloud/src/revenue/revenue.controller.ts
- backend/apps/cloud/src/project/gsc.controller.ts
🧰 Additional context used
🧬 Code graph analysis (8)
backend/apps/cloud/src/auth/auth.controller.ts (1)
- backend/apps/cloud/src/common/analytics.ts (1)
  - `trackCustom` (10-20)

backend/apps/cloud/src/auth/auth.service.ts (1)
- backend/apps/cloud/src/common/analytics.ts (1)
  - `trackCustom` (10-20)
backend/apps/cloud/src/goal/goal.controller.ts (4)
- web/app/api/index.ts (1)
  - `Goal` (1177-1187)
- backend/apps/cloud/src/goal/dto/goal.dto.ts (3)
  - `GoalDto` (122-149), `CreateGoalDto` (34-75), `UpdateGoalDto` (77-120)
- backend/apps/cloud/src/common/utils.ts (1)
  - `getIPFromHeaders` (316-359)
- backend/apps/cloud/src/analytics/analytics.service.ts (1)
  - `getLowestPossibleTimeBucket` (205-246)
backend/apps/cloud/src/alert/alert.controller.ts (3)
- backend/apps/cloud/src/common/utils.ts (1)
  - `getIPFromHeaders` (316-359)
- backend/apps/cloud/src/user/entities/user.entity.ts (1)
  - `ACCOUNT_PLANS` (49-155)
- backend/apps/cloud/src/common/analytics.ts (1)
  - `trackCustom` (10-20)
backend/apps/cloud/src/project/project.controller.ts (3)
- backend/apps/cloud/src/auth/decorators/auth.decorator.ts (1)
  - `Auth` (10-17)
- backend/apps/cloud/src/common/utils.ts (2)
  - `getIPFromHeaders` (316-359), `checkRateLimit` (146-161)
- backend/apps/cloud/src/common/analytics.ts (1)
  - `trackCustom` (10-20)
backend/apps/cloud/src/experiment/experiment.controller.ts (5)
- backend/apps/cloud/src/experiment/dto/experiment.dto.ts (4)
  - `ExperimentDto` (223-277), `CreateExperimentDto` (63-144), `UpdateExperimentDto` (146-221), `VariantResultDto` (279-303)
- backend/apps/cloud/src/common/utils.ts (1)
  - `getIPFromHeaders` (316-359)
- web/app/api/index.ts (6)
  - `ExposureTrigger` (1458-1458), `FeatureFlagMode` (1462-1462), `Experiment` (1473-1494), `ExperimentStatus` (1456-1456), `MultipleVariantHandling` (1460-1460), `ExperimentVariant` (1464-1471)
- backend/apps/cloud/src/analytics/analytics.service.ts (1)
  - `getLowestPossibleTimeBucket` (205-246)
- backend/apps/cloud/src/experiment/bayesian.ts (1)
  - `calculateBayesianProbabilities` (118-182)
backend/apps/cloud/src/feature-flag/feature-flag.controller.ts (3)
- backend/apps/cloud/src/common/utils.ts (1)
  - `getIPFromHeaders` (316-359)
- backend/apps/cloud/src/feature-flag/evaluation.ts (1)
  - `getExperimentVariant` (170-197)
- backend/apps/cloud/src/analytics/analytics.service.ts (1)
  - `getLowestPossibleTimeBucket` (205-246)
backend/apps/cloud/src/organisation/organisation.controller.ts (3)
- backend/apps/cloud/src/common/utils.ts (1)
  - `getIPFromHeaders` (316-359)
- backend/apps/cloud/src/common/analytics.ts (1)
  - `trackCustom` (10-20)
- backend/apps/cloud/src/common/constants.ts (2)
  - `isDevelopment` (190-190), `PRODUCTION_ORIGIN` (194-194)
🔇 Additional comments (56)
backend/apps/cloud/src/organisation/organisation.controller.ts (6)
128-156: LGTM - Analytics tracking on organisation creation. The IP extraction pattern and `trackCustom` usage are consistent with other controllers in the codebase. The `trackCustom` function has internal error handling, so failures won't break the creation flow.

193-196: Good security addition preventing owner role assignment via invites. This ensures the owner role can only be established through proper ownership mechanisms, not through the invitation flow.

235-239: Correct fix - action token now associated with the invitee. The action token should be tied to the user who will use it to accept the invitation, not the user who sent the invite. This ensures the invitation acceptance flow works correctly.

241-245: LGTM - Safe origin handling with dev/prod separation. The implementation correctly restricts client-provided origins to development mode only, falling back to `PRODUCTION_ORIGIN` in production. The type check adds robustness.

263-269: Good improvement - structured error logging with relevant context. The switch to `this.logger.error` with structured context (`orgId`, `inviteeId`) improves observability while the generic client-facing error message prevents information leakage.

425-429: Good improvement - returning the updated organisation. Returning the updated entity after a PATCH operation follows REST best practices and reduces the need for clients to make an additional GET request.
backend/apps/cloud/src/alert/alert.controller.ts (5)
63-63: LGTM: UUID validation enhances input security. Adding `ParseUUIDPipe({ version: '4' })` to alert ID parameters prevents malformed identifiers from reaching business logic and improves early error detection.

Also applies to: 268-268, 345-345

106-110: LGTM: Pagination safeguards prevent abuse. Clamping `take` to [1, 100] and `skip` to non-negative values protects against performance degradation from excessive page sizes or invalid offsets.
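Such clamping reduces to a small pure helper. The sketch below is illustrative only (the helper name and the `take` fallback of 1 are assumptions, not the controller's actual code):

```typescript
// Hypothetical helper mirroring the clamping described above:
// `take` is forced into [1, 100] and `skip` into [0, ∞).
// Non-numeric or missing inputs fall back to the lower bounds.
function clampPagination(
  take?: number,
  skip?: number,
): { take: number; skip: number } {
  const safeTake = Math.min(Math.max(Number(take) || 1, 1), 100)
  const safeSkip = Math.max(Number(skip) || 0, 0)
  return { take: safeTake, skip: safeSkip }
}
```

Centralising the bounds this way keeps every paginated endpoint consistent and makes the limits easy to tune.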
170-173: LGTM: Alert count query correctly enforces per-account limits. The query `{ project: { id: In(pids) } }` properly counts alerts across all user projects using TypeORM's nested relation filtering.

197-235: LGTM: Alert payload construction and validation logic are correct. The explicit `Partial<Alert>` initialization improves clarity. The validation correctly enforces that `queryCustomEvent` must be specified when `queryMetric` is `CUSTOM_EVENTS` and `alertOnEveryCustomEvent` is false, while allowing it to be omitted when triggering on all custom events.

253-260: LGTM: Error handling balances security and observability. Logging the detailed error server-side while returning a generic message to the client prevents information leakage while maintaining debuggability.
backend/apps/cloud/src/goal/goal.controller.ts (7)
1-77: LGTM: Imports and utility functions are well-structured. The imports, constants, and pagination clamping utility are appropriate and follow best practices.

90-117: LGTM: Goal retrieval endpoint is correctly implemented. Proper authorization, error handling, and response formatting.

161-218: LGTM: Goal creation logic with proper validation. The endpoint correctly validates user activation status, project existence, permissions, subscription status, and billing status before creating a goal.

251-329: LGTM: Update and delete endpoints are well-implemented. Both endpoints correctly validate permissions, handle errors, and include appropriate tracking for analytics.

366-392: LGTM: Metadata condition builder is secure and correct. The method properly uses parameterized queries to prevent SQL injection and correctly handles empty filter cases.
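The parameterized-query pattern can be illustrated with a minimal, framework-free sketch. Everything here is an assumption for illustration (builder name, ClickHouse-style `{name:Type}` placeholders, column names) — the real method may differ, but the key property is the same: filter values become bound parameters, never string-interpolated SQL:

```typescript
interface MetaFilter {
  key: string
  value: string
}

// Hypothetical condition builder: each filter contributes a pair of named
// placeholders; the caller passes `params` alongside the query so the driver
// binds values safely. An empty filter list yields an empty clause.
function buildMetadataCondition(filters: MetaFilter[]): {
  clause: string
  params: Record<string, string>
} {
  if (!filters.length) return { clause: '', params: {} }

  const params: Record<string, string> = {}
  const parts = filters.map((f, i) => {
    params[`metaKey${i}`] = f.key
    params[`metaValue${i}`] = f.value
    return `(meta.key = {metaKey${i}:String} AND meta.value = {metaValue${i}:String})`
  })

  return { clause: `AND (${parts.join(' OR ')})`, params }
}
```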
563-600: LGTM: Date string generation is correct. The `generateDateString` method correctly formats dates with proper zero-padding for each time component.
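Zero-padded date formatting of this kind reduces to a small pure function. This is a sketch of the general technique, not the actual method body (which may use local rather than UTC accessors):

```typescript
// Pad a component to two digits: 3 -> "03", 12 -> "12".
const pad = (n: number): string => String(n).padStart(2, '0')

// Illustrative zero-padded `YYYY-MM-DD HH:MM:SS` formatter in UTC.
function generateDateString(dt: Date): string {
  return (
    `${dt.getUTCFullYear()}-${pad(dt.getUTCMonth() + 1)}-${pad(dt.getUTCDate())} ` +
    `${pad(dt.getUTCHours())}:${pad(dt.getUTCMinutes())}:${pad(dt.getUTCSeconds())}`
  )
}
```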
662-726: LGTM: Chart data generation is well-implemented. The endpoint correctly generates time-series chart data with proper bucketing, handles timezone conversions, and fills missing data points with zeros.
backend/apps/cloud/src/auth/auth.controller.ts (3)
59-59: LGTM! Analytics tracking integrated correctly. The signup analytics tracking is properly integrated and matches the pattern used for SSO signups elsewhere in the codebase.

Also applies to: 124-129

235-238: LGTM! Security improvement to prevent account enumeration. The conditional email sending is a best practice that prevents attackers from discovering which email addresses are registered.

490-492: LGTM! Simplified SSO authentication signature. The removal of referrer code logic simplifies the SSO flow. The method signature is clean and consistent.
backend/apps/cloud/src/auth/auth.service.ts (6)
63-66: LGTM! Well-defined TTL constants. The TTL values are reasonable and follow security best practices, especially the 1-hour expiration for password reset tokens.

109-140: LGTM! Robust password leak checking with proper error handling. The enhancements follow HIBP best practices (Add-Padding header) and implement appropriate fail-safe behavior by not blocking signups on API failures.
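For context, the HIBP Pwned Passwords range API relies on a k-anonymity split of the password's SHA-1 digest: only the first five hex characters are sent to the server, and the remaining 35 are matched locally against the returned range. A minimal sketch of that split (the function name is an assumption; the service's actual implementation will differ):

```typescript
import { createHash } from 'node:crypto'

// k-anonymity split for the HIBP range API: only `prefix` ever leaves the
// server; `suffix` is compared against the hashes in the response body.
function splitSha1ForRangeQuery(password: string): {
  prefix: string
  suffix: string
} {
  const digest = createHash('sha1').update(password).digest('hex').toUpperCase()
  return { prefix: digest.slice(0, 5), suffix: digest.slice(5) }
}

// A real request would then query the range endpoint with the padding header
// the review mentions, e.g.:
// fetch(`https://api.pwnedpasswords.com/range/${prefix}`, {
//   headers: { 'Add-Padding': 'true' },
// })
```

The `Add-Padding` header makes HIBP pad every response to a similar size, so response length cannot leak which prefix was queried.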
673-678: LGTM! Proper validation of Google session data. The validation ensures session data integrity before proceeding with authentication, which is a good security practice.

686-695: LGTM! Signup analytics properly integrated for Google SSO. The analytics tracking is consistent with the email signup flow and correctly captures the signup method.

1103-1108: LGTM! Thorough validation of Github session data. The validation properly checks that the Github ID is a valid finite number, which is more robust than just checking the type.

1116-1125: LGTM! Signup analytics properly integrated for Github SSO. The analytics tracking is consistent with the Google and email signup flows, completing the instrumentation for all signup methods.
backend/apps/cloud/src/experiment/experiment.controller.ts (8)
1-88: LGTM! Clean dependency injection and well-defined constants. The controller properly injects all necessary services and defines clear constants for business rules (experiment limits, feature flag key validation).

90-171: LGTM! Proper pagination and access control. Both endpoints correctly validate project access and transform entities to DTOs. The pagination bounds (1-200 for take) are reasonable.

173-372: LGTM! Comprehensive validation and transactional safety. The endpoint properly validates all business rules (variant count, control presence, rollout percentages, subscription limits) and wraps the experiment creation and feature flag update in a transaction to ensure atomicity.

374-561: LGTM! State validation and transactional updates. The endpoint correctly prevents updates to running or completed experiments and wraps all database modifications (variant recreation, feature flag mode changes) in a transaction to maintain consistency.

563-608: LGTM! Proper cleanup logic for feature flag modes. The endpoint correctly handles feature flag cleanup based on whether the flag was created for the experiment or linked from an existing one.

610-731: LGTM! Proper transactional safety for experiment start. The endpoint uses a transaction to atomically create/update the feature flag and start the experiment, ensuring consistent state even if operations fail midway.

847-1105: LGTM! Robust results calculation with proper error handling. The endpoint correctly calculates experiment metrics using Bayesian statistics. The guard at line 1067 prevents the reduce error on empty arrays (addressing previous review feedback), and failed ClickHouse queries are logged as warnings without failing the request.

1110-1403: LGTM! Sophisticated chart generation with cumulative counting. The chart generation correctly uses first-exposure and first-conversion bucketing to avoid double-counting profiles across time periods, ensuring the final chart point aligns with the summary table results. The helper methods properly format SQL date expressions and row data for different time buckets.
backend/apps/cloud/src/feature-flag/feature-flag.controller.ts (5)
351-370: Excellent: Stable variant ordering for deterministic selection. The explicit sorting of variants by key (lines 354-356) ensures deterministic variant assignment across requests for the same profileId, regardless of database query order. This is critical for consistent experiment behavior and prevents users from being reassigned different variants.
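Why the sort matters can be shown with a toy bucketing scheme. Everything below is illustrative (the FNV-1a hash, percentage bucketing, and function names are assumptions, not the controller's actual algorithm): without the sort, cumulative rollout ranges would shift with DB row order and the same profile could land in different variants:

```typescript
interface Variant {
  key: string
  rolloutPercentage: number
}

// FNV-1a 32-bit hash mapping a profileId to a stable bucket in [0, 100).
function bucketFor(profileId: string): number {
  let h = 0x811c9dc5
  for (let i = 0; i < profileId.length; i++) {
    h ^= profileId.charCodeAt(i)
    h = Math.imul(h, 0x01000193) >>> 0
  }
  return h % 100
}

// Sort by key first, then walk cumulative rollout ranges: the assignment now
// depends only on (profileId, variant set), never on DB query order.
function pickVariant(profileId: string, variants: Variant[]): string | null {
  const sorted = [...variants].sort((a, b) => a.key.localeCompare(b.key))
  const bucket = bucketFor(profileId)
  let cumulative = 0
  for (const v of sorted) {
    cumulative += v.rolloutPercentage
    if (bucket < cumulative) return v.key
  }
  return null
}
```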
281-285: Good security practice: Prevents project enumeration. Returning empty flags instead of an error for non-existent or inactive projects prevents attackers from enumerating valid project IDs. Combined with rate limiting (line 275), this provides solid protection for the public endpoint.

1-784: Overall assessment: Comprehensive feature flag implementation with minor improvements needed. This controller provides a well-structured feature flag system with:
- ✅ Proper authorization and validation flows
- ✅ Good security practices (rate limiting, enumeration prevention)
- ✅ Comprehensive error handling (with noted exceptions)
- ✅ Analytics and experiment integration
- ✅ Deterministic variant selection
Main areas for improvement:
- Inconsistent error handling for async analytics calls
- Type suppression that should be properly resolved
- Some hardcoded values that should be constants
- Verification needed for datetime format and SQL parameter validation
698-718: Verify DTO validation for the `result` parameter to prevent SQL injection. The `resultHavingClause` is constructed from the `result` query parameter and directly interpolated into SQL queries. While TypeScript constrains it to `'true' | 'false'`, ensure the DTO includes runtime validation (e.g., `@IsEnum` or `@IsIn(['true', 'false'])`) to prevent SQL injection if the type constraint is bypassed. Check the DTO definition for the query parameter to confirm runtime validation is in place.
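Even without class-validator, the guarantee being asked for can be expressed as a plain runtime guard — reject anything outside the two allowed literals before the value reaches SQL construction. This is an illustrative sketch, not the project's DTO:

```typescript
// Hypothetical guard: TypeScript's `'true' | 'false'` type exists only at
// compile time, so the check must be repeated at runtime on untrusted input.
function parseResultParam(raw: unknown): 'true' | 'false' {
  if (raw !== 'true' && raw !== 'false') {
    throw new Error(`Invalid "result" value: ${String(raw)}`)
  }
  return raw
}
```

An allow-list check like this (or `@IsIn` on the DTO) means the later string interpolation can only ever see one of two known-safe values.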
407-407: Verify datetime format compatibility with ClickHouse schema. The manual datetime formatting (`toISOString().slice(0, 19).replace('T', ' ')`) creates a MySQL-style datetime string (`YYYY-MM-DD HH:MM:SS`). Confirm this format aligns with the datetime column types in the `feature_flag_evaluations` and `experiment_exposures` ClickHouse tables, or use ClickHouse's native timestamp functions if available.

backend/apps/cloud/src/project/project.controller.ts (16)
117-117: LGTM: Analytics import added. The `trackCustom` import is correctly added to support the new analytics tracking throughout the controller.

188-198: LGTM: Listing endpoint simplified. The endpoint flow is cleaner after removing mode-based routing logic. The simplified calls to `paginate` and `processDefaultResults` improve maintainability.
228-260: LGTM: Pin endpoint implementation is solid. The pin endpoint correctly validates the project ID, checks view permissions, and tracks analytics. The use of `allowedToView` (rather than `allowedToManage`) is appropriate since pinning is a user-specific preference for projects they can access.

287-396: LGTM: Project creation analytics integrated correctly. The endpoint now properly extracts IP from headers and tracks the `PROJECT_CREATED` event. The implementation follows the established pattern used throughout the controller.
399-457: LGTM: Funnel creation tracking includes useful metadata. The analytics tracking correctly captures the number of steps in the funnel, which provides valuable telemetry for understanding feature usage.

530-570: LGTM: Funnel deletion analytics symmetrically tracks metadata. Good practice to capture `steps_quantity` from the funnel before deletion, maintaining consistency with the creation event tracking.
597-642: LGTM: Annotation creation analytics captures text length. The metadata tracking of annotation text length provides useful insights into how users interact with the annotation feature.

981-986: LGTM: JSON parsing now has proper error handling. Wrapping `JSON.parse` in a try-catch prevents unhandled exceptions and provides users with a clear error message when invalid filter data is provided.
1077-1081: LGTM: Action token correctly associated with invitee. Changing from `user` to `invitee` ensures the action token is properly associated with the user being invited, which is semantically correct for share invitations.

1170-1200: LGTM: Critical security improvement for share acceptance. The addition of the authorization check (lines 1192-1194) ensures that only the intended invitee can accept a share invitation. This prevents potential misuse where someone with a valid token ID could accept shares on behalf of others.

1540-1610: LGTM: Project deletion properly tracks analytics and cleans up shares. The deletion flow correctly cleans up ClickHouse data and shares before deleting the project entity, then tracks the deletion event. The explicit share cleanup (line 1598) improves code clarity.
1815-1817: LGTM: captchaDifficulty handling is correct. The use of `!== undefined` allows the value to be explicitly set to any valid number, including 0. The past review comment about adding `@Min()` validation applies to the DTO file, not this controller logic.

1835-1839: LGTM: websiteUrl handling supports both setting and clearing. The logic correctly allows users to set a URL (with trimming) or explicitly clear it by passing a falsy value, which is then normalized to `null`.
1925-1934: LGTM: Data existence checks refactored efficiently. The refactoring improves code quality through:

- Cleaner destructuring assignment
- Parallel execution via `Promise.all` for better performance
- Addition of `isCaptchaDataExists` to provide complete data status
2002-2052: LGTM: Project view creation tracks comprehensive metadata. The analytics event captures the view type and boolean indicators for filters and custom events, providing valuable telemetry while avoiding exposure of potentially sensitive filter details.

2121-2163: LGTM: Project view deletion tracking is straightforward. The deletion event tracking follows the established pattern. The absence of detailed metadata is appropriate for deletion events.
Related PRs:
Summary by CodeRabbit
New Features
Documentation
Improvements
Chores