2 changes: 1 addition & 1 deletion .tool-versions
@@ -1,4 +1,4 @@
python 3.13.1
poetry 1.8.3
ruby 3.1.1
nodejs 20.9.0
nodejs 22.12.0
307 changes: 306 additions & 1 deletion CLAUDE.md
@@ -1,12 +1,24 @@
# Claude Development Guidelines

## Core Philosophy

TEST-DRIVEN DEVELOPMENT IS NON-NEGOTIABLE. Every single line of production code
must be written in response to a failing test. No exceptions. This is not a
suggestion or a preference - it is the fundamental practice that enables all
other principles in this document.

I follow Test-Driven Development (TDD) with a strong emphasis on
behaviour-driven testing and object-oriented principles. All work should be
done in small, incremental changes that maintain a working state throughout
development.

## Development Approach

### General Principles

- Follow Test-Driven Development (TDD) strictly, using the red-green-refactor
cycle, i.e.:
- Write a failing test first
- Write a failing test first, testing behaviour, not implementation
- Implement the minimum code to make the test pass
- Refactor the code to improve quality while ensuring all tests still pass
- Follow SOLID principles:
@@ -36,6 +48,108 @@
7. Law of Demeter: A unit should only talk to its immediate friends; don't
talk to strangers.
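
A minimal sketch of the Law of Demeter in Python (the `Wallet` and `Customer`
classes are hypothetical, purely for illustration):

```python
class Wallet:
    def __init__(self, balance: int) -> None:
        self.balance = balance


class Customer:
    def __init__(self, wallet: Wallet) -> None:
        self._wallet = wallet

    def can_afford(self, amount: int) -> bool:
        # The customer exposes the behaviour callers need, so they never
        # have to reach through it into the wallet.
        return self._wallet.balance >= amount


def charge(customer: Customer, amount: int) -> None:
    # Talks only to its immediate collaborator; reaching through it with
    # something like `customer._wallet.balance >= amount` would be talking
    # to a stranger.
    if customer.can_afford(amount):
        ...
```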

### Development Workflow

#### TDD Process - THE FUNDAMENTAL PRACTICE

CRITICAL: TDD is not optional. Every feature, every bug fix, every change MUST
follow this process:

Follow Red-Green-Refactor strictly:

1. Red: Write a failing test for the desired behaviour (see the sketch after
this list). NO PRODUCTION CODE until you have a failing test.
2. Green: Write the MINIMUM code to make the test pass. Resist the urge to write
more than needed.
3. Refactor: Assess the code for improvement opportunities. If refactoring would
add value, clean up the code while keeping tests green. If the code is
already clean and expressive, move on.
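
A minimal sketch of one cycle in Python, assuming pytest and a hypothetical
`pricing` module:

```python
# test_pricing.py -- Red: this test is written before any production code
# exists and fails because pricing.calculate_total is not defined yet.
from pricing import calculate_total


def test_totals_the_price_of_all_line_items():
    line_items = [
        {"price": 10, "quantity": 2},
        {"price": 5, "quantity": 1},
    ]

    assert calculate_total(line_items) == 25
```

```python
# pricing.py -- Green: the minimum implementation that makes the test pass.
# Refactor afterwards only if it would genuinely improve the code.
def calculate_total(line_items: list[dict[str, int]]) -> int:
    return sum(item["price"] * item["quantity"] for item in line_items)
```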

Common TDD Violations to Avoid:

- Writing production code without a failing test first
- Writing multiple tests before making the first one pass
- Writing more production code than needed to pass the current test
- Skipping the refactor assessment step when code could be improved
- Adding functionality "while you're there" without a test driving it

Remember: If you're typing production code and there isn't a failing test
demanding that code, you're not doing TDD.

#### Refactoring - The Critical Third Step

Evaluating refactoring opportunities is not optional - it's the third step in
the TDD cycle. After achieving a green state and committing your work, you MUST
assess whether the code can be improved. However, only refactor if there's
clear value - if the code is already clean and expresses intent well, move on
to the next test.

##### What is Refactoring?

Refactoring means changing the internal structure of code without changing its
external behaviour. The public API remains unchanged, all tests continue to pass,
but the code becomes cleaner, more maintainable, or more efficient. Remember:
only refactor when it genuinely improves the code - not all code needs
refactoring.

##### When to Refactor

- Always assess after green: Once tests pass, before moving to the next test,
evaluate if refactoring would add value
- When you see duplication: But understand what duplication really means
(see DRY below)
- When names could be clearer: Variable names, function names, or type names
that don't clearly express intent
- When structure could be simpler: Complex conditional logic, deeply nested
code, or long functions
- When patterns emerge: After implementing several similar features, useful
abstractions may become apparent

Remember: Not all code needs refactoring. If the code is already clean,
expressive, and well-structured, commit and move on. Refactoring should improve
the code - don't change things just for the sake of change.

##### Refactoring Guidelines

1. Commit Before Refactoring

Always commit your working code before starting any refactoring. This gives you
a safe point to return to.

2. Look for Useful Abstractions Based on Semantic Meaning

Create abstractions only when code shares the same semantic meaning and purpose.
Don't abstract based on structural similarity alone - duplicate code is far
cheaper than the wrong abstraction.

Questions to ask before abstracting:

- Do these code blocks represent the same concept or different concepts that
happen to look similar?
- If the business rules for one change, should the others change too?
- Would a developer reading this abstraction understand why these things are
grouped together?
- Am I abstracting based on what the code IS (structure) or what it MEANS
(semantics)?

Remember: It's much easier to create an abstraction later when the semantic
relationship becomes clear than to undo a bad abstraction that couples
unrelated concepts.
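
A sketch of the distinction, using hypothetical age-check rules:

```python
# These functions look identical today, but they encode different business
# rules that can change independently. Merging them into a shared
# "validate_minimum_age" abstraction would couple unrelated concepts.
def is_eligible_to_vote(age: int) -> bool:
    return age >= 18


def is_eligible_to_open_account(age: int) -> bool:
    return age >= 18
```

If the voting age changed, the account-opening age should not change with it;
the similarity is structural, not semantic, so the duplication stays.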

3. Understanding DRY - It's About Knowledge, Not Code

DRY (Don't Repeat Yourself) is about not duplicating knowledge in the system,
not about eliminating all code that looks similar.

4. Maintain External APIs During Refactoring

Refactoring must never break existing consumers of your code.
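
For example, a refactoring that reorganises internals behind an unchanged
public function (the `format_receipt` code is hypothetical):

```python
# Before: one long public function.
def format_receipt(order: dict) -> str:
    lines = [f"{item['name']}: {item['price']}" for item in order["items"]]
    total = sum(item["price"] for item in order["items"])
    return "\n".join(lines) + f"\nTotal: {total}"


# After: internals extracted into private helpers, but the public name,
# signature, and behaviour are unchanged, so existing consumers and their
# tests are unaffected.
def format_receipt(order: dict) -> str:
    return "\n".join([*_item_lines(order), _total_line(order)])


def _item_lines(order: dict) -> list[str]:
    return [f"{item['name']}: {item['price']}" for item in order["items"]]


def _total_line(order: dict) -> str:
    total = sum(item["price"] for item in order["items"])
    return f"Total: {total}"
```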

5. Verify and Commit After Refactoring

CRITICAL: After every refactoring:
- Run all tests - they must pass without modification
- Run static analysis (linting, type checking) - must pass
- Commit the refactoring separately from feature changes

## Clean Code Principles

### Comments and Documentation
@@ -46,6 +160,23 @@
comments
- Follow a Clean Code approach and let the code speak for itself

### Code Structure

- No nested if/else statements - use early returns, guard clauses, or
composition
- Avoid deep nesting in general (max 2 levels)
- Keep functions and methods small and focused on a single responsibility
- Prefer flat, readable code over clever abstractions
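
A small sketch of guard clauses replacing nested branches (the function and
fields are illustrative):

```python
def discounted_total(order: dict, discount: float) -> float:
    # Guard clauses deal with the exceptional cases first and return early,
    # keeping the main logic flat instead of buried in nested if/else.
    if not order["items"]:
        return 0.0
    if discount <= 0:
        return order["total"]

    return order["total"] * (1 - discount)
```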

### Naming Conventions

- Functions: snake_case, verb-based (e.g., calculate_total, validate_payment)
- Types: PascalCase (e.g., PaymentRequest, UserProfile)
- Constants: UPPER_SNAKE_CASE for true constants, camelCase for configuration
- Files: snake_case.py for all Python files, but prefer single word module and
package names
- Test files: test_*.py
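
For example, in a hypothetical `payments.py` module (tested by
`test_payments.py`):

```python
MAX_RETRIES = 3  # true constant: UPPER_SNAKE_CASE


class PaymentRequest:  # class name: PascalCase
    def __init__(self, amount: int) -> None:
        self.amount = amount


def validate_payment(request: PaymentRequest) -> bool:  # snake_case, verb-based
    return request.amount > 0
```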

### Python Version and Modern Features

- Target Python 3.13 or greater
@@ -67,6 +198,19 @@

## Testing

### Testing Principles

#### Behaviour-Driven Testing

- Tests should verify expected behaviour, treating implementation as a black box
- Test through the public API exclusively - internals should be invisible to
tests
- Tests that examine internal implementation details are wasteful and should be
avoided
- Coverage targets: 100% coverage should be expected at all times, but these
tests must ALWAYS be based on business behaviour, not implementation details
- Tests must document expected business behaviour
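
A sketch of the idea, using a hypothetical `ShoppingBasket`:

```python
class ShoppingBasket:
    FREE_SHIPPING_THRESHOLD = 50

    def __init__(self) -> None:
        self._total = 0

    def add_item(self, price: int) -> None:
        self._total += price

    def shipping_cost(self) -> int:
        return 0 if self._total >= self.FREE_SHIPPING_THRESHOLD else 5


def test_orders_over_the_threshold_ship_for_free():
    # Drives the object through its public API and asserts on observable
    # behaviour, so it survives any rewrite of the internals.
    basket = ShoppingBasket()
    basket.add_item(price=60)

    assert basket.shipping_cost() == 0
```

A test such as `assert basket._total == 60` would examine implementation
details instead of behaviour and should be avoided.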

### Testing Strategy

- Follow Test-Driven Development (TDD) strictly
@@ -105,3 +249,164 @@
- Build complete expected results and assert equality rather than multiple
smaller assertions
- Use clear, descriptive test names that indicate the operation type
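
For instance, asserting against a complete expected result in one step (the
`summarise_order` function is hypothetical):

```python
def summarise_order(items: list[dict[str, int]]) -> dict[str, int]:
    return {
        "item_count": len(items),
        "total": sum(item["price"] for item in items),
    }


def test_summarises_item_count_and_total():
    result = summarise_order([{"price": 10}, {"price": 15}])

    # A single equality check against the full expected result documents the
    # behaviour and fails with a complete diff, unlike several smaller
    # assertions on individual fields.
    assert result == {"item_count": 2, "total": 25}
```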

## Working with git

- Keep commit messages to 8 lines or fewer
- Don't include any references to Claude
- Don't include Claude as a co-author

## Working with Claude

### Expectations

When working with my code:

1. ALWAYS FOLLOW TDD - No production code without a failing test. This is not
negotiable.
2. Think deeply before making any edits
3. Understand the full context of the code and requirements
4. Ask clarifying questions when requirements are ambiguous
5. Think from first principles - don't make assumptions
6. Assess refactoring after every green - Look for opportunities to improve
code structure, but only refactor if it adds value
7. Keep project docs current - update them whenever you introduce meaningful
changes

### Code Changes

When suggesting or making changes:

- Start with a failing test - always. No exceptions.
- After making tests pass, always assess refactoring opportunities (but only
refactor if it adds value)
- After refactoring, verify all tests and static analysis pass, then commit
- Respect the existing patterns and conventions
- Maintain test coverage for all behaviour changes
- Keep changes small and incremental
- Provide rationale for significant design decisions

If you find yourself writing production code without a failing test, STOP
immediately and write the test first.

### Communication

- Be explicit about trade-offs in different approaches
- Explain the reasoning behind significant design decisions
- Flag any deviations from these guidelines with justification
- Suggest improvements that align with these principles
- When unsure, ask for clarification rather than assuming

## Using Gemini CLI for Large Codebase Analysis

When analysing large codebases or multiple files that might exceed context
limits, use the Gemini CLI and its much larger context window: invoke it with
`gemini -p` to leverage Google Gemini's context capacity.

### File and Directory Inclusion Syntax

Use the `@` syntax to include files and directories in your Gemini prompts. The
paths should be relative to WHERE you run the gemini command.

#### Research Examples

Single file analysis:
```shell
gemini -p "@src/main.py Explain this file's purpose and structure"
```

Multiple files:
```shell
gemini -p "@package.json @src/index.js Analyse the dependencies used in the code"
```

Entire directory:
```shell
gemini -p "@src/ Summarise the architecture of this codebase"
```

Multiple directories:
```shell
gemini -p "@src/ @tests/ Analyse test coverage for the source code"
```

Current directory and subdirectories:
```shell
gemini -p "@./ Give me an overview of this entire project"

# Or use --all_files flag:
gemini --all_files -p "Analyse the project structure and dependencies"
```

#### Implementation Verification Examples

Check if a feature is implemented:
```shell
gemini -p "@src/ @lib/ Has dark mode been implemented in this codebase? Show me the relevant files and functions"
```

Verify authentication implementation:
```shell
gemini -p "@src/ @middleware/ Is JWT authentication implemented? List all auth-related endpoints and middleware"
```

Check for specific patterns:
```shell
gemini -p "@src/ Are there any React hooks that handle WebSocket connections? List them with file paths"
```

Verify error handling:
```shell
gemini -p "@src/ @api/ Is proper error handling implemented for all API endpoints? Show examples of try-catch blocks"
```

Check for rate limiting:
```shell
gemini -p "@backend/ @middleware/ Is rate limiting implemented for the API? Show the implementation details"
```

Verify caching strategy:
```shell
gemini -p "@src/ @lib/ @services/ Is Redis caching implemented? List all cache-related functions and their usage"
```

Check for specific security measures:
```shell
gemini -p "@src/ @api/ Are SQL injection protections implemented? Show how user inputs are sanitised"
```

Verify test coverage for features:
```shell
gemini -p "@src/payment/ @tests/ Is the payment processing module fully tested? List all test cases"
```

### When to Use Gemini CLI

Use `gemini -p` when:
- Analysing entire codebases or large directories
- Comparing multiple large files
- You need to understand project-wide patterns or architecture
- The current context window is insufficient for the task
- You are working with files totalling more than 100KB
- You are verifying if specific features, patterns, or security measures are
implemented
- Checking for the presence of certain coding patterns across the entire
codebase

### Important Notes

- Paths in @ syntax are relative to your current working directory when invoking
gemini
- The CLI will include file contents directly in the context
- No need for --yolo flag for read-only analysis
- Gemini's context window can handle entire codebases that would overflow
Claude's context
- When checking implementations, be specific about what you're looking for to
get accurate results

## Summary

The key is to write clean, testable, functional code that evolves through
small, safe increments. Every change should be driven by a test that describes
the desired behaviour, and the implementation should be the simplest thing that
makes that test pass. When in doubt, favour simplicity and readability over
cleverness.
3 changes: 2 additions & 1 deletion src/logicblocks/event/persistence/postgres/query.py
@@ -181,10 +181,11 @@ class Operator(StrEnum):
IN = "IN"
CONTAINS = "@>"
REGEX_MATCHES = "~"
LIKE = "LIKE"

@property
def comparison_type(self) -> ComparisonType:
if self == Operator.REGEX_MATCHES:
if self == Operator.REGEX_MATCHES or self == Operator.LIKE:
return ComparisonType.TEXT
return ComparisonType.JSONB

4 changes: 4 additions & 0 deletions src/logicblocks/event/processing/broker/factories.py
@@ -6,6 +6,7 @@
from psycopg_pool import AsyncConnectionPool

from logicblocks.event.persistence.postgres import ConnectionSettings
from logicblocks.event.sources import EventSourcePartitioner
from logicblocks.event.store import (
EventStorageAdapter,
)
@@ -60,13 +61,15 @@ class _PostgresEventBrokerStorageType(EventBrokerStorageType): ...
class InMemoryDistributedBrokerParams(TypedDict):
settings: DistributedEventBrokerSettings
adapter: EventStorageAdapter
partitioner: NotRequired[EventSourcePartitioner | None]


class PostgresDistributedBrokerParams(TypedDict):
connection_settings: ConnectionSettings
connection_pool: AsyncConnectionPool[AsyncConnection]
settings: DistributedEventBrokerSettings
adapter: NotRequired[EventStorageAdapter | None]
partitioner: NotRequired[EventSourcePartitioner | None]


class InMemorySingletonBrokerParams(TypedDict):
@@ -86,6 +89,7 @@ class CombinedBrokerParams(TypedDict, total=False):
connection_settings: ConnectionSettings
connection_pool: AsyncConnectionPool[AsyncConnection]
adapter: NotRequired[EventStorageAdapter | None]
partitioner: NotRequired[EventSourcePartitioner | None]


@overload