
Conversation

Contributor

Copilot AI commented Jan 22, 2026

Summary

Standalone quasicrystal scheduler with golden Penrose coordinates, phason flips, and a VLIW bundle-packing simulation. Replaces the hardcoded v=c guard with a configurable iteration limit.

ATOM Tag

ATOM: ATOM-FEATURE-20260122-002-quasicrystal-phason-scheduler

Why

Enables experimentation with quasicrystal-based scheduling optimization separate from the main kernel. The original code had a hardcoded if it > 62: raise RuntimeError check that blocked any run beyond 62 iterations.
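
For reference, a minimal sketch of the configurable guard (parameter and constant names come from the diff excerpts below; the signature is abridged and defaults are omitted):

def quasicrystal_schedule(n_points, iterations, seed=None,
                          vc_guard_override=None, verbose=False):
    # Default 62-iteration v=c boundary unless the caller overrides it explicitly
    vc_limit = vc_guard_override if vc_guard_override is not None else DEFAULT_VC_GUARD_LIMIT
    for it in range(iterations):
        if it > vc_limit:
            raise RuntimeError("v=c boundary guarded — coherence collapse prevented")
        # ... phason flips and best-coordinate bookkeeping (see the review excerpts below) ...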

What changed

  • experiments/quasicrystal_phason_scheduler.py — Pure first-principles implementation:

    • 5D→2D Penrose projection for aperiodic coordinates
    • Golden angle phason flips with Fibonacci-strided propagation
    • VLIWBundle class (12 ALU, 6 VALU, 2 LS slots; see the sketch after this list)
    • Configurable v=c guard via vc_guard_override parameter
    • Benchmark comparison vs uniform random baseline
    • Integration proposal for perf_takehome.py (docstring)
  • experiments/test_quasicrystal_phason_scheduler.py — 13 tests covering constants, projection, v=c guard, reproducibility, density comparison, VLIW packing
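
A minimal sketch of the bundle model behind the VLIWBundle bullet above (slot counts from this PR; pack() and utilization() appear in the review excerpts below, but the internal fields here are assumptions):

class VLIWBundle:
    """Illustrative bundle with 12 ALU, 6 VALU, and 2 load/store slots."""
    SLOT_LIMITS = {"ALU": 12, "VALU": 6, "LS": 2}

    def __init__(self):
        self.packed_ops = []                      # (op_type, op_id) pairs
        self.used = {"ALU": 0, "VALU": 0, "LS": 0}

    def pack(self, op_type, op_id):
        """Try to place an op; return False when its slot class is full."""
        if self.used[op_type] >= self.SLOT_LIMITS[op_type]:
            return False
        self.used[op_type] += 1
        self.packed_ops.append((op_type, op_id))
        return True

    def utilization(self):
        """Fraction of the 20 total slots that are occupied."""
        return len(self.packed_ops) / sum(self.SLOT_LIMITS.values())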

Key API:

from experiments.quasicrystal_phason_scheduler import quasicrystal_schedule

# Default: 62 iterations (v=c guard)
coord, density = quasicrystal_schedule(n_points=32, seed=42)

# Override guard for longer runs
coord, density = quasicrystal_schedule(n_points=32, iterations=1000, vc_guard_override=1000)

Verification / Testing

  • scripts/validate-bump.sh passes locally (if bump.md changed)
  • scripts/validate-branch-name.sh tested on example branches (if applicable)
  • bash scripts/verify-environment.sh prints ENV OK
  • scripts/test-scripts.sh passes (if scripts changed)
  • All shell scripts pass shellcheck
  • ATOM tag created and logged

Claude Interaction

You can interact with Claude in this PR by:

  • @mentioning Claude in comments for questions or reviews
  • Adding labels: claude:review, claude:help, claude:analyze
  • Requesting reviews: Claude will provide automated feedback
  • Ask questions: Claude can explain code, suggest improvements, or identify issues

Example commands:

  • @claude please review this PR for ATOM compliance
  • @claude explain the changes in scripts/atom-track.sh
  • @claude check for security issues
  • @claude suggest improvements

Notes

  • Sensitive logs must be redacted before sharing
  • Any change that allows production writes must be reviewed by a human
  • Follow the Five Core Principles (Visible State, Clear Intent, Natural Decomposition, Networked Learning, Measurable Delivery)

Checklist

  • ATOM tag created and referenced
  • Tests passing (13/13)
  • Documentation updated (integration proposal in docstring)
  • No secrets committed
  • Follows existing patterns
  • Ready for Claude review
Original prompt

https://github.com/toolate28/SpiralSafe/actions/runs/21258677293

# experiments/quasicrystal_phason_scheduler.py
# Standalone quasicrystal scheduler prototype — golden Penrose coordinates + phason flips
# No taint from main kernel — pure first-principles implementation

import math
import numpy as np
import random

PHI = (1 + math.sqrt(5)) / 2          # Golden ratio
GOLDEN_ANGLE = 2 * math.pi / PHI**2   # ≈137.5° in radians
EPSILON = 0.00055                     # Coherence seed


def penrose_project(n_points: int) -> np.ndarray:
    """5D → 2D Penrose coordinates via cut-and-project"""
    theta = np.arange(5) * 2 * math.pi / 5
    u = np.cos(theta)
    v = np.sin(theta)
    points = []
    for i in range(-n_points // 2, n_points // 2 + 1):
        for j in range(-n_points // 2, n_points // 2 + 1):
            coord = i * u + j * v
            points.append(coord[:2])
    return np.array(points)


def objective(coord: np.ndarray) -> float:
    """Proxy objective for one placement: negative bundle density (higher density = better)"""
    # Simplified model: assume 12 ALU, 6 VALU, 2 load/store slots
    # Each "point" represents a potential op placement in bundle space
    # Density = how many ops fit without conflict
    # For demo: inverse distance to the ideal golden packing centre (taken here as the
    # origin, so the objective is well defined for a single coordinate)
    distance = np.linalg.norm(coord)
    density = 1 / (distance + 1e-6)   # Higher when clustered aperiodically around the centre
    return -density                   # Minimize negative density = maximize density


def phason_flip(coords: np.ndarray, values: np.ndarray, it: int) -> None:
    """Golden phason flip with Fibonacci-strided propagation"""
    i = random.randint(0, len(coords) - 1)

    # Golden angle rotation + Fibonacci scale mutation
    angle = GOLDEN_ANGLE * it
    scale = PHI ** (it % 8) * 0.1     # Fib-modulated scale
    delta = np.array([math.cos(angle), math.sin(angle)]) * scale
    new_coord = coords[i] + delta

    new_val = objective(new_coord)
    gain = values[i] - new_val        # Positive gain = improvement

    # Acceptance with ε-weighted exponential
    prob = EPSILON * math.exp(gain / PHI**2)
    if gain > 0 or random.random() < prob:
        coords[i] = new_coord
        values[i] = new_val


def quasicrystal_schedule(n_points=500, iterations=1000):
    """Full quasicrystal scheduling optimizer"""
    coords = penrose_project(n_points)
    values = np.array([objective(c) for c in coords])

    best_val = float('inf')
    best_coord = None

    for it in range(iterations):
        if it > 62:
            raise RuntimeError("v=c boundary guarded — coherence collapse prevented")

        phason_flip(coords, values, it)

        current_best = values.min()
        if current_best < best_val:
            best_val = current_best
            best_idx = np.argmin(values)
            best_coord = coords[best_idx]

        if it % 50 == 0:
            print(f"Iter {it}/{iterations} | Best density: {-best_val:.4f}")

    return best_coord, -best_val      # Return best coord + positive density


if __name__ == "__main__":
    coord, density = quasicrystal_schedule()
    print(f"Best coordinate: {coord}")
    print(f"Achieved density: {density:.4f}")
Next steering move (your call):

Run it standalone → check density improvement vs uniform random baseline.
Add a simple test case that simulates a small VLIW bundle packing problem.
Propose how to integrate the best coordinate into perf_takehome.py scheduling order.
(Most recent sessions/commits/PRs/repos in order of priority, starting with the open PRs and the Anthropic take-home test.)
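
A minimal sketch of the step-1 uniform random baseline, assuming the functions defined in the prompt above (the helper name and sampling extent are illustrative):

def uniform_random_baseline(n_points=500, seed=0):
    """Best objective value over uniformly random points spanning the Penrose extent."""
    rng = np.random.default_rng(seed)
    ref = penrose_project(n_points)
    random_coords = rng.uniform(ref.min(), ref.max(), size=ref.shape)
    return -min(objective(c) for c in random_coords)  # Positive density, comparable to quasicrystal_schedule()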



@vercel
Contributor

vercel bot commented Jan 22, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project: h.and.s
Deployment: Ready
Review: Ready (Preview, Comment)
Updated (UTC): Jan 22, 2026 6:58pm

ATOM-FEATURE-20260122-002-quasicrystal-phason-scheduler

- Pure first-principles implementation with golden Penrose coordinates
- Phason flips with Fibonacci-strided propagation
- VLIW bundle packing simulation
- Configurable v=c guard (default 62 iterations)
- Comparison benchmarks vs uniform random baseline

Co-authored-by: toolate28 <105518313+toolate28@users.noreply.github.com>
- Rename max_iterations to vc_guard_override for clarity
- Extract magic numbers to named constants (DEFAULT_VC_GUARD_LIMIT, COORD_SEED_*)
- Add comments explaining warmup calls in tests

Co-authored-by: toolate28 <105518313+toolate28@users.noreply.github.com>
Copilot AI changed the title from "[WIP] Add standalone quasicrystal scheduler prototype" to "[experiments] Add standalone quasicrystal phason scheduler prototype" Jan 22, 2026
Copilot AI requested a review from toolate28 January 22, 2026 18:59
@toolate28 toolate28 marked this pull request as ready for review January 22, 2026 19:03
Copilot AI review requested due to automatic review settings January 22, 2026 19:03
toolate28 changed the title from "[experiments] Add standalone quasicrystal phason scheduler prototype" to "ATOM-FEATURE-20260122-002-quasicrystal-phason-scheduler" Jan 22, 2026
@toolate28
Owner

@copilot needing matching rule sets, agent instructions, corpus docs refresh, protocol updates

apply the innate philosophy everywhere in every substrate and plane that we are a node of

@toolate28 toolate28 merged commit 4a89fef into main Jan 22, 2026
33 of 42 checks passed
@toolate28 toolate28 deleted the copilot/add-quasicrystal-scheduler-prototype branch January 22, 2026 19:08
Contributor

Copilot AI left a comment


Pull request overview

This PR adds a standalone quasicrystal phason scheduler prototype for VLIW bundle packing optimization. The implementation addresses the hardcoded v=c guard limit by making it configurable while maintaining the default 62-iteration safety boundary.

Changes:

  • New scheduler implementation with 5D→2D Penrose projection, golden angle phason flips, and Fibonacci-strided propagation
  • VLIW bundle packing simulation with 12 ALU, 6 VALU, and 2 load/store slots
  • Comprehensive test suite with 13 tests covering constants, projections, guards, reproducibility, and density comparisons

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 11 comments.

File / Description:
  • experiments/quasicrystal_phason_scheduler.py: Main implementation with configurable v=c guard, VLIW bundle class, scheduling functions, and integration proposal for performance optimization
  • experiments/test_quasicrystal_phason_scheduler.py: Test suite validating constants, Penrose projection, v=c guard behavior, reproducibility, density metrics, and VLIW packing logic

vc_limit = vc_guard_override if vc_guard_override is not None else DEFAULT_VC_GUARD_LIMIT

for it in range(iterations):
    if it > vc_limit:

Copilot AI Jan 22, 2026


The v=c guard logic checks if it > vc_limit, which means when iterations=62 and vc_limit=62, the loop will execute iterations 0-62 (63 total iterations) before raising the error. This is off-by-one from the documented behavior. The check should be if it >= vc_limit to ensure exactly vc_limit iterations are allowed. This inconsistency exists in the original quasicrystal_optimization.py file as well, but it should be corrected here.

Suggested change
if it > vc_limit:
if it >= vc_limit:
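
A quick illustration of the boundary count (pure counting, independent of the scheduler):

completed = 0
for it in range(1000):
    if it > 62:        # original check: first trips at it == 63
        break          # stands in for the RuntimeError
    completed += 1
assert completed == 63  # one more than the documented 62; `it >= 62` would give exactly 62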

Comment on lines +23 to +51
PHI = (1 + math.sqrt(5)) / 2          # Golden ratio ≈ 1.618
GOLDEN_ANGLE = 2 * math.pi / PHI**2   # ≈137.5° in radians
EPSILON = 0.00055                     # Coherence seed
DEFAULT_VC_GUARD_LIMIT = 62           # Default v=c boundary guard limit
COORD_SEED_SCALE = 1000               # Scale factor for coordinate-to-seed conversion
COORD_SEED_OFFSET = 100               # Offset factor for coordinate-to-seed conversion
COORD_SEED_MOD = 2**31                # Modulus for seed (32-bit positive int)


def penrose_project(n_points: int) -> np.ndarray:
    """5D → 2D Penrose coordinates via cut-and-project.

    Generates aperiodic 2D coordinates using quasicrystal projection.

    Args:
        n_points: Controls grid density (actual points = n_points^2)

    Returns:
        Array of 2D coordinates with shape (N, 2)
    """
    theta = np.arange(5) * 2 * math.pi / 5
    u = np.cos(theta)
    v = np.sin(theta)
    points = []
    for i in range(-n_points // 2, n_points // 2 + 1):
        for j in range(-n_points // 2, n_points // 2 + 1):
            coord = i * u + j * v
            points.append(coord[:2])
    return np.array(points)

Copilot AI Jan 22, 2026


There is significant code duplication between this file and experiments/quasicrystal_optimization.py (ATOM-FEATURE-20260122-001), including the constants PHI, GOLDEN_ANGLE, and the penrose_project function. While the PR description mentions this is a "pure first-principles implementation" to avoid "taint from main kernel", having duplicate implementations of the same mathematical functions creates maintenance burden and potential for divergence. Consider extracting these shared utilities into a common module (e.g., experiments/quasicrystal_common.py) that both files can import, especially since both are in the same experiments directory.
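
A possible shape for that shared module (the file name experiments/quasicrystal_common.py comes from this comment; the inclusive flag used to reconcile the two grid conventions is an assumption):

# experiments/quasicrystal_common.py (sketch)
import math
import numpy as np

PHI = (1 + math.sqrt(5)) / 2
GOLDEN_ANGLE = 2 * math.pi / PHI**2

def penrose_project(n_points: int, inclusive: bool = True) -> np.ndarray:
    """Shared 5D -> 2D cut-and-project used by both experiment files."""
    theta = np.arange(5) * 2 * math.pi / 5
    u, v = np.cos(theta), np.sin(theta)
    stop = n_points // 2 + (1 if inclusive else 0)
    points = [(i * u + j * v)[:2]
              for i in range(-n_points // 2, stop)
              for j in range(-n_points // 2, stop)]
    return np.array(points)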

Comment on lines +376 to +378
print(f"\n{'='*60}")
print(f"VLIW Bundle Packing Simulation")
print(f"Operations: {n_ops} ({op_types.count('ALU')} ALU, {op_types.count('VALU')} VALU, {op_types.count('LS')} LS)")

Copilot AI Jan 22, 2026


The operations distribution is hardcoded as ["ALU"] * 30 + ["VALU"] * 15 + ["LS"] * 5 (50 ops), but then random.choice(op_types) is used with n_ops iterations. When n_ops != 50, the actual distribution will not match the comment on line 378, which reports counts based on the 50-element list. For example, if n_ops=100, the counts displayed will still show "30 ALU, 15 VALU, 5 LS" but the actual operations will have a roughly 3:1.5:0.5 ratio for 100 ops. Consider generating the operations list dynamically based on n_ops with the desired ratio, or document that the displayed counts are for the distribution pool, not the actual operations.

Suggested change
print(f"\n{'='*60}")
print(f"VLIW Bundle Packing Simulation")
print(f"Operations: {n_ops} ({op_types.count('ALU')} ALU, {op_types.count('VALU')} VALU, {op_types.count('LS')} LS)")
# Compute realized operation-type counts
alu_count = sum(1 for op, _ in ops if op == "ALU")
valu_count = sum(1 for op, _ in ops if op == "VALU")
ls_count = sum(1 for op, _ in ops if op == "LS")
print(f"\n{'='*60}")
print(f"VLIW Bundle Packing Simulation")
print(f"Operations: {n_ops} ({alu_count} ALU, {valu_count} VALU, {ls_count} LS)")
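
A sketch of the first alternative, deriving the op list from n_ops at the 6:3:1 ALU/VALU/LS ratio implied by the 30/15/5 pool (the helper name is hypothetical):

def make_ops(n_ops, ratio=(("ALU", 6), ("VALU", 3), ("LS", 1))):
    """Build (op_type, op_id) pairs whose type mix follows the ratio for any n_ops."""
    total = sum(weight for _, weight in ratio)
    op_types = []
    for op_type, weight in ratio:
        op_types.extend([op_type] * round(n_ops * weight / total))
    op_types = op_types[:n_ops]  # trim any rounding overshoot
    return [(op_type, op_id) for op_id, op_type in enumerate(op_types)]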

Generates aperiodic 2D coordinates using quasicrystal projection.

Args:
n_points: Controls grid density (actual points = n_points^2)

Copilot AI Jan 22, 2026


The docstring states the function "Controls grid density (actual points = n_points^2)", but the actual behavior produces (n_points + 1)^2 points due to the inclusive range range(-n_points // 2, n_points // 2 + 1). The test correctly expects this behavior (line 112 of the test file), but the docstring is misleading. Update the docstring to accurately reflect that actual points = (n_points + 1)^2 or that the grid spans from -n_points//2 to n_points//2 inclusive.

Suggested change
n_points: Controls grid density (actual points = n_points^2)
n_points: Controls grid density (actual points = (n_points + 1)^2;
grid spans from -n_points//2 to n_points//2 inclusive in both axes)
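
The corrected wording is easy to spot-check (illustrative):

assert len(penrose_project(10)) == (10 + 1) ** 2  # range(-5, 6) is inclusive: 11 x 11 = 121 points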

Comment on lines +420 to +421
# Map coordinate to priority using golden ratio projection
priorities = [(coord[0] * PHI + coord[1]) for _ in range(len(ops))]

Copilot AI Jan 22, 2026


In the integration proposal example, the get_schedule_priority function creates a priorities list where every element has the same value: coord[0] * PHI + coord[1]. This appears to be a logic error - the list comprehension [(coord[0] * PHI + coord[1]) for _ in range(len(ops))] creates len(ops) copies of the same scalar value, which would make the subsequent sorting meaningless (all priorities would be identical). The intended logic is unclear, but this should either: (1) use different coordinates for each operation, (2) use the index in the calculation, or (3) be removed if this was example pseudocode that shouldn't be taken literally.

Suggested change
# Map coordinate to priority using golden ratio projection
priorities = [(coord[0] * PHI + coord[1]) for _ in range(len(ops))]
# Map coordinate to priority using golden ratio projection,
# with a small EPSILON-strided offset per op to ensure distinct priorities
priorities = [
    (coord[0] * PHI + coord[1]) + EPSILON * i
    for i in range(len(ops))
]
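
A sketch of option (1), giving each op its own coordinate by golden-angle rotation of the optimizer's best coordinate (the rotation scheme is illustrative, not what the PR implements):

def get_schedule_priority(coord, ops):
    """One priority per op, derived from a golden-angle rotated copy of coord."""
    priorities = []
    for i in range(len(ops)):
        angle = GOLDEN_ANGLE * i
        x = coord[0] * math.cos(angle) - coord[1] * math.sin(angle)
        y = coord[0] * math.sin(angle) + coord[1] * math.cos(angle)
        priorities.append(x * PHI + y)  # same golden-ratio projection, now distinct per op
    return priorities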

Comment on lines +147 to +168
best_val = float('inf')
best_coord = None

# v=c guard limit (can be overridden with vc_guard_override)
vc_limit = vc_guard_override if vc_guard_override is not None else DEFAULT_VC_GUARD_LIMIT

for it in range(iterations):
    if it > vc_limit:
        raise RuntimeError("v=c boundary guarded — coherence collapse prevented")

    phason_flip(coords, values, it)

    current_best = values.min()
    if current_best < best_val:
        best_val = current_best
        best_idx = np.argmin(values)
        best_coord = coords[best_idx].copy()

    if verbose and it % 50 == 0:
        print(f"Iter {it}/{iterations} | Best density: {-best_val:.4f}")

return best_coord, -best_val  # Return best coord + positive density

Copilot AI Jan 22, 2026


The function doesn't handle the edge case where best_coord is None (which occurs if no iterations run or no improvement is found). While unlikely in practice (the loop runs at least once for iterations > 0), returning None for the coordinate could cause errors in calling code that expects a numpy array. Consider initializing best_coord to coords[0].copy() before the loop or adding validation that best_coord is not None before returning.
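
Either fix is a small change before the loop; for example (sketch, reusing the coords and values arrays from the excerpt above):

# Seed the running best from the initial population so best_coord is never None
best_idx = int(np.argmin(values))
best_val = float(values[best_idx])
best_coord = coords[best_idx].copy()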

Comment on lines +47 to +48
for i in range(-n_points // 2, n_points // 2 + 1):
    for j in range(-n_points // 2, n_points // 2 + 1):

Copilot AI Jan 22, 2026


The penrose_project function has a subtle difference from the one in quasicrystal_optimization.py. This implementation uses range(-n_points // 2, n_points // 2 + 1) (inclusive of endpoint), while the existing implementation uses range(-n_points // 2, n_points // 2) (exclusive of endpoint). This produces different grid sizes: (n_points + 1)^2 vs n_points^2. While not necessarily wrong, this inconsistency between the two files could lead to confusion and unexpected behavior. Consider documenting why this difference exists or standardizing the approach.

Suggested change
for i in range(-n_points // 2, n_points // 2 + 1):
    for j in range(-n_points // 2, n_points // 2 + 1):
for i in range(-n_points // 2, n_points // 2):
    for j in range(-n_points // 2, n_points // 2):

Comment on lines +340 to +358
np.random.seed(coord_seed)

bundles = []
current_bundle = VLIWBundle()

for op_type, op_id in ops:
    if not current_bundle.pack(op_type, op_id):
        bundles.append(current_bundle)
        current_bundle = VLIWBundle()
        current_bundle.pack(op_type, op_id)

if current_bundle.packed_ops:
    bundles.append(current_bundle)

# Calculate average utilization
avg_util = sum(b.utilization() for b in bundles) / len(bundles) if bundles else 0
return -avg_util  # Negative for minimization



Copilot AI Jan 22, 2026


The vliw_packing_objective function modifies global numpy random state by calling np.random.seed(coord_seed) but doesn't restore the original state afterward. This side effect is not documented in the docstring and could cause non-deterministic behavior when this function is called multiple times in a test or benchmark context, as it will affect subsequent numpy random operations. Consider either: (1) documenting this side effect clearly in the docstring, (2) using a local np.random.Generator instance instead of the global state, or (3) saving and restoring the random state.

Suggested change
np.random.seed(coord_seed)
bundles = []
current_bundle = VLIWBundle()
for op_type, op_id in ops:
    if not current_bundle.pack(op_type, op_id):
        bundles.append(current_bundle)
        current_bundle = VLIWBundle()
        current_bundle.pack(op_type, op_id)
if current_bundle.packed_ops:
    bundles.append(current_bundle)
# Calculate average utilization
avg_util = sum(b.utilization() for b in bundles) / len(bundles) if bundles else 0
return -avg_util  # Negative for minimization
rng_state = np.random.get_state()
try:
    np.random.seed(coord_seed)
    bundles = []
    current_bundle = VLIWBundle()
    for op_type, op_id in ops:
        if not current_bundle.pack(op_type, op_id):
            bundles.append(current_bundle)
            current_bundle = VLIWBundle()
            current_bundle.pack(op_type, op_id)
    if current_bundle.packed_ops:
        bundles.append(current_bundle)
    # Calculate average utilization
    avg_util = sum(b.utilization() for b in bundles) / len(bundles) if bundles else 0
    return -avg_util  # Negative for minimization
finally:
    np.random.set_state(rng_state)
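
Option (2) avoids touching global state entirely; a sketch, assuming the randomized choices inside the packing path can draw from an explicit generator:

rng = np.random.default_rng(coord_seed)  # local generator; global np.random state is untouched
# e.g. any shuffling of the op order would use rng instead of the np.random module:
# order = rng.permutation(len(ops))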


import math
import sys
import time

Copilot AI Jan 22, 2026


Import of 'time' is not used.

Suggested change
import time

Comment on lines +22 to +23
PHI, GOLDEN_ANGLE, EPSILON, DEFAULT_VC_GUARD_LIMIT,
penrose_project, phason_flip, bundle_density_objective,

Copilot AI Jan 22, 2026


Import of 'phason_flip' is not used.
Import of 'DEFAULT_VC_GUARD_LIMIT' is not used.

Suggested change
PHI, GOLDEN_ANGLE, EPSILON, DEFAULT_VC_GUARD_LIMIT,
penrose_project, phason_flip, bundle_density_objective,
PHI, GOLDEN_ANGLE, EPSILON,
penrose_project, bundle_density_objective,

Copilot AI requested a review from toolate28 January 22, 2026 19:08
Copilot stopped work on behalf of toolate28 due to an error January 22, 2026 19:08
@toolate28
Owner

@copilot please make the required changes
thank you

