Commit 7d7753c

poldrack and claude committed
docs: update project tracking files for Week 3 completion
- Updated SCRATCHPAD.md to reflect 100% Week 3 completion
- Updated progress statistics: 38% overall (85/221 tasks)
- Updated next steps to focus on Week 4 CPU backend
- Cleaned up tracking state for completed Data I/O Layer

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
1 parent 9817427 commit 7d7753c

6 files changed: +242 −216 lines changed

SCRATCHPAD.md

Lines changed: 7 additions & 7 deletions
@@ -8,16 +8,16 @@
 ## Project State Summary (2025-08-27 Refresh)
 
 ### Current Status
-**Branch**: dev/week3-data-io
-**Phase**: Week 3 - Data I/O Layer Implementation
-**Overall Progress**: 85/221 tasks completed (38%)
-**Week 3 Progress**: 42/42 tasks completed (100%)
+**Branch**: dev/week3-data-io
+**Phase**: Week 3 - Data I/O Layer Implementation
+**Overall Progress**: 85/221 tasks completed (38%)
+**Week 3 Progress**: 42/42 tasks completed (100%)
 
 ### Current Work in Progress
 **Data I/O Layer Implementation: ✅ WEEK 3 FULLY COMPLETED**
 - **NiftiLoader class COMPLETED** - All functionality implemented and tested
   - Full-featured class with lazy loading, chunking, masking support
-  - Memory optimization features implemented
+  - Memory optimization features implemented
   - Data integrity validation included
   - All validation methods (_check_data_integrity, _validate_dimensions, etc.)
   - 14 tests all passing (100% pass rate), 90.22% code coverage
@@ -62,7 +62,7 @@
 **Week 2 Completion:**
 - **Complete Phase 1 Week 2 core architecture** (fully implemented)
   - Backend abstraction layer with comprehensive ABC interface
-  - Core orchestrator for workflow coordination and pipeline management
+  - Core orchestrator for workflow coordination and pipeline management
   - Logging framework with colored output, progress reporting, and system info
   - Configuration management using Pydantic with TOML/env variable support
   - Error handling framework with hierarchical exceptions and recovery suggestions
@@ -101,7 +101,7 @@
 - ✅ Backend abstraction layer (ABC with is_available, compute_glm)
 - ✅ Core orchestrator (workflow coordination)
 - ✅ Logging framework (colored output, progress reporting)
-- ✅ Configuration management (Pydantic with TOML support)
+- ✅ Configuration management (Pydantic with TOML support)
 - ✅ Error handling framework (hierarchical exceptions)
 - ✅ NIfTI I/O layer (full NiftiLoader implementation)

TASKS.md

Lines changed: 13 additions & 8 deletions
@@ -86,11 +86,16 @@
 - [x] Add utility functions: load_design_matrix, validate_design_matrix, create_contrast_matrix (2025-08-27)
 - [x] Add FSL and SPM format compatibility (2025-08-27)
 - [x] Implement Gram-Schmidt orthogonalization for correlated regressors (2025-08-27)
-- [ ] Implement contrast file loader
-  - [ ] Parse contrast matrices from text files
-  - [ ] Validate contrast compatibility with design
-  - [ ] Support multiple contrasts
-  - [ ] Write unit tests
+- [x] Implement contrast file loader (2025-08-27)
+  - [x] Implement src/accelperm/io/contrast.py with ContrastLoader class (2025-08-27)
+  - [x] Parse contrast matrices from text files (FSL .con, CSV, TXT formats) (2025-08-27)
+  - [x] Validate contrast compatibility with design matrices (2025-08-27)
+  - [x] Support multiple contrast files and batch loading (2025-08-27)
+  - [x] Add contrast validation (rank deficiency, zero contrasts, data integrity) (2025-08-27)
+  - [x] Add format compatibility (FSL, SPM styles) (2025-08-27)
+  - [x] Add programmatic contrast creation (t-contrasts, F-contrasts, polynomial) (2025-08-27)
+  - [x] Write comprehensive unit tests with 21 tests, 100% pass rate (2025-08-27)
+  - [x] Add utility functions: load_contrast_matrix, validate_contrast_compatibility, create_t_contrast, create_f_contrast (2025-08-27)
 - [x] Create output writer module (2025-08-27)
 - [x] Implement src/accelperm/io/output.py with OutputWriter class (2025-08-27)
 - [x] Support statistical map output to NIfTI format (2025-08-27)
@@ -545,12 +550,12 @@
 - Blocked: 0
 - **Progress: 0%**
 
-### Week 3 Progress (Data I/O Layer)
+### Week 3 Progress (Data I/O Layer) - **COMPLETE!** 🎉
 - NIfTI handling: **COMPLETE** (10/10 subtasks)
 - Design matrix loader: **COMPLETE** (10/10 subtasks)
-- Contrast file loader: **PENDING** (0/4 subtasks)
+- Contrast file loader: **COMPLETE** (10/10 subtasks)
 - Output writer module: **COMPLETE** (12/12 subtasks)
-- **Week 3 Progress: 91%** (32/36 subtasks complete)
+- **Week 3 Progress: 100%** (42/42 subtasks complete)
 
 ### Overall Project
 - **Total tasks: 221**
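The text-format contrast parsing completed above (tab- or space-separated rows, with validation that every row has the same number of regressors) can be sketched in isolation. This is a standalone illustration of the parsing logic, not the `ContrastLoader` API itself; `parse_contrast_text` is a hypothetical helper name.

```python
import numpy as np


def parse_contrast_text(text: str) -> np.ndarray:
    """Parse a plain-text contrast matrix (tab- or space-separated rows)."""
    rows: list[list[float]] = []
    for line in text.strip().splitlines():
        line = line.strip()
        if not line:  # skip empty lines
            continue
        # Prefer tab splitting when tabs are present, else whitespace
        values = line.split("\t") if "\t" in line else line.split()
        if rows and len(values) != len(rows[0]):
            raise ValueError("Inconsistent number of regressors in contrast file")
        rows.append([float(v) for v in values])
    return np.array(rows)


# Two t-contrasts over three regressors -> a (2, 3) contrast matrix
matrix = parse_contrast_text("1 -1 0\n0 1 -1\n")
```

A malformed file (e.g. a row with a missing value) raises the same "Inconsistent number of regressors" error that the loader's validation path reports.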

src/accelperm/io/contrast.py

Lines changed: 36 additions & 24 deletions
@@ -42,28 +42,34 @@ def load(
         elif filepath.suffix.lower() in [".txt", ".mat"]:
             # Tab or space-separated format
             # Handle inconsistent row lengths manually
-            with open(filepath, "r") as f:
+            with open(filepath) as f:
                 lines = f.readlines()
-
+
             # Parse lines manually to handle inconsistent lengths
             rows = []
             max_cols = 0
-
+
             for line in lines:
                 line = line.strip()
                 if line:  # Skip empty lines
                     if "\t" in line:
                         values = line.split("\t")
                     else:
                         values = line.split()
-
+
                     # Check for inconsistent number of columns
-                    if self.validate_contrasts and max_cols > 0 and len(values) != max_cols:
-                        raise ValueError("Inconsistent number of regressors in contrast file")
-
+                    if (
+                        self.validate_contrasts
+                        and max_cols > 0
+                        and len(values) != max_cols
+                    ):
+                        raise ValueError(
+                            "Inconsistent number of regressors in contrast file"
+                        )
+
                     max_cols = max(max_cols, len(values))
                     rows.append([float(v) for v in values])
-
+
             # Convert to DataFrame
             data = pd.DataFrame(rows)
         else:
@@ -86,7 +86,9 @@ def load(
         if self.format_style == "fsl":
             contrast_names = [f"C{i+1}" for i in range(contrast_matrix.shape[0])]
         else:
-            contrast_names = [f"contrast_{i+1}" for i in range(contrast_matrix.shape[0])]
+            contrast_names = [
+                f"contrast_{i+1}" for i in range(contrast_matrix.shape[0])
+            ]
 
         # Validate contrasts
         if self.validate_contrasts:
@@ -101,7 +109,9 @@ def load(
         # Check design compatibility if provided
         design_compatible = True
         if design_info is not None:
-            design_compatible, _ = validate_contrast_compatibility(contrast_matrix, design_info)
+            design_compatible, _ = validate_contrast_compatibility(
+                contrast_matrix, design_info
+            )
 
         # Prepare result
         result = {
@@ -148,38 +158,36 @@ def create_standard_contrasts(self, column_names: list[str]) -> dict[str, Any]:
             if col_name.lower() != "intercept":
                 contrast = np.zeros(len(column_names))
                 contrast[i] = 1
-                contrasts["main_effects"].append({
-                    "name": f"{col_name}_effect",
-                    "contrast": contrast
-                })
+                contrasts["main_effects"].append(
+                    {"name": f"{col_name}_effect", "contrast": contrast}
+                )
 
         # Create interaction contrasts (for columns containing 'x' or '_x_')
         for i, col_name in enumerate(column_names):
             if "x" in col_name.lower() or "_x_" in col_name.lower():
                 contrast = np.zeros(len(column_names))
                 contrast[i] = 1
-                contrasts["interactions"].append({
-                    "name": f"{col_name}_interaction",
-                    "contrast": contrast
-                })
+                contrasts["interactions"].append(
+                    {"name": f"{col_name}_interaction", "contrast": contrast}
+                )
 
         return contrasts
 
     def create_polynomial_contrasts(self, n_levels: int) -> dict[str, np.ndarray]:
         """Create polynomial contrasts for ordered factors."""
         contrasts = {}
-
+
         if n_levels >= 2:
             # Linear contrast
             linear = np.linspace(-1, 1, n_levels)
             contrasts["linear"] = linear
-
+
         if n_levels >= 3:
             # Quadratic contrast
             x = np.linspace(-1, 1, n_levels)
             quadratic = x**2 - np.mean(x**2)
             contrasts["quadratic"] = quadratic
-
+
         if n_levels >= 4:
             # Cubic contrast
             cubic = x**3 - np.mean(x**3)
@@ -226,13 +234,17 @@ def validate_contrast_compatibility(
     # For orthogonal designs, contrasts should sum to zero for proper interpretation
     for i, contrast_row in enumerate(contrast_matrix):
         if not np.isclose(np.sum(contrast_row), 0, atol=1e-10):
-            issues.append(f"Contrast {i+1} does not sum to zero (may not be interpretable)")
+            issues.append(
+                f"Contrast {i+1} does not sum to zero (may not be interpretable)"
+            )
 
     is_compatible = len(issues) == 0
     return is_compatible, issues
 
 
-def create_t_contrast(column_names: list[str], contrast_spec: dict[str, float]) -> np.ndarray:
+def create_t_contrast(
+    column_names: list[str], contrast_spec: dict[str, float]
+) -> np.ndarray:
     """Create t-contrast vector from column names and specification."""
     n_regressors = len(column_names)
     contrast = np.zeros(n_regressors)
@@ -258,4 +270,4 @@ def create_f_contrast(
     for i, contrast_spec in enumerate(contrast_specs):
         f_contrast[i, :] = create_t_contrast(column_names, contrast_spec)
 
-    return f_contrast
\ No newline at end of file
+    return f_contrast
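The reformatted `create_t_contrast` above maps regressor names to weights in a zero-filled vector. A minimal self-contained version of the same idea (it mirrors the signature shown in the diff, though the in-repo implementation may differ in details such as error handling):

```python
import numpy as np


def create_t_contrast(
    column_names: list[str], contrast_spec: dict[str, float]
) -> np.ndarray:
    """Build a t-contrast vector: zeros except for the named regressors."""
    contrast = np.zeros(len(column_names))
    for name, weight in contrast_spec.items():
        contrast[column_names.index(name)] = weight
    return contrast


cols = ["intercept", "group_A", "group_B"]
c = create_t_contrast(cols, {"group_A": 1.0, "group_B": -1.0})
# c is [0., 1., -1.] — a group_A > group_B comparison that sums to zero,
# which is exactly the property validate_contrast_compatibility checks for
```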

src/accelperm/io/nifti.py

Lines changed: 23 additions & 28 deletions
@@ -1,7 +1,7 @@
 """NIfTI file I/O operations for AccelPerm."""
 
 from pathlib import Path
-from typing import Any, Dict, Iterator, Optional, Tuple, Union
+from typing import Any
 
 import nibabel as nib
 import numpy as np
@@ -10,16 +10,18 @@
 class NiftiLoader:
     """Loader for NIfTI files with memory optimization and validation."""
 
-    def __init__(self, lazy_loading: bool = False, chunk_size: Optional[int] = None) -> None:
+    def __init__(
+        self, lazy_loading: bool = False, chunk_size: int | None = None
+    ) -> None:
         self.lazy_loading = lazy_loading
         self.chunk_size = chunk_size
 
-    def load(self, filepath: Path, mask: Optional[Path] = None) -> Dict[str, Any]:
+    def load(self, filepath: Path, mask: Path | None = None) -> dict[str, Any]:
         """Load NIfTI file and return data with metadata."""
         try:
             img = nib.load(str(filepath))
         except FileNotFoundError:
-            raise FileNotFoundError(f"NIfTI file not found: {filepath}")
+            raise FileNotFoundError(f"NIfTI file not found: {filepath}") from None
         except Exception as e:
             raise ValueError(f"Invalid NIfTI file: {filepath}") from e
 
@@ -32,7 +34,7 @@ def load(self, filepath: Path, mask: Optional[Path] = None) -> Dict[str, Any]:
         else:
             data = img.get_fdata()
             self._check_data_integrity(data)
-
+
         result = {
             "data": data,
             "affine": img.affine,
@@ -50,41 +52,35 @@ def load(self, filepath: Path, mask: Optional[Path] = None) -> Dict[str, Any]:
             mask_data = mask_img.get_fdata().astype(bool)
             result["mask"] = mask_data
             result["n_voxels"] = np.sum(mask_data)
-
+
             if not self.lazy_loading:
                 # Apply mask to data
-                if data.ndim == 4:
-                    masked_data = data[mask_data, :]
-                else:
-                    masked_data = data[mask_data]
+                masked_data = data[mask_data, :] if data.ndim == 4 else data[mask_data]
                 result["masked_data"] = masked_data
 
         return result
 
-    def load_chunked(self, filepath: Path) -> Dict[str, Any]:
+    def load_chunked(self, filepath: Path) -> dict[str, Any]:
        """Load NIfTI file in chunks for memory efficiency."""
        try:
            img = nib.load(str(filepath))
        except FileNotFoundError:
-            raise FileNotFoundError(f"NIfTI file not found: {filepath}")
+            raise FileNotFoundError(f"NIfTI file not found: {filepath}") from None
        data = img.get_fdata()
-
+
        # Calculate number of chunks based on chunk_size
        total_voxels = np.prod(data.shape[:3])
-        if self.chunk_size is None:
-            chunk_size = 1000
-        else:
-            chunk_size = self.chunk_size
-
+        chunk_size = 1000 if self.chunk_size is None else self.chunk_size
+
        total_chunks = int(np.ceil(total_voxels / chunk_size))
-
+
        def chunk_iterator():
            for i in range(total_chunks):
                start_idx = i * chunk_size
                end_idx = min((i + 1) * chunk_size, total_voxels)
                # Return a chunk of data
                yield data.flat[start_idx:end_idx]
-
+
        return {
            "chunk_iterator": chunk_iterator(),
            "total_chunks": total_chunks,
@@ -110,7 +106,7 @@ def _validate_spatial_match(self, data1: np.ndarray, data2: np.ndarray) -> None:
         raise ValueError("Spatial dimensions do not match")
 
 
-def load_nifti(filepath: Path) -> Tuple[np.ndarray, np.ndarray, Any]:
+def load_nifti(filepath: Path) -> tuple[np.ndarray, np.ndarray, Any]:
     """Load NIfTI file and return data, affine, and header."""
     img = nib.load(str(filepath))
     data = img.get_fdata()
@@ -123,20 +119,19 @@ def save_nifti(data: np.ndarray, affine: np.ndarray, filepath: Path) -> None:
     """Save data as NIfTI file."""
     # Create output directory if it doesn't exist
     filepath.parent.mkdir(parents=True, exist_ok=True)
-
+
     # Create NIfTI image and save
     img = nib.Nifti1Image(data, affine)
     img.to_filename(str(filepath))
 
 
-def validate_nifti_compatibility(img1_info: Dict[str, Any], img2_info: Dict[str, Any]) -> bool:
+def validate_nifti_compatibility(
+    img1_info: dict[str, Any], img2_info: dict[str, Any]
+) -> bool:
     """Validate that two NIfTI images are compatible for processing."""
     # Check spatial dimensions match
     if img1_info["spatial_shape"] != img2_info["spatial_shape"]:
         return False
-
+
     # Check affine matrices are similar (allowing for small floating point differences)
-    if not np.allclose(img1_info["affine"], img2_info["affine"], rtol=1e-6):
-        return False
-
-    return True
+    return np.allclose(img1_info["affine"], img2_info["affine"], rtol=1e-6)
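The chunking arithmetic in `load_chunked` (a default chunk size of 1000 when none is configured, and a chunk count computed by ceiling division over the spatial voxels) can be checked in isolation. The shape below is a made-up example, not data from the project:

```python
import numpy as np

data_shape = (4, 4, 4, 10)  # hypothetical 4D volume: 64 spatial voxels, 10 timepoints
configured_chunk_size = None  # as when NiftiLoader is built without chunk_size

total_voxels = int(np.prod(data_shape[:3]))  # 4 * 4 * 4 = 64
chunk_size = 1000 if configured_chunk_size is None else configured_chunk_size
total_chunks = int(np.ceil(total_voxels / chunk_size))  # ceil(64 / 1000) = 1

# With an explicit chunk_size of 20, the same volume splits into ceil(64 / 20) chunks
total_chunks_small = int(np.ceil(total_voxels / 20))
```

So small volumes collapse into a single chunk under the default, while an explicit `chunk_size` trades fewer voxels per chunk for more iterations of the chunk iterator.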
