refactor: Improve skimming and metadata code organization, naming, and S3 support #8
MoAly98 wants to merge 47 commits into main from maly-demo
Conversation
- Add Dataset dataclass to encapsulate logical datasets across multiple directories
- Support multiple directories with corresponding cross-sections per dataset
- Always create separate fileset entries for multi-directory datasets
- Histograms naturally accumulate during analysis (no explicit aggregation needed)
- Update metadata extraction to handle directory/cross-section mapping
- Update skimming to populate Dataset.events with per-directory metadata
- Update analysis pipeline to process Dataset objects instead of dict
- Add all CMS datasets to skim_demo.py config with cross-section extraction helper

🤖 Generated with [Claude Code](https://claude.com/claude-code)
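A minimal sketch of what the Dataset dataclass described above could look like. Only the `events` field and the directory/cross-section pairing come from the commit message; the remaining field names are assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Dataset:
    """Hypothetical sketch of the Dataset dataclass from the commit above."""
    name: str                    # logical dataset name (assumed field)
    directories: list[str]       # one or more input directories
    cross_sections: list[float]  # cross-section corresponding to each directory
    # Per-directory metadata, populated during skimming (per the commit message).
    events: dict[str, Any] = field(default_factory=dict)
```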
- Update skimming cells to use Dataset objects instead of fileset dict
- Update analysis cells to iterate over Dataset objects
- Update output display to show Dataset structure with splits

🤖 Generated with [Claude Code](https://claude.com/claude-code)
2018 has runs A,B,C,D while 2016/2017 have B,C,D,E,F
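To make that distinction concrete, a hypothetical mapping (the actual configuration layout is not shown in this PR) could encode the eras per year:

```python
# Hypothetical encoding of the per-year run eras noted in the commit above;
# the variable name and structure are assumptions, only the era letters are
# taken from the commit message.
RUN_ERAS = {
    "2016": ["B", "C", "D", "E", "F"],
    "2017": ["B", "C", "D", "E", "F"],
    "2018": ["A", "B", "C", "D"],
}
```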
…f files written on coffea casa (no s3 integration)
…tocols and format
Let's make sure we squash when merging as there are some fairly big files in the history here.
Overview
This PR refactors the skimming and metadata extraction subsystems to improve code maintainability, discoverability, and user experience. The changes focus on clearer naming, better documentation, logical code organization, and robust S3 storage support for distributed processing.
Key Improvements
1. Naming & Documentation
2. Code Organization
- Reorganized `utils/skimming.py` (1094 lines) into 7 logical sections with clear headers

3. S3 Storage Support
- `WorkerEval` class for worker-side environment variable evaluation
- `_resolve_lazy_values()` for recursive lazy evaluation
- S3 storage configuration in `example_cms/configs/skim.py`

4. Interactive Demo
- `demo_workflow.ipynb` with full workflow demonstration

Breaking Changes
Function Renamings
`utils/metadata_extractor.py`:

- `_parse_dataset()` → `parse_dataset_key()`
- `summarise_nanoaods()` → `summarize_event_counts()`

`utils/skimming.py`:

- `workitem_analysis()` → `process_workitem()`
- `reduce_results()` → `merge_results()`
- `_build_output_suffix()` → `_build_output_path()`
- `process_workitems_with_skimming()` → `process_and_load_events()`

New Constants
Files Changed
Core utilities:
- `utils/schema.py` - Added `WorkerEval` class
- `utils/skimming.py` - Complete reorganization + lazy evaluation support
- `utils/metadata_extractor.py` - Improved naming and documentation
- `utils/datasets.py` - Minor updates for consistency

Configuration:
- `example_cms/configs/skim.py` - New file with S3 storage configuration
- `example_cms/configs/configuration.py` - New consolidated config
- `example_opendata/configs/*.py` - Updated configs for opendata example

Documentation:
- `demo_workflow.ipynb` - New interactive demonstration notebook
- `README.md` - Updated to reflect new structure

Entry points:
- `analysis.py` - Updated to use new function names
- `dev/dev_test_skimming*.py` - Updated to use new function names

Migration Guide
For Users
Update function calls in your code:
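The original snippet is not reproduced on this page, so the following is an illustrative before/after based on the renaming table above; the call sites and argument names (`fileset`, `workitems`) are placeholders.

```python
# Before: old names (placeholder call sites)
from utils.metadata_extractor import summarise_nanoaods
from utils.skimming import process_workitems_with_skimming

summary = summarise_nanoaods(fileset)
events = process_workitems_with_skimming(workitems)

# After: new names
from utils.metadata_extractor import summarize_event_counts
from utils.skimming import process_and_load_events

summary = summarize_event_counts(fileset)
events = process_and_load_events(workitems)
```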
For S3 Storage
Configure worker-side credentials using `WorkerEval`:
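The PR's own snippet is not shown on this page; the sketch below assumes `WorkerEval` wraps a zero-argument callable that `_resolve_lazy_values()` evaluates on the worker, and that the credentials feed fsspec-style S3 storage options. The endpoint URL is a placeholder.

```python
import os

from utils.schema import WorkerEval

# Hypothetical usage: each WorkerEval defers evaluation to the worker process,
# so credentials are read from the worker's environment at task execution
# time rather than being captured on the client.
storage_options = {
    "key": WorkerEval(lambda: os.environ["AWS_ACCESS_KEY_ID"]),
    "secret": WorkerEval(lambda: os.environ["AWS_SECRET_ACCESS_KEY"]),
    "client_kwargs": {"endpoint_url": "https://s3.example.org"},  # placeholder
}
```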