This working group started as a result of discussions at Supercomputing'23 and spans HPC centers both in academia and at national laboratories. The main goals for this working group are to:
- Share best practices, experiences, and ideas,
- Share upcoming user training and outreach events,
- Share collaboration opportunities (e.g., on training series),
- Share training materials (e.g., to avoid duplication), and
- Explore platforms for sharing with the broader community (e.g., HPC-ED).
There are several ways you can get involved:
- Join the Slack community
- Participate in monthly meetings
- Share your training events and materials
If you have any questions or would like to contribute, please feel free to open an issue or send a pull request.
John K. Holmen @jholmen
2025-09-23
- Discussed training opportunities
  - All ready for ParaView training
  - CMake training pending a decision on funding
  - LaTeX training available as part of an Overleaf subscription
- Discussed the BPHTE25 paper submission
  - Full paper submitted
  - Initial slides submitted
  - Edits welcome to finalize slides
2025-08-26
- Discussed training opportunities
  - Advertisement page live for ParaView training
  - Who can contribute funds for CMake training?
  - Who is interested in Chapel training?
- Discussed the BPHTE25 paper submission
  - Extended abstract submitted
  - Full editing passes needed before the paper submission
- Discussed public presence
  - Host a BoF at SC or ISC?
  - Host a training collaborator matching event?
- Discussed HPC Carpentry
  - Shared experiences with past events
  - Discussed training models and certification
- Shared resources
2025-07-22
- Did not meet
2025-06-24
- Discussed user vetting
  - How to handle walk-in training events?
  - Physical tokens or proxy accounts?
- Discussed training opportunities
  - How to split the CMake training cost?
  - Discussion to plan ParaView training soon
  - Potential opportunity for Chapel training
- Discussed the BPHTE25 paper submission
  - Center-specific drafts started
  - Cross-center collaborations next
2025-05-27
- Discussed training advertisements
  - Current Slack-based approach works well for centers
  - Cross-center training calendar not needed at this time
  - Targeted event-specific announcements bring the most registrants
- Discussed training opportunities
  - Cross-center CMake training discussions in progress
  - Potential opportunity for ParaView training
- Discussed a BPHTE25 paper submission
  - https://sighpceducation.acm.org/events/bphte25cfp/
  - Highlight individual and collaborative efforts across centers
  - Highlight challenges and lessons learned through the group
  - Highlight other collaborative efforts in HPC education and training
2025-04-22
- Discussed a USRSE'25 short talk submission
  - https://us-rse.org/usrse25/
  - Highlight challenges and lessons learned through the group
  - Highlight ways we've found to collaborate
2025-03-25
- Did not meet
2025-02-25
- Discussed ways to store training materials
  - Box, Dropbox, GitHub, Google Drive
  - No preference; let contributors decide when linking materials
- Discussed updating last year's training spreadsheet
  - Easy way to stay up to date on each other's efforts
  - Continue to maintain it as a Google Sheet?
  - Document it on GitHub instead?
- Discussed HPC-ED for sharing training materials
  - https://hpc-ed.github.io/
  - Federated repository with many ways to contribute
  - Already maintain a Google Sheet; update it for HPC-ED?
- Discussed monthly meeting notes
  - Cumbersome to follow through email
  - Document on GitHub instead
2025-01-28
- Discussed common goals
  - Key goal to stay up to date on each other's efforts
  - Set group goals for 2025
- Discussed use of a GitHub repository to share training materials
  - Provides a linkable, centralized location
  - Eases collaborative development
  - What does material licensing look like?
- Discussed creation of a GitHub organization for the group
  - Provides a referenceable public presence
  - Create a working group repository under the OLCF organization?
  - Yes, create one similar to the HPC System Test Working Group
- Discussed upcoming events across centers
  - Cornell: Scientific Computing Training Series
  - NERSC: Deep Learning at Scale Training
  - OLCF: New User Training
- Discussed how training allocations work across centers
  - Training tokens vs. traditional logins