This issue focuses on researching commercial large language models (LLMs) to understand their:
- Energy consumption (training and inference)
- Carbon footprint
- Water footprint (a rough sketch of how the energy, carbon, and water figures relate follows this list)
- Accuracy/performance benchmarks
- Hosting details (e.g., Azure, AWS, Anthropic Cloud)
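
For a shared frame of reference while collecting numbers, here is a minimal back-of-envelope sketch of how the three environmental metrics connect: facility energy is IT energy scaled by PUE, carbon is energy times grid carbon intensity, and water is energy times WUE. All constants below are illustrative assumptions, not measured values for any specific model or data center.

```python
def estimate_footprints(energy_mwh: float,
                        carbon_intensity_kg_per_kwh: float = 0.4,  # assumed grid average
                        pue: float = 1.2,                          # assumed power usage effectiveness
                        wue_l_per_kwh: float = 1.8):               # assumed water usage effectiveness
    """Convert IT energy use (MWh) into rough facility energy, CO2e, and water estimates."""
    it_energy_kwh = energy_mwh * 1000
    facility_energy_kwh = it_energy_kwh * pue            # overhead for cooling and power delivery
    co2e_kg = facility_energy_kwh * carbon_intensity_kg_per_kwh
    water_liters = facility_energy_kwh * wue_l_per_kwh   # on-site cooling water only
    return facility_energy_kwh, co2e_kg, water_liters

if __name__ == "__main__":
    # Hypothetical 1,000 MWh training run, purely for illustration
    energy, co2e, water = estimate_footprints(1000)
    print(f"facility energy: {energy:,.0f} kWh, CO2e: {co2e:,.0f} kg, water: {water:,.0f} L")
```

The point of the sketch is that any energy figure we find can be translated into carbon and water estimates only if we also record the provider's PUE, WUE, and grid carbon intensity, which ties directly into the hosting-details item above.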