Benchmarking large language models for short answer grading in a fine-grained, multi-subject, and human-aligned setting.
Updated May 15, 2025 · Python
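A minimal sketch of what such a human-aligned short answer grading setup can look like: a grader assigns each answer a score (here a simple word-overlap placeholder stands in for the LLM call), and alignment is measured as correlation with human grades. The item schema, 0-5 scale, and `llm_grade` helper are illustrative assumptions, not this repository's actual interface.

```python
# Illustrative sketch only: the item schema, 0-5 scale, and llm_grade helper
# are assumptions for demonstration, not the repository's API.
from scipy.stats import pearsonr


def llm_grade(question: str, reference: str, answer: str) -> float:
    """Stand-in for an LLM grading call: scores by word overlap with the
    reference answer on a 0-5 scale."""
    ref = set(reference.lower().split())
    ans = set(answer.lower().split())
    return 5.0 * len(ref & ans) / max(len(ref), 1)


# Each item pairs a short answer with a human-assigned score (hypothetical data).
items = [
    {"question": "What causes tides?",
     "reference": "the gravitational pull of the moon",
     "answer": "the moon's gravitational pull on the oceans", "human_score": 5.0},
    {"question": "What causes tides?",
     "reference": "the gravitational pull of the moon",
     "answer": "the wind pushing water toward the shore", "human_score": 1.0},
    {"question": "Define photosynthesis.",
     "reference": "plants convert light into chemical energy",
     "answer": "plants turn light into chemical energy", "human_score": 4.5},
]

model_scores = [llm_grade(x["question"], x["reference"], x["answer"]) for x in items]
human_scores = [x["human_score"] for x in items]

# "Human-aligned" here means agreement with human graders, e.g. Pearson correlation.
r, _ = pearsonr(model_scores, human_scores)
print(f"Pearson r vs. human graders: {r:.3f}")
```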
A clinical-trial application for benchmark evaluation of AI responses in multi-turn mental health conversations. It helps users understand AI interaction patterns and work through personal mental health concerns with therapeutic AI assistance.