An AI ethics reflection tool grounded in global frameworks, with trauma-informed, ADA-conscious, and survivor-safety toggles. Built for product, data, and legal teams. Powered by the REFLECT AI Framework v1.0.


REFLECT AI Framework v1.0


REFLECT AI is a globally grounded, role-based ethical risk reflection framework for teams building and deploying AI systems. It provides role-based prompts, guided by global standards, to support Responsible AI practice. It is trauma-informed, accessibility-conscious, and built for real-world impact.

🔍 What is the REFLECT AI Framework?

The REFLECT AI Framework v1.0 (Responsible Evaluation Framework for Legal, Ethical, Compliance & Trust in AI) is a practical, structured reflection method for product, data, legal, UX, and business teams. It helps teams identify and mitigate ethical risks before launching AI systems, especially in sensitive domains such as healthcare, HR, education, or social services.

Developed by the creator of the REFLECT AI Ethics Checker GPT, this framework includes an optional trauma-informed, ADA-conscious, and survivor-safety reflection mode to help teams account for accessibility, user harm, and psychological risk in emotionally complex settings.

This tool is designed as an internal-use reflection aid, especially for organizations navigating responsible AI practices in environments where user vulnerability, regulatory scrutiny, or ethical ambiguity may arise.

🌍 What Makes This Framework Unique?

Unlike technical “self-reflection” methods for LLMs or robotics, the REFLECT AI Framework is built for human teams designing real-world AI systems—with practical prompts, role-specific outputs, and actionable categories.

It draws from globally recognized standards, including:

  • NIST AI Risk Management Framework (RMF)
  • EU AI Act
  • OECD AI Principles
  • IEEE Ethically Aligned Design

This framework is designed to be empowering, not punitive—focusing on practical reflection, transparency, and repair.

📂 Framework Categories

The REFLECT AI Framework guides reflection across six core categories. If enabled, trauma- and accessibility-aware prompts are embedded within each category to surface additional ethical considerations for vulnerable users or high-impact settings.

| Category | Icon | Description |
| --- | --- | --- |
| Fairness & Bias | 🎯 | Equity across demographics and access |
| Privacy & Consent | 🔐 | Informed consent, data protection |
| Transparency & Explainability | 🔍 | Clarity on how decisions are made |
| Accountability | 👤 | Responsibility for outcomes |
| Legal & Regulatory Risk | ⚖️ | Exposure to laws and compliance gaps |
| Social & Environmental | 🌍 | Societal effects, long-term impact |
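For teams embedding the framework in internal tooling, the six categories can be encoded as a simple data structure. This is a minimal illustrative sketch only; the `Category` class and `REFLECT_CATEGORIES` names are hypothetical and not part of the published framework.

```python
from dataclasses import dataclass

# Illustrative encoding of the six REFLECT categories.
# Class and variable names are hypothetical, not defined by the framework.
@dataclass(frozen=True)
class Category:
    name: str
    icon: str
    description: str

REFLECT_CATEGORIES = [
    Category("Fairness & Bias", "🎯", "Equity across demographics and access"),
    Category("Privacy & Consent", "🔐", "Informed consent, data protection"),
    Category("Transparency & Explainability", "🔍", "Clarity on how decisions are made"),
    Category("Accountability", "👤", "Responsibility for outcomes"),
    Category("Legal & Regulatory Risk", "⚖️", "Exposure to laws and compliance gaps"),
    Category("Social & Environmental", "🌍", "Societal effects, long-term impact"),
]
```

A structure like this makes it straightforward to iterate over categories in a design-review checklist or to attach custom prompts per category.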

✅ Example Output Table

| Category | Risk Level | Notes |
| --- | --- | --- |
| 🎯 Fairness & Bias | ⚠️ Medium | No demographic testing yet |
| 🔐 Privacy & Consent | ✅ Low | Opt-in and deletion supported |
| 🔍 Transparency | ❌ High | No explanation mechanism for users |
| 👤 Accountability | ⚠️ Medium | Responsibility is unclear if system fails |
| ⚖️ Legal & Regulatory Risk | ❌ High | May trigger “high-risk” status under EU AI Act |
| 🌍 Social & Environmental | ✅ Low | Minimal societal impact and low resource usage |

Risk Level Key:
✅ Low • ⚠️ Medium • ❌ High
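A table like the one above is easy to generate programmatically from assessment results. The sketch below is a hypothetical helper, not part of the framework or its GPT tools; `RISK_ICONS` and `render_summary` are illustrative names.

```python
# Hypothetical helper that renders assessment rows as the markdown
# risk-summary table shown above. Names are illustrative only.
RISK_ICONS = {"Low": "✅", "Medium": "⚠️", "High": "❌"}

def render_summary(rows):
    """rows: list of (category, risk_level, notes) tuples."""
    lines = ["| Category | Risk Level | Notes |", "| --- | --- | --- |"]
    for category, level, notes in rows:
        lines.append(f"| {category} | {RISK_ICONS[level]} {level} | {notes} |")
    return "\n".join(lines)

summary = render_summary([
    ("🎯 Fairness & Bias", "Medium", "No demographic testing yet"),
    ("🔐 Privacy & Consent", "Low", "Opt-in and deletion supported"),
])
print(summary)
```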

🧠 How to Use It

You can:

  • Use the AI Ethics Assistant GPT to walk through a guided reflection
  • Embed this framework into internal AI checklists, design reviews, or compliance workflows
  • Customize the categories or risk thresholds to align with your domain or regulatory environment
  • Enable trauma-informed mode to surface accessibility, safety, and psychological harm risks for vulnerable users

🔖 New Tool: REFLECT Ethics Tagger GPT

Now available is the REFLECT Ethics Tagger, a companion GPT that flags ethical risks in AI-generated outputs using the REFLECT Framework.

While the REFLECT AI Ethics Checker GPT supports early-stage reflection during AI system design, the Ethics Tagger reviews already-generated content — such as chatbot responses, summaries, or AI-written outputs — and flags:

  • ⚠️ Bias and fairness issues
  • 🚫 Trauma-sensitive or emotionally risky content
  • ♿ Accessibility/ADA concerns
  • 🔐 Privacy and regulatory risks
  • ❓ Deceptiveness or lack of transparency

This GPT accepts pasted output (text or descriptions of images/videos) and generates a markdown-formatted risk summary, tailored to the user’s role if provided (e.g., PM, UX, legal).

👉 Try the REFLECT AI Ethics Tagger GPT.

Additional Materials

  • REFLECT → NIST AI RMF Crosswalk
    A mini-project connecting the REFLECT AI Framework with the NIST AI RMF functions, showing how role-based and trauma-informed prompts can turn high-level standards into practical guardrails for teams.

⚠️ Disclaimer

This reflection framework and its companion tools are educational and exploratory in nature. They do not constitute legal advice, mental health guidance, or compliance certification.

👤 Attribution & 📜 Usage Notice

The REFLECT AI Framework is an original work created by the developer of the REFLECT AI Ethics Checker GPT.
All framework materials, documentation, and associated content in this repository are © 2025 by the author.

This work is shared publicly for educational and internal research purposes only.

  • Attribution: Clear credit must be provided to the author when referencing, sharing, or adapting any portion of this framework.
  • Restrictions: No license is granted for commercial use, product implementation, or derivative commercialization without the author’s prior written permission.
  • Rights Reserved: All intellectual property rights in and to the described methods are expressly reserved.

🛡️ Personal Project Disclaimer

This project was created entirely outside of work, using personal time and resources.
It is not affiliated with or representative of any employer, past or present. Its purpose is to support thoughtful dialogue and practical approaches to Responsible AI—not to critique or target any specific system or organization.

🧾 Prior Art Declaration

This README serves as a public record of authorship and publication of the REFLECT AI Framework v1.0, developed by the creator of the REFLECT AI Ethics Checker GPT.

The REFLECT AI Framework is:

  • A structured, role-adaptive ethics reflection method
  • Grounded in global standards (NIST, EU AI Act, OECD, IEEE)
  • Designed to guide AI teams through key categories of ethical risk
  • Delivered via a conversational GPT tool with tailored outputs

To the best of the creator’s knowledge, no substantially similar publicly released or patented system existed prior to this publication.

Date of Public Release: July 2025
Platform of Release: GitHub and public link via ChatGPT
Project Type: Independent personal project


Publications

How to Cite this Repository

If you reference the REFLECT AI Framework, please cite:

APA:
Kumar, R. (2025). The REFLECT Framework: Role-Based Ethical Risk Reflection for Responsible AI. SSRN. https://dx.doi.org/10.2139/ssrn.5403038

BibTeX:
@article{kumar2025reflect,
  title   = {The REFLECT Framework: Role-Based Ethical Risk Reflection for Responsible AI},
  author  = {Kumar, R.},
  year    = {2025},
  journal = {SSRN},
  doi     = {10.2139/ssrn.5403038},
  url     = {https://ssrn.com/abstract=5403038}
}

RIS:
TY - JOUR
AU - Kumar, R.
PY - 2025
TI - The REFLECT Framework: Role-Based Ethical Risk Reflection for Responsible AI
JO - SSRN
DO - 10.2139/ssrn.5403038
UR - https://ssrn.com/abstract=5403038
ER -


Version: 1.0 – July 2025

Note: Early versions of this framework received acknowledgment within the OpenAI community. This mention is for context only and does not imply endorsement, affiliation, or sponsorship.


⚠️ Disclaimer
This repository contains original research authored by Reshma Kumar in a personal capacity.
It is intended for academic and educational use only. Redistribution or commercial use is prohibited.
