fairUX is a system that detects cognitive-inclusivity bugs in user interfaces using AI-powered reasoning grounded in Inclusive Design research.
Set up Amazon Bedrock and the AWS CLI before running the server and the client locally. To use LLMs such as Claude or LLaMA via Amazon Bedrock, you must configure the AWS CLI with your own access credentials.
Follow the official instructions: https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html
Run the following and enter your credentials:

```bash
aws configure
```
You will need:
- AWS Access Key ID
- AWS Secret Access Key
- Default Region (e.g., `us-east-1`)
- Output Format (e.g., `json`)
Ensure your IAM user has the `bedrock:InvokeModel` permission and access to Amazon Bedrock.
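For reference, a minimal IAM policy granting that permission might look like the following. This is a sketch: tighten the `Resource` field to the specific model ARNs you intend to use.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["bedrock:InvokeModel"],
      "Resource": "*"
    }
  ]
}
```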
Make sure your AWS account has access to Amazon Bedrock and has requested access to the desired models: https://console.aws.amazon.com/bedrock/home
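Once access is granted, a Bedrock invocation can be sketched with boto3's `bedrock-runtime` client. This is a minimal sketch, not fairUX's actual code: the request-body builder is a pure function, and only `invoke_claude` requires valid AWS credentials and model access.

```python
import json


def build_claude_body(prompt: str, max_tokens: int = 512) -> str:
    """Build the JSON request body for an Anthropic model on Bedrock."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })


def invoke_claude(model_id: str, prompt: str) -> str:
    """Call Bedrock. Requires AWS credentials and model access to run."""
    import boto3  # imported here so the pure helper above works without it

    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(modelId=model_id,
                                   body=build_claude_body(prompt))
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]
```

`model_id` here is whatever ID you configured for your account; no real ID is hard-coded in the sketch.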
Follow the official instructions to install git based on the operating system you have: https://git-scm.com/downloads
Clone the repository and enter its directory:

```bash
git clone https://github.com/EPICLab/fairUX.git
cd fairUX
```
- If using Bedrock: update the following in `server/BedrockClient.py`:

```python
LLM_MODELS = {
    "LLAMA-3": "us.meta.llama3-2-3b-[xyz]",
    "CLAUDE-3.5": "anthropic.claude-3-5-sonnet-[xyz]-v[x]:0",
    "CLAUDE-3.7": "anthropic.claude-3-7-sonnet-[xyz]-v[x]:0"
}
```
Here `[xyz]` and `[x]` are placeholders. To use these models, replace them with your own Bedrock model IDs, provide your own AWS credentials, and ensure your account is authorized for the corresponding Bedrock models.
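A small guard can fail fast if a placeholder was never replaced. This is a hedged sketch (the dict below repeats the placeholder IDs above, not real model IDs):

```python
# Sketch: detect unreplaced [xyz]/[x] placeholders before calling Bedrock.
LLM_MODELS = {
    "LLAMA-3": "us.meta.llama3-2-3b-[xyz]",
    "CLAUDE-3.5": "anthropic.claude-3-5-sonnet-[xyz]-v[x]:0",
}


def resolve_model_id(name: str) -> str:
    """Return the Bedrock model ID configured for `name`."""
    model_id = LLM_MODELS[name]
    if "[" in model_id:  # a placeholder like [xyz] was never replaced
        raise ValueError(f"Replace the placeholder in the model ID for {name!r}")
    return model_id
```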
- [Optional] GPT model family invocation via the OpenAI API: as an alternative to Amazon Bedrock, you can configure this system to use OpenAI's GPT models via the OpenAI API.
If you choose to use GPT models via the OpenAI API instead of Bedrock, you might need to adjust how responses are parsed in the reasoning pipeline. OpenAI's reasoning model response format may differ from Bedrock's. For all models that are natively available through Bedrock (e.g., Claude, LLaMA), no changes are needed.
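The parsing adjustment can be isolated in one normalizing helper. The two dict shapes below are simplified assumptions for illustration; check them against your actual payloads:

```python
def extract_text(response: dict) -> str:
    """Normalize a model response to plain text.

    Handles two simplified response shapes (assumptions, not exhaustive):
    - OpenAI chat completions: {"choices": [{"message": {"content": ...}}]}
    - Bedrock Anthropic:       {"content": [{"type": "text", "text": ...}]}
    """
    if "choices" in response:  # OpenAI-style
        return response["choices"][0]["message"]["content"]
    if "content" in response:  # Bedrock Anthropic-style
        return response["content"][0]["text"]
    raise ValueError("Unrecognized response shape")
```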
- Install the OpenAI Python SDK in your virtual environment:

```bash
pip install openai
```
- You need an OpenAI account and API key: https://platform.openai.com/account/api-keys
- The OpenAI API key must be set as an environment variable so the system can authenticate API requests.

On macOS/Linux:

```bash
export OPENAI_API_KEY="your-api-key-here"
```

On Windows (CMD):

```cmd
set OPENAI_API_KEY=your-api-key-here
```
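Because a missing key only surfaces at the first API call, the server can check for it at startup. A minimal sketch (the helper name is hypothetical, not part of fairUX):

```python
import os


def require_openai_key() -> str:
    """Fail fast with a clear message if OPENAI_API_KEY is not set."""
    key = os.getenv("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set; export it before starting the server."
        )
    return key
```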
- Sample GPT API invocation. Below is an example of how to invoke a GPT model from Python using the OpenAI SDK. You can adapt this logic, similar to the `call_claude` function in your `BedrockClient.py`, if switching to OpenAI:
```python
import os
from openai import OpenAI

# The client reads OPENAI_API_KEY from the environment by default;
# passing it explicitly makes the dependency visible.
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

completion = client.chat.completions.create(
    model="gpt-4.1",
    messages=[
        {
            "role": "user",
            "content": "Write a one-sentence bedtime story about a unicorn.",
        }
    ],
)
print(completion.choices[0].message.content)
```
Check out the OpenAI platform docs for more details: https://platform.openai.com/docs/guides/text?api-mode=chat
- Navigate to the server directory:

```bash
cd server
```
- Create and activate a virtual environment:

```bash
python3.11 -m venv venv_py311
source venv_py311/bin/activate  # On Windows: venv_py311\Scripts\activate
```
- Install Python dependencies:

```bash
pip3 install -r requirements.txt
python3 -m playwright install
```
- Start the server:

```bash
python3 app.py
```
- Navigate to the client directory:

```bash
cd client
```
- Ensure Node.js and npm are installed. If not, install from: https://nodejs.org/
- Install required packages:

```bash
npm install
```
- Add UUID types for development:

```bash
npm i --save-dev @types/uuid
```
- Start the frontend development server:

```bash
npm start
```
- You are all set. You can access the tool in your browser at http://localhost:3000/