Survey of Small Language Models from Penn State, ...
Updated Aug 24, 2025
Open-source test management and generation for conversational Gen AI applications. Build and run context-specific test sets. Collaborate with subject matter experts to ensure relevance and quality.
Deep Fact Validation
Provides web credibility models (Likert scale) to assign a trustworthiness score to a given website.
In this paper, we introduce SAShA, a new attack strategy that leverages semantic features extracted from a knowledge graph to strengthen the efficacy of attacks against standard CF models. We performed an extensive experimental evaluation to investigate whether SAShA is more effective than baseline attacks against CF models by ta…
In the dynamic landscape of medical artificial intelligence, this study explores the vulnerabilities of the Pathology Language-Image Pretraining (PLIP) model, a Vision Language Foundation model, under targeted attacks such as the PGD adversarial attack.
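The PGD attack named above is a standard iterative method: repeatedly step along the sign of the loss gradient, then project the perturbed input back into an L-infinity ball around the original. A minimal NumPy sketch follows; the linear toy "model" and all parameter values are illustrative assumptions, not taken from the PLIP study:

```python
import numpy as np

def pgd_attack(x, grad_fn, eps=0.1, alpha=0.02, steps=10):
    """Projected Gradient Descent (L-infinity variant).

    x       : clean input (np.ndarray)
    grad_fn : returns the gradient of the loss w.r.t. the input
    eps     : radius of the allowed perturbation ball
    alpha   : step size per iteration
    """
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv)
        x_adv = x_adv + alpha * np.sign(g)        # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the eps-ball
    return x_adv

# Toy example (hypothetical): a linear loss L(x) = w . x, so grad = w.
w = np.array([1.0, -2.0, 0.5])
x = np.zeros(3)
x_adv = pgd_attack(x, lambda z: w)
```

Each iteration moves `alpha` along the gradient sign, so after enough steps the perturbation saturates at the `eps` boundary in every coordinate.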
A matrix clarifying the definitions of trustworthiness characteristics and the relationships between them across AI/ML standards.
Trustworthiness Monitoring & Assessment Framework
A list of tools and methods for building trustworthy software following TrustOps principles.
Codes and Datasets for our WSDM 2022 Paper: "MTLTS: A Multi-Task Framework To Obtain Trustworthy Summaries From Crisis-Related Microblogs"
Visualization and embedding of large datasets using various Dimensionality Reduction (DR) techniques such as t-SNE, UMAP, PaCMAP & IVHD. Implementation of custom metrics to assess DR quality, with a complete explanation and workflow.
Independent continuation of a project from AstonHack 2017
Website for health data science at KDD 2021
CodeGenLink is a Visual Studio Code extension that interacts with GitHub Copilot Chat to generate code, analyze its origin, and identify the associated license.
[USENIX Security 2025] Topic-FlipRAG: Topic-Orientated Adversarial Opinion Manipulation Attacks to Retrieval-Augmented Generation Models
Secure and trustworthy mobile AI.
Proposal of a novel adversarial attack approach, called Target Adversarial Attack against Multimedia Recommender Systems (TAaMR), to investigate how MR behavior changes when the images of a category of low-recommended products (e.g., socks) are perturbed so that the deep neural classifier misclassifies them toward the class of more recommended prod…
This repository is an implementation of the paper "Trustworthy Medical Image Segmentation with improved performance for in-distribution samples" published in Neural Networks.
In this work, we provide 24 combinations of attack/defense strategies and visual-based recommenders to 1) assess performance alteration on recommendation and 2) empirically verify the effect on final users through offline visual metrics.
Component M - Trustworthiness Monitoring & Assessment Framework