Welcome to the results repository for our paper "Assertiveness-based Agent Communication for a Personalized Medicine on Medical Imaging Diagnosis" (10.1145/3544548.3580682), published in the proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI '23) and presented during the "AI in Health" track. In this paper, we explore how human-AI interactions are affected by the ability of an AI agent not only to incorporate granular patient information into its outputs (e.g., the dataset-uta7-annotations, dataset-uta11-rates, or dataset-uta11-findings repositories) but also to adapt its communication tone (i.e., more assertive or suggestive) to the medical experience (i.e., novice or expert) of the clinician. Specifically, we compare AI outputs that present clinicians with clinical arguments (e.g., the dataset-uta7-co-variables or dataset-uta11-findings repositories) and more granular information about the patient's lesion details, against a conventional agent (i.e., the prototype-breast-screening repository) that only provides numeric estimates (e.g., BIRADS and accuracy) of the classification. The study was conducted using a dataset of medical images (e.g., the dataset-uta7-dicom or dataset-uta11-dicom repositories) and patient information, where the AI models (e.g., the densenet-breast-classifier, ai-classifier-densenet161, ai-segmentation-densenet, or ai-nns-mri repositories) were trained to classify and segment the images based on various features. The data and source code used in this study are available in this repository, along with a detailed explanation of the methods and results. We hope that this work will contribute to the growing field of human-AI interaction in the medical domain and help improve communication between AI systems and clinicians.
In this repository, we present our results from applying the BreastScreening-AI framework in two conditions, where clinicians interact with conventional (e.g., the prototype-breast-screening repository) and assertiveness-based (e.g., the prototype-assertive-reactive and prototype-non-assertive-reactive repositories) intelligent agents. The assistant acts as a second reader: we compared both conventional and assertiveness-based agents in the context of assisting trained medical personnel in the task of breast cancer diagnosis. To organize our user evaluations, we divide each study into a group of User Tests and Analysis (UTA) guides. For this repository, we used data from the 7th (UTA7) and the 11th (UTA11) guides. Below, we provide more details about these guides.
In the following animation, we present a demo of the prototypes:
Our results are the joint data of the UTA7 (10.13140/RG.2.2.16566.14403/1) and UTA11 (10.13140/RG.2.2.22989.92645/1) guides. During the UTA11 study, we used several repositories to store the source code of our prototypes, later tested as proofs-of-concept in this research work. The prototypes are available in the prototype-assertive-proactive, prototype-assertive-reactive, prototype-non-assertive-proactive, and prototype-non-assertive-reactive repositories. Each prototype was deployed on a remote server for testing purposes.
These results represent pieces of information from both the BreastScreening and MIDA projects. These research projects deal with the use of a recently proposed technique in the literature: Deep Convolutional Neural Networks (CNNs). Through a developed User Interface (UI) and framework, these deep networks incorporate several datasets in different modes. You can find the deployed prototypes in the Technical.md file of the meta-private repository. More information about the study is also available in the User-Research.md file of the same meta-private repository. Unfortunately, you need to be a member of our team to access this restricted information. We also have several channels and demos to watch on our YouTube Channel, so please follow us!
We kindly ask scientific works and studies that make use of this repository to cite it in their associated publications. Similarly, we ask open-source and closed-source works that make use of this repository to let us know about this use.
You can cite our work using the following BibTeX entry:
@inproceedings{10.1145/3544548.3580682,
author = {Calisto, Francisco Maria and Fernandes, Jo\~{a}o and Morais, Margarida and Santiago, Carlos and Abrantes, Jo\~{a}o Maria and Nunes, Nuno and Nascimento, Jacinto C.},
title = {Assertiveness-Based Agent Communication for a Personalized Medicine on Medical Imaging Diagnosis},
year = {2023},
isbn = {9781450394215},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3544548.3580682},
doi = {10.1145/3544548.3580682},
abstract = {Intelligent agents are showing increasing promise for clinical decision-making in a variety of healthcare settings. While a substantial body of work has contributed to the best strategies to convey these agents' decisions to clinicians, few have considered the impact of personalizing and customizing these communications on the clinicians' performance and receptiveness. This raises the question of how intelligent agents should adapt their tone in accordance with their target audience. We designed two approaches to communicate the decisions of an intelligent agent for breast cancer diagnosis with different tones: a suggestive (non-assertive) tone and an imposing (assertive) one. We used an intelligent agent to inform about: (1) number of detected findings; (2) cancer severity on each breast and per medical imaging modality; (3) visual scale representing severity estimates; (4) the sensitivity and specificity of the agent; and (5) clinical arguments of the patient, such as pathological co-variables. Our results demonstrate that assertiveness plays an important role in how this communication is perceived and its benefits. We show that personalizing assertiveness according to the professional experience of each clinician can reduce medical errors and increase satisfaction, bringing a novel perspective to the design of adaptive communication between intelligent agents and clinicians.},
booktitle = {Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems},
articleno = {13},
numpages = {20},
keywords = {Clinical Decision Support System, Healthcare, Breast Cancer},
location = {Hamburg, Germany},
series = {CHI '23}
}
The following list shows the required dependencies to run this project locally:
- Git or any other Git/GitHub version control tool
- Python (v3.5 or newer)
- NodeJS (v16.14.1 or newer)
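Before cloning, you can quickly confirm the dependencies above are in place. A minimal check from the console might look like this (the exact minimum versions are the ones listed above):

```shell
# Verify that the required tools are available and report their versions.
git --version
python3 --version   # should report 3.5 or newer
node --version || echo "NodeJS >= 16.14.1 is required but was not found"
```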
Here are some tutorials and documentation, if needed, to feel more comfortable about using and playing around with this repository:
Usage: follow the instructions here to set up the current repository and extract the present data. To understand what this repository is used for, read the following steps.
At this point, the only way to install this repository is manually. Eventually, it will be accessible through pip, npm, or other package managers, as mentioned in the roadmap.
Nonetheless, this kind of installation is as simple as cloning the repository. Virtually all Git and GitHub version control tools can do that. Through the console, we can use the command below, but other approaches also work.
git clone https://github.com/MIMBCD-UI/sa-uta11-results.git
Please feel free to run one of our statistical methods: a script called basic_statistics.py in the src/methods/ directory. It can be used as follows:
python src/methods/basic_statistics.py
Just keep in mind these are only basic statistics: the script does nothing more than compute measures such as the mean and standard deviation for various subsets of the data. Also, we did our best to make the basic statistics as user-friendly as possible, so, above everything else, have fun! 🙂
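Conceptually, what the script does boils down to something like the following sketch. Note that the group names and values here are illustrative placeholders, not the actual data or column names used in src/methods/basic_statistics.py:

```python
import statistics

# Illustrative subsets of rating data, grouped by clinician experience level.
# The real script reads the study's data files; these values are made up.
ratings = {
    "novice": [3.0, 4.0, 4.5, 2.5, 3.5],
    "expert": [4.0, 4.5, 5.0, 4.0, 4.5],
}

for group, values in ratings.items():
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)  # sample standard deviation
    print(f"{group}: mean={mean:.2f}, stdev={stdev:.2f}")
```

Running the real script over the repository data produces the same kind of per-subset summary, one line per group.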
We aim to keep this repository aligned with its stated goal and to scale the solution it supports. The repository follows best practices, achieving the Core Infrastructure Initiative (CII) specifications.
Besides that, one of our goals is to create a configuration file to automatically test and publish our code to pip or any other package manager, most likely via GitHub Actions. Other goals may be added here in the future.
This project exists thanks to all the people who contribute. We welcome everyone who wants to help us improve this repository. For any question, comment, or feedback, you can always open a new Discussion on that topic, or follow the ones we already have. For instance, we have a CHI '23 Q&A Discussion for the #CHI2023 conference. Below, we present some suggestions.
If something seems missing or you need support, just open a new issue. Whether it is a simple request or a fully structured feature, we will do our best to understand it and, eventually, address it.
We like to develop, but we also like collaboration. You could ask us to add some features... or you could do it yourself by forking this repository, or even start a side project of your own. In the latter cases, please let us share some insights about what we currently have.
This section summarizes the fundamental items of this repository and the respective links.
To publish our datasets we used the well-known Kaggle platform. To access these datasets, start from the uta4-sm-vs-mm-sheets dataset as an example. There, you will find all of our published datasets and any associated information, such as descriptions and download links.
Copyright © 2023 Instituto Superior Técnico
The sa-uta11-results repository is distributed under the terms of both an Academic License for academic purposes and a Commercial License for commercial purposes, as well as under the CC-BY-SA-4.0 copyright. The content of this repository is covered by a World Intellectual Property Organization (WIPO) invention patent. Specifically, the invention underlying this repository is protected by patent number WO2022071818A1, with application number PCT/PT2021/050029. The title of the invention is "Computational Method and System for Improved Identification of Breast Lesions", registered at the WO patent office.
See ACADEMIC and COMMERCIAL for details. For more information about the MIMBCD-UI Project just follow the link.
Our team brings everything together, sharing ideas and a common purpose to develop even better work. In this section, we list the people who were important to this repository, along with their respective links.
- Francisco Maria Calisto [ Academic Website | ResearchGate | GitHub | Twitter | LinkedIn ]
- João Fernandes [ ResearchGate ]
- Margarida Morais [ ResearchGate ]
- Carlos Santiago [ ResearchGate ]
- João Maria Abrantes [ ResearchGate ]
- Nuno Nunes [ ResearchGate ]
- Jacinto C. Nascimento [ ResearchGate ]
- Hugo Lencastre
- Nádia Mourão
- Miguel Bastos
- Pedro Diogo
- Joรฃo Bernardo
- Madalena Pedreira
- Mauro Machado
- Bruno Dias
- Bruno Oliveira
- Luís Ribeiro Gomes
- Pedro Miraldo
This work was partially supported by national funds through FCT, via both the UID/EEA/50009/2013 and LARSyS - FCT Project 2022.04485.PTDC (MIA-BREAST) projects hosted by IST, as well as the BL89/2017-IST-ID and PD/BD/150629/2020 grants. We thank Dr. Clara Aleluia and her radiology team at HFF for their valuable insights and for helping to use the assistants on a daily basis. Further acknowledgments are provided in the ACKNOWLEDGMENTS.md file of the sa-uta11-results repository. Additionally, we are grateful for the invaluable assistance provided by our colleagues at the HCII @ CMU. We are indebted to those who gave their time and expertise to evaluate our work and who, among others, are providing crucial information for the BreastScreening project.
Our organization is a non-profit organization. However, we have many needs across our activities. From infrastructure to services, we need time, contributions, and help to support our team and projects.
This project exists thanks to all the people who contribute. [Contribute].
Thank you to all our backers! 🙏 [Become a backer]
Support this project by becoming a sponsor. Your logo will show up here with a link to your website. [Become a sponsor]