Merge branch 'dev' into marcus_brach
ChiaXinLiang authored Sep 28, 2024
2 parents fc5db56 + bdd2443 commit 27aafec
Showing 10 changed files with 1,327 additions and 334 deletions.
Binary file added MLLM_latex/.DS_Store
12 changes: 5 additions & 7 deletions MLLM_latex/chapter10/chapter10.tex
@@ -1,12 +1,9 @@



\chapter{Ethical Considerations and Responsible AI}

As Multimodal Large Language Models (MLLMs) continue to advance and shape the AI landscape, processing and generating content across modalities such as text, images, and audio, it is crucial to address the ethical implications and challenges of their development and deployment to ensure responsible AI practices\cite{konidena2024ethical}.


One of the primary concerns in MLLM development is bias mitigation. These models, trained on vast amounts of data from diverse sources, can inadvertently perpetuate or amplify existing societal biases\cite{peng2024securing}. To combat this, researchers and developers must implement comprehensive bias mitigation strategies\cite{zhang2023mitigating}. These include ensuring diverse and representative training datasets, conducting regular bias\cite{boix2022machine} audits across different modalities\cite{pymetrics2022audit}, and developing bias-aware fine-tuning techniques\cite{kim2024domain}. Additionally, interdisciplinary collaboration with experts from fields such as ethics, sociology, and psychology can provide valuable insights into identifying and addressing potential biases\cite{aquino2023practical}.
One of the primary concerns in MLLM development is bias. Bias refers to systematic errors or unfair preferences in a model's outputs that can reinforce or amplify societal prejudices and stereotypes. These biases can manifest in various forms, including gender, racial, or cultural bias, and they pose ethical challenges in the deployment and use of MLLMs across different applications\cite{peng2024securing}. To combat this, researchers and developers must implement comprehensive bias mitigation strategies\cite{zhang2023mitigating}. These include ensuring diverse and representative training datasets, conducting regular bias audits\cite{boix2022machine} across different modalities\cite{pymetrics2022audit}, and developing bias-aware fine-tuning techniques\cite{kim2024domain}. Additionally, interdisciplinary collaboration with experts from fields such as ethics, sociology, and psychology can provide valuable insights into identifying and addressing potential biases\cite{aquino2023practical}.
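
To make the idea of a bias audit concrete, the following sketch compares a model's rate of favorable outputs across demographic groups (the demographic parity gap), one of the simplest audit metrics. This is a minimal illustration under assumed data, not a production audit tool; the \texttt{outputs} and \texttt{groups} values below are hypothetical placeholders.

\begin{verbatim}
from collections import defaultdict

def demographic_parity_gap(outputs, groups):
    """Return the largest gap in favorable-output rate between groups.

    outputs: list of 0/1 flags (1 = model produced a favorable result)
    groups:  list of group labels, aligned with outputs
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for y, g in zip(outputs, groups):
        totals[g] += 1
        positives[g] += y
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit data for two groups
outputs = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "B", "B", "B", "B", "A"]
gap, rates = demographic_parity_gap(outputs, groups)
print(rates)  # favorable-output rate per group
print(gap)    # 0.0 would indicate parity on this metric
\end{verbatim}

A real audit would repeat such measurements across modalities and many prompts, and would complement statistical checks with qualitative review.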

Privacy and data protection present another significant challenge in the realm of MLLMs. As these models process and generate increasingly complex and potentially sensitive information, robust measures must be put in place to protect individual privacy\cite{he2024emerged, friha2024llm}. This includes implementing advanced data anonymization techniques, exploring decentralized training methods like federated learning, and applying differential privacy approaches. Furthermore, clear protocols for obtaining consent and managing data rights must be established to ensure ethical handling of personal information used in training these models\cite{mccoy2023ethical}.
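
As a concrete illustration of one such approach, the sketch below applies the classic Laplace mechanism for differential privacy to a simple counting query: noise calibrated to the query's sensitivity and a privacy budget \(\epsilon\) is added before the result is released. This is a minimal sketch under assumed data; the records and the choice of \(\epsilon\) are hypothetical, and real deployments require careful privacy-budget accounting across queries.

\begin{verbatim}
import numpy as np

def laplace_count(records, predicate, epsilon):
    """Release an epsilon-differentially-private count.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon suffices for a single query.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical records and privacy budget
records = [{"age": 34}, {"age": 51}, {"age": 29}, {"age": 62}]
noisy = laplace_count(records, lambda r: r["age"] > 40, epsilon=0.5)
print(noisy)  # noisy count; smaller epsilon means more noise
\end{verbatim}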

@@ -22,7 +19,7 @@ \chapter{Ethical Considerations and Responsible AI}

\section{Bias Mitigation Strategies}

One of the most pressing ethical concerns surrounding MLLMs is indeed the presence of biases in both the training data and the resulting model outputs. This issue is complex and multifaceted, requiring a comprehensive approach to address effectively. Let's explore this topic in more depth, examining the nature of these biases, their potential impacts, and strategies for mitigation.
One of the most pressing ethical concerns surrounding MLLMs is the presence of biases in both the training data and the resulting model outputs. This issue is complex and multifaceted, requiring a comprehensive approach to address effectively. Let's explore this topic in more depth, examining the nature of these biases, their potential impacts, and strategies for mitigation.

Biases in MLLMs can manifest in various ways, often reflecting and amplifying existing societal prejudices. These biases may be related to race, gender, age, socioeconomic status, cultural background, or other demographic factors. For instance, an MLLM might generate images that reinforce gender stereotypes or produce text that uses racially insensitive language. In multimodal systems, these biases can be particularly insidious as they may appear across different modalities, creating a compounded effect.

@@ -165,5 +162,6 @@ \section{Conclusion}

As MLLMs continue to evolve, ongoing collaboration between researchers, developers, policymakers, and the public will be essential to ensure these powerful tools are used for the betterment of society. By proactively addressing ethical concerns, fostering transparency, and upholding principles of fairness and accountability, we can harness the potential of MLLMs to create a future where AI serves as a force for good, empowering individuals, communities, and societies across the globe.

\printbibliography
\bibliographystyle{plain}
\bibliography{chapter10/reference}

103 changes: 103 additions & 0 deletions MLLM_latex/chapter2/chap2_ref.bib
@@ -0,0 +1,103 @@
@article{vaswani2017attention,
title={Attention is all you need},
author={Vaswani, Ashish and Shazeer, Noam and Parmar, Niki and Uszkoreit, Jakob and Jones, Llion and Gomez, Aidan N and Kaiser, {\L}ukasz and Polosukhin, Illia},
journal={Advances in Neural Information Processing Systems},
volume={30},
year={2017}
}

@inproceedings{polanyi2004rule,
title={A rule based approach to discourse parsing},
author={Polanyi, Livia and Culy, Chris and Van Den Berg, Martin and Thione, Gian Lorenzo and Ahn, David},
booktitle={Proceedings of the 5th SIGdial Workshop on Discourse and Dialogue at HLT-NAACL 2004},
pages={108--117},
year={2004}
}

@article{abraham2005rule,
title={Rule-Based expert systems},
author={Abraham, Ajith},
journal={Handbook of measuring system design},
year={2005},
publisher={John Wiley \& Sons, Ltd Chichester, UK}
}

@book{koehn2009statistical,
title={Statistical machine translation},
author={Koehn, Philipp},
publisher={Cambridge University Press},
year={2009}
}

@article{charniak1997statistical,
title={Statistical techniques for natural language parsing},
author={Charniak, Eugene},
journal={AI magazine},
volume={18},
number={4},
pages={33--33},
year={1997}
}

@article{mi2016supervised,
title={Supervised attentions for neural machine translation},
author={Mi, Haitao and Wang, Zhiguo and Ittycheriah, Abe},
journal={arXiv preprint arXiv:1608.00112},
year={2016}
}

@article{conneau2017supervised,
title={Supervised learning of universal sentence representations from natural language inference data},
author={Conneau, Alexis and Kiela, Douwe and Schwenk, Holger and Barrault, Lo{\"\i}c and Bordes, Antoine},
journal={arXiv preprint arXiv:1705.02364},
year={2017}
}

@article{lauriola2022introduction,
title={An introduction to deep learning in natural language processing: Models, techniques, and tools},
author={Lauriola, Ivano and Lavelli, Alberto and Aiolli, Fabio},
journal={Neurocomputing},
volume={470},
pages={443--456},
year={2022},
publisher={Elsevier}
}

@article{peng2022survey,
title={A survey on deep learning for textual emotion analysis in social networks},
author={Peng, Sancheng and Cao, Lihong and Zhou, Yongmei and Ouyang, Zhouhao and Yang, Aimin and Li, Xinguang and Jia, Weijia and Yu, Shui},
journal={Digital Communications and Networks},
volume={8},
number={5},
pages={745--762},
year={2022},
publisher={Elsevier}
}

@article{henderson2020unstoppable,
title={The unstoppable rise of computational linguistics in deep learning},
author={Henderson, James},
journal={arXiv preprint arXiv:2005.06420},
year={2020}
}

@article{mikolov2013efficient,
title={Efficient estimation of word representations in vector space},
author={Mikolov, Tomas and Chen, Kai and Corrado, Greg and Dean, Jeffrey},
journal={arXiv preprint arXiv:1301.3781},
year={2013}
}

@inproceedings{pennington2014glove,
title={{GloVe}: Global vectors for word representation},
author={Pennington, Jeffrey and Socher, Richard and Manning, Christopher D},
booktitle={Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)},
pages={1532--1543},
year={2014}
}

@article{hochreiter1997long,
title={Long short-term memory},
author={Hochreiter, Sepp and Schmidhuber, J{\"u}rgen},
journal={Neural Computation},
volume={9},
number={8},
pages={1735--1780},
year={1997}
}