In this work, we investigate improving Medical Visual Question Answering (VQA) with a LoRA-optimized BLIP model. Our work optimizes the BLIP architecture through Low-Rank Adaptation (LoRA) to develop a more effective and resource-efficient method for medical image analysis. We rigorously test this technique on a specialized combination of medical VQA datasets and show its efficacy. The outcomes demonstrate notable gains in accuracy, especially for closed-type questions, highlighting the potential of LoRA-enhanced BLIP models to advance AI-driven healthcare solutions and medical diagnostics. This paper lays the groundwork for future research and development in the field by presenting a novel approach that connects cutting-edge AI techniques with essential medical applications.
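To illustrate the core idea behind LoRA, the sketch below shows the low-rank update applied to a single frozen weight matrix. This is not the project's actual code (which fine-tunes BLIP); it is a minimal NumPy illustration, and all dimensions, names, and initializations here are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only -- real BLIP layers are far larger.
d_in, d_out, r, alpha = 8, 4, 2, 16

W = rng.normal(size=(d_out, d_in))       # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01    # trainable low-rank factor A
B = np.zeros((d_out, r))                 # trainable factor B, zero-initialized
                                         # so training starts from the base model

def lora_forward(x):
    # y = W x + (alpha / r) * B A x ; only A and B receive gradients,
    # so the number of trainable parameters is r*(d_in + d_out) instead of d_in*d_out.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
y = lora_forward(x)
```

Because B starts at zero, the adapted layer initially matches the frozen base layer exactly, which is what makes LoRA fine-tuning stable and cheap: only the small A and B matrices are updated.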
dinesh-kumar-mr/MediVQA
About
Part of our final-year project work involving complex NLP tasks, along with experimentation on various datasets and different LLMs.