Original unfinished project: Brevis-News
- BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension: https://arxiv.org/abs/1910.13461
- Created a custom serving model based on BART, starting from the pre-trained model on Hugging Face and fine-tuning it on Inshorts data (a minimal usage sketch follows this list)
- Created a Django backend for:
  - Article scraping
  - Trending news feed via the Google News API
- Single-line docker-compose deployment
- Designing the front-end
- Overhauling the front-end
- Fine-tuning the model on more data
- Heroku deployment
- Trending news auto-summarization
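
The summarizer itself is a standard Hugging Face BART sequence-to-sequence model. Below is a minimal sketch of that pipeline; the exact fine-tuned checkpoint is not named in this README, so the public facebook/bart-large-cnn weights are used as a stand-in assumption.

```python
# Minimal sketch of a BART summarization pipeline (not the project's exact code).
# Assumption: "facebook/bart-large-cnn" stands in for the Inshorts-fine-tuned checkpoint.
from transformers import BartForConditionalGeneration, BartTokenizer

MODEL_NAME = "facebook/bart-large-cnn"  # stand-in checkpoint (assumption)

tokenizer = BartTokenizer.from_pretrained(MODEL_NAME)
model = BartForConditionalGeneration.from_pretrained(MODEL_NAME)

def summarize(article: str, max_length: int = 60) -> str:
    """Generate a short, Inshorts-style summary of a news article."""
    inputs = tokenizer(article, truncation=True, max_length=1024, return_tensors="pt")
    summary_ids = model.generate(
        inputs["input_ids"],
        num_beams=4,
        max_length=max_length,
        early_stopping=True,
    )
    return tokenizer.decode(summary_ids[0], skip_special_tokens=True)

print(summarize("Long news article text goes here ..."))
```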
Just run docker-compose up and everything should just work.
- Does not work on Windows, because WSL does not support host network mode on the port
docker-compose up
The first run will take a long time.
Clone the repository:

git clone https://github.com/vinayak19th/MDD_Project

For information on installing Docker, refer to the official Docker documentation.
To run only the model server, pull the serving image:

docker pull vinayak1998th/bart_serve:1.0

Then start it for the CPU runtime:

docker run -d -p 8501:8501 -p 8500:8500 --name bart vinayak1998th/bart_serve:cpu
If you have an NVIDIA CUDA-supported GPU, you can instead run the server with the GPU runtime:

docker run --runtime=nvidia -d -p 8501:8501 -p 8500:8500 --name bart vinayak1998th/bart_serve:gpu
Once the server is running, the code for testing it is in the Serving_Test notebook; a rough sketch of such a request is shown below.
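
The exposed ports (8501 for REST, 8500 for gRPC) suggest the image runs TensorFlow Serving. The sketch below assumes that, and also assumes a hypothetical model name of bart and a pre-tokenized input_ids signature; the actual model name and request format are defined in the Serving_Test notebook.

```python
# Hypothetical sketch of querying the serving container over REST.
# Assumptions: the image runs TensorFlow Serving on port 8501, the model is
# registered as "bart", and it accepts pre-tokenized input IDs. The real model
# name and input signature are shown in the Serving_Test notebook.
import requests
from transformers import BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")  # stand-in tokenizer
article = "Long news article text goes here ..."
input_ids = tokenizer(article, truncation=True, max_length=1024)["input_ids"]

resp = requests.post(
    "http://localhost:8501/v1/models/bart:predict",  # hypothetical model name
    json={"instances": [{"input_ids": input_ids}]},
)
resp.raise_for_status()
print(resp.json())
```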
Read the Contributing.md file in the repository to learn more about making pull requests and contributing in general.
The project is currently maintained by: