This is an example of using Akka Streams to:
- Back up a MongoDB collection to AWS S3.
- Restore it from AWS S3 back to MongoDB.
This example contains the full runnable code presented in a two-part article:
- Crafting production-ready Backup as a Service solution using Akka Streams
- Crafting production-ready Backup as a Service solution using Akka Streams: part 2

It is built on top of:
- Akka Streams
- Alpakka S3 connector
- MongoDB Reactive Streams driver
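As an illustration of how these pieces fit together, the backup direction can be sketched roughly as below. This is a sketch, not the project's actual code: it assumes Akka 2.6+ with Alpakka S3 2.x, the database/collection/bucket/key names are taken from the steps further down, and the Minio endpoint and credentials are expected to come from the connector's configuration.

```scala
import akka.actor.ActorSystem
import akka.stream.alpakka.s3.scaladsl.S3
import akka.stream.scaladsl.Source
import akka.util.ByteString
import com.mongodb.reactivestreams.client.{MongoClients, MongoCollection}
import org.bson.Document

object BackupSketch extends App {
  implicit val system: ActorSystem = ActorSystem("backup")

  // CookieDB/cookies match the sample data below; mybucket/backup.json match the Minio step.
  val collection: MongoCollection[Document] =
    MongoClients.create("mongodb://localhost:27017")
      .getDatabase("CookieDB")
      .getCollection("cookies")

  // Stream every document out of MongoDB as one JSON line and upload it as a single S3 object.
  Source.fromPublisher(collection.find())        // Reactive Streams Publisher -> Akka Streams Source
    .map(doc => ByteString(doc.toJson + "\n"))   // one JSON document per line
    .runWith(S3.multipartUpload("mybucket", "backup.json"))
}
```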
The scenario, located in Main, is the following:
- Perform a backup to S3.
- Drop the collection.
- Perform a restore.
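A minimal sketch of how these three steps could be sequenced is shown below; backup, dropCollection and restore are hypothetical stand-ins for the actual streams in this project, each assumed to return a Future that completes when its stream finishes.

```scala
import scala.concurrent.{ExecutionContext, Future}

object ScenarioSketch {
  // Compose the three phases strictly one after another.
  def run(backup: () => Future[Unit],
          dropCollection: () => Future[Unit],
          restore: () => Future[Unit])
         (implicit ec: ExecutionContext): Future[Unit] =
    for {
      _ <- backup()          // 1. back the collection up to S3 (Minio)
      _ <- dropCollection()  // 2. drop the collection in MongoDB
      _ <- restore()         // 3. restore it from S3 back into MongoDB
    } yield ()
}
```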
This project uses Minio, a fully S3-compatible object storage, in place of Amazon S3. It runs locally via docker-compose together with MongoDB. That's why you'll be able to run this example in less than a minute!
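The project wires the connector to Minio through its configuration; purely as an illustration (and assuming Alpakka S3 2.x on AWS SDK v2, where setting names differ between versions), pointing the connector at the local Minio endpoint programmatically might look like the following. Depending on the Alpakka version, path-style access may also need to be enabled.

```scala
import akka.actor.ActorSystem
import akka.stream.alpakka.s3.{S3Attributes, S3Ext}
import software.amazon.awssdk.auth.credentials.{AwsBasicCredentials, StaticCredentialsProvider}

object MinioSettingsSketch {
  implicit val system: ActorSystem = ActorSystem("minio-settings")

  // Start from the configured defaults and override the endpoint and credentials
  // with the values used by the local docker-compose setup.
  val minioSettings =
    S3Ext(system).settings
      .withEndpointUrl("http://127.0.0.1:9000")
      .withCredentialsProvider(
        StaticCredentialsProvider.create(
          AwsBasicCredentials.create("minio_access_key", "minio_secret_key")))

  // Attach the settings to any S3 stage, e.g.:
  // S3.multipartUpload(bucket, key).withAttributes(S3Attributes.settings(minioSettings))
}
```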
Steps to run the example:

- Create dirs

      mkdir -p /tmp/data/mybucket /tmp/config

- Run Minio and MongoDB

      docker-compose up -d

- Run Mongo shell

      docker run -it --net host --rm mongo sh -c 'exec mongo "localhost:27017"'

- Insert a document

      > use CookieDB
      switched to db CookieDB
      > db.cookies.insert({"name" : "cookie1", "delicious" : true})
      WriteResult({ "nInserted" : 1 })

- sbt run

  In the default scenario, the collection is backed up to Minio, removed from Mongo and restored (a sketch of the restore stream follows these steps). You can go to http://127.0.0.1:9000/minio/mybucket/ (login: minio_access_key, password: minio_secret_key) and see the backup file (backup.json).

- Clean up afterwards

      docker-compose down
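For reference, the restore direction from the default scenario can be sketched under the same assumptions as the backup sketch above: download the object from Minio, split it back into JSON lines, and insert each parsed document into the collection. Again, this is an illustrative sketch rather than the project's actual code.

```scala
import akka.actor.ActorSystem
import akka.stream.alpakka.s3.scaladsl.S3
import akka.stream.scaladsl.{Framing, Sink, Source}
import akka.util.ByteString
import com.mongodb.reactivestreams.client.MongoClients
import org.bson.Document

object RestoreSketch extends App {
  implicit val system: ActorSystem = ActorSystem("restore")

  val collection = MongoClients.create("mongodb://localhost:27017")
    .getDatabase("CookieDB")
    .getCollection("cookies")

  S3.download("mybucket", "backup.json")
    .flatMapConcat {
      case Some((data, _)) => data                      // the object's bytes
      case None            => Source.empty[ByteString]  // no such object in the bucket
    }
    .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 1024 * 1024, allowTruncation = true))
    .map(line => Document.parse(line.utf8String))       // one JSON line -> one BSON document
    .flatMapConcat(doc => Source.fromPublisher(collection.insertOne(doc)))
    .runWith(Sink.ignore)
}
```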
The following snippet presents a basic set of MongoDB commands useful when playing with backup/restore streams.
> show dbs
admin 0.000GB
local 0.000GB
> use CookieDB
switched to db CookieDB
> db.cookies.insert({"name" : "cookie1", "delicious" : true})
WriteResult({ "nInserted" : 1 })
> show collections
cookies
> db.cookies.find()
{ "_id" : ObjectId("599b0d9a266a67c9516e0245"), "name" : "cookie1", "delicious" : true }
> db.dropDatabase()
{ "dropped" : "CookieDB", "ok" : 1 }