A bunch of shell scripts to ease working with ARCHE locally
By default you can execute Docker-related commands only with `sudo`; to change this, have a look at https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user (TL;DR: create a `docker` group, add your user to it, and restart the system)
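As a quick self-check (a sketch; the group setup itself requires `sudo` as described in the linked docs), the following snippet tells you whether your user is already in the `docker` group:

```shell
# Check whether the current user is already in the "docker" group;
# if not, the linked Docker post-install docs describe how to add it.
if id -nG | grep -qw docker; then
  echo "docker group: ok"
else
  echo "docker group: missing (see the post-install docs above)"
fi
```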
- Clone the repo
- One time: run `./init-repo.sh` to create the needed directories; answer the prompt `VOLUMES_DIR env var is not set, please provide the installation directory:` with the desired installation directory
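Since the prompt only appears when `VOLUMES_DIR` is unset, it should be possible to run the script non-interactively by exporting the variable first (a sketch; the path is just an example and the exact behaviour of `init-repo.sh` is an assumption):

```shell
# Hypothetical non-interactive setup: export VOLUMES_DIR so that
# init-repo.sh (presumably) skips the prompt. The path is an example.
export VOLUMES_DIR=/tmp/arche-volumes
mkdir -p "${VOLUMES_DIR}"
echo "VOLUMES_DIR set to ${VOLUMES_DIR}"
# ./init-repo.sh   # would now run without prompting (assumed)
```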
- To (re)start the container run `./docker-start.sh`. Be aware that the full repository initialization at the first run (particularly the external vocabularies import) may take a lot of time. Anyway, once the `# Running 40-updateVocabularies.php` initialization stage is reached you may safely start using the repository without further waiting.
- This repo ships with a database dump to avoid the initial download of external vocabularies
- To stop the container run `docker container stop acdh-repo`
- To remove all data run `./reset-repo.sh`
- To remove everything, just delete the current folder
- To enter the container run `docker exec -it -u www-data acdh-repo bash` (or run `./enter-repo.sh`)
- inside the container (see above), look around: `ls`
- to inspect the init script logs run `tail -f log/initScripts.log`
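Instead of watching the log, you can also grep for the stage mentioned above; a minimal sketch, assuming the log path and message shown in this README:

```shell
# Check whether initialization already reached the vocabularies stage;
# run from the directory containing log/ (inside the container).
if grep -q "40-updateVocabularies.php" log/initScripts.log 2>/dev/null; then
  echo "init stage reached - repository usable"
else
  echo "still initializing (or log not found)"
fi
```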
You should mainly focus on ARCHE's API, but you can find the ARCHE GUI at http://localhost/browser/
- either install all needed things; information about the needed PHP setup to run ingestions and file checks can be found here
- or use Docker
For the following steps make sure you are in the `testing` directory (`cd testing`).
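A tiny guard you could drop at the top of your own wrapper scripts (an illustration, not part of the shipped scripts):

```shell
# Warn when not run from the testing directory the steps below expect.
if [ "$(basename "$PWD")" != "testing" ]; then
  echo "hint: run this from the testing directory (cd testing)"
fi
```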
- first create and enter a php/arche container by running `./enter_php_container.sh`
- you are now in the mounted `testing` directory (which inside the Docker container is called `data`); if you look around (`ls`) you should see the same files as in the host's `testing` directory
- run `./metadata_ingest.sh` to ingest the "Die eierlegende Wollmilchsau"
- see arche-ingest for some documentation
- first create and enter a php/arche container by running `./enter_php_container.sh`
- you are now in the mounted `testing` directory (which inside the Docker container is called `data`); if you look around (`ls`) you should see the same files as in the host's `testing` directory
- run `./filechecker.sh`
- check the results in `testing/fc_out/{datetime-of-last-run}` (e.g. cd into the directory, start a Python dev server with `python -m http.server` and open the printed URL) - spoiler alert: 2/3 files did not pass the test!
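To jump straight to the newest results directory, something like this should work (a sketch; it assumes the `fc_out/{datetime-of-last-run}` directories sort lexicographically by name):

```shell
# Pick the most recent filechecker output directory under fc_out/
# (datetime-named directories sort correctly as plain strings).
latest=$(ls -d fc_out/*/ 2>/dev/null | sort | tail -n 1)
echo "latest run: ${latest:-none found}"
# cd "$latest" && python -m http.server   # then open the printed URL
```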
- first create and enter a php/arche container by running `./enter_php_container.sh`
- you are now in the mounted `testing` directory (which inside the Docker container is called `data`); if you look around (`ls`) you should see the same files as in the host's `testing` directory
- run `./binaries_import.sh` (well, you actually shouldn't do that, because the files didn't pass the filechecker)
- see repo-file-checker for some documentation
- remove everything and clone the repo again
- comment out the `cp dump.sql -d ${VOLUMES_DIR}/data` line in `init-repo.sh`
- enter the container: `./enter-repo.sh`
- change the user: `su www-data`
- create the dump: `pg_dumpall -f data/dump.sql`
- leave the container
- copy `dump.sql` into the repo root
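The dump steps above can also be run in one shot from the host; a sketch, assuming the `acdh-repo` container from this README is running and that `data/` is visible from its default working directory:

```shell
# Create the Postgres dump without an interactive shell in the container
# (container name and data/ path as used elsewhere in this README).
if docker ps --format '{{.Names}}' 2>/dev/null | grep -qx acdh-repo; then
  docker exec -u www-data acdh-repo pg_dumpall -f data/dump.sql
  echo "dump written to data/dump.sql"
else
  echo "acdh-repo container not running"
fi
```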