A script to process CSVs into a Sinai-ready solr index.
For basic use, you can install feed_ursus as a system-wide command directly from GitHub, without having to clone the repository first.
We recommend installing with pipx. On macOS, you can install pipx (and Python!) with Homebrew:
brew install pipx pyenv
pipx ensurepath
Then:
pipx install git+https://github.com/uclalibrary/feed_ursus.git
pipx installs feed_ursus in its own virtualenv but makes the command accessible from anywhere, so you don't need to activate the virtualenv yourself.
Convert a CSV into a JSON document that follows the data model of an Ursus solr index:
feed_ursus [path/to/your.csv]
This repo includes a docker-compose.yml file that will run local instances of solr and Ursus for use in testing this script. To use them, first install Docker and Docker Compose. Then run:
docker-compose up --detach
docker-compose run web bundle exec rails db:setup
It might take a minute or so for solr to get up and running, at which point you should be able to see your new site at http://localhost:3003. Ursus will be empty because you haven't loaded any data yet.
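Rather than reloading the page until solr comes up, you can poll it programmatically. This is a minimal sketch, not part of feed_ursus itself: it assumes the docker-compose solr at localhost:8983 and the californica core used in the load command, and relies on Solr's standard `/admin/ping` handler.

```python
import time
import urllib.error
import urllib.request


def ping_url(solr_url: str) -> str:
    """Build the URL of Solr's standard ping handler for a core."""
    return solr_url.rstrip("/") + "/admin/ping"


def wait_for_solr(solr_url: str, timeout: float = 120.0, interval: float = 5.0) -> bool:
    """Return True once the core answers its ping handler, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(ping_url(solr_url), timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # solr isn't answering yet; keep polling
        time.sleep(interval)
    return False
```

For example, `wait_for_solr("http://localhost:8983/solr/californica")` returns once the core is ready to accept data.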
To load data from a CSV:
feed_ursus [path/to/your.csv] --solr_url=http://localhost:8983/solr/californica --mapping=dlp
Different metadata mappings are included for general Digital Library use (--mapping=dlp) and for the Sinai Manuscripts Digital Library (--mapping=sinai). Because this script was originally used for the Sinai Manuscripts project, the default value is sinai for backwards compatibility.
For development, clone the repository and use Poetry to set up the virtualenv:
git clone git@github.com:UCLALibrary/feed_ursus.git
cd feed_ursus
pipx install poetry
poetry install
Then, to activate the virtualenv:
poetry shell
The following assumes the virtualenv is active. Alternatively, you can prefix commands with poetry run, e.g. poetry run feed_ursus [path/to/your.csv]
feed_ursus [path/to/your.csv] --solr_url http://localhost:8983/solr/californica
Tests are written for pytest:
pytest
black (formatter) will run in check mode in CI, so make sure you run it before committing:
black .
flake8 (linter) isn't currently running in CI, but should be restored soon:
flake8
pylint (linter) isn't currently running in CI, but should be restored soon:
pylint
mypy (static type checker) isn't currently running in CI, but should be restored soon:
mypy
When importing a work, the script always assumes that an IIIF manifest exists at https://iiif.library.ucla.edu/[ark]/manifest, where [ark] is the URL-encoded Archival Resource Key of the work. This link will work as long as a manifest has been pushed to that location by importing the work into Fester or Californica. If you haven't done one of those, the link will fail and the image won't be visible, but the metadata will still import and be visible. A manifest can then be created and pushed to the expected location without re-running feed_ursus.py.
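To see what URL-encoding the ARK means in practice, here is a sketch of the URL construction in Python. The ARK value is a made-up example, and the exact encoding details (e.g. whether "/" is escaped) may differ from feed_ursus's own implementation:

```python
from urllib.parse import quote

IIIF_BASE = "https://iiif.library.ucla.edu"


def manifest_url(ark: str) -> str:
    """Build the expected IIIF manifest URL: the ARK is URL-encoded
    (including ':' and '/') and placed in the path."""
    return f"{IIIF_BASE}/{quote(ark, safe='')}/manifest"


# Hypothetical ARK, for illustration only:
print(manifest_url("ark:/21198/zz001234"))
# → https://iiif.library.ucla.edu/ark%3A%2F21198%2Fzz001234/manifest
```

If the work hasn't been imported into Fester or Californica yet, this URL simply won't resolve until a manifest is pushed there.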