This is an alpha release.
- The `AlmaBatch` object can work with either a pandas `DataFrame` or a Python list of dictionaries. For each row in the data, it makes an asynchronous API call, using the columns or keys as parameters. Calls can be executed in batches (e.g., 500 at a time) or as one single batch. Errors are caught and reported for the relevant rows, and API output can be serialized and saved locally.
- The current release handles only HTTP calls without a payload, but I plan to build in support for `POST` and `PUT` operations with payloads in a future release.
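The batching behavior described above can be sketched as follows. This is an illustrative stand-in, not the actual AlmaBatch implementation: the function names (`call_api`, `run_in_batches`) and the `barcode` parameter are hypothetical, but the pattern is the same — one asynchronous call per row, executed in fixed-size batches, with per-row errors caught rather than aborting the run.

```python
# Sketch of the batching pattern (hypothetical names, not AlmaBatch's API).
import asyncio

async def call_api(row):
    # Stand-in for a real HTTP request using the row's keys as parameters.
    if "barcode" not in row:
        raise ValueError("missing required parameter: barcode")
    return {"barcode": row["barcode"], "status": "ok"}

async def run_in_batches(rows, batch_size=500):
    results = []
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        # return_exceptions=True keeps one failed row from aborting the batch.
        outcomes = await asyncio.gather(
            *(call_api(row) for row in batch), return_exceptions=True
        )
        for row, outcome in zip(batch, outcomes):
            if isinstance(outcome, Exception):
                results.append({"row": row, "error": str(outcome)})
            else:
                results.append(outcome)
    return results

rows = [{"barcode": "123"}, {"item": "no-barcode"}, {"barcode": "456"}]
print(asyncio.run(run_in_batches(rows, batch_size=2)))
```

Gathering each batch with `return_exceptions=True` is what lets errors be reported per row while the remaining calls still complete.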
- Clone this repo.
- Create a new environment (preferably using Python 3.8+).

  ```shell
  python -m venv ENV
  source ENV/bin/activate
  ```

- Install the requirements.

  ```shell
  pip install -r requirements.txt
  ```
- Edit _config.yml_ with the following information:
  - The URL for the OpenAPI specs (in JSON) for the particular API endpoint you're using (from the Ex Libris Dev Center documentation).
  - A valid Ex Libris API key.
  - The base URL for your Alma API endpoint.
  - The particular endpoint you are using (from the Ex Libris Dev Center documentation).
  - The name of an output file (in CSV format), including its path on your local machine, where AlmaBatch will report the results of each call.
  - An optional path to a folder (on your local machine) for serializing the results from the API calls.
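A _config.yml_ covering the items above might look roughly like the sketch below. The key names here are illustrative guesses, not the actual keys AlmaBatch expects — check the _config.yml_ shipped with the repo for the real ones.

```yaml
# Hypothetical key names; see the repo's config.yml for the actual schema.
openapi_spec: <URL to the endpoint's OpenAPI JSON spec>
api_key: <your Ex Libris API key>
base_url: <base URL for your Alma API endpoint>
endpoint: <endpoint path from the Ex Libris Dev Center docs>
output_file: ./results.csv          # where each call's result is reported
serialize_dir: ./api_output/        # optional: folder for serialized API output
```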
- Call `AlmaBatch` from a separate script or Jupyter notebook (as shown in `alma_batch_examples.ipynb`). The notebook contains a worked example for retrieving items from an Alma Physical Item set and scanning those items in. It also contains an example for scanning in items from a CSV file (such as one output by an Alma Analytics report).
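Since AlmaBatch accepts a Python list of dictionaries, a CSV file (such as an Alma Analytics export) can be fed to it after a stdlib conversion like the one below. The column names (`barcode`, `library`) are illustrative only; see the notebook for the actual AlmaBatch call, which is not reproduced here.

```python
# Sketch: turning CSV data into the list-of-dicts shape AlmaBatch accepts.
# An in-memory string stands in for a file on disk; columns are hypothetical.
import csv
import io

csv_text = "barcode,library\n39002012345678,MAIN\n39002087654321,SCI\n"
rows = list(csv.DictReader(io.StringIO(csv_text)))
print(rows)
```

Each row becomes one dictionary whose keys (the CSV headers) supply the parameters for that row's API call.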