Tools to set up an ElasticSearch instance fed with subsets of Wikidata, to answer questions like "give me all the humans with a name starting with xxx" in a super snappy way, typically for the needs of an autocomplete field.
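Once the instance is up and filled, such a prefix search could look like the sketch below; the `labels.en` field name and its mapping are assumptions for illustration, the actual field depends on how you import and map the entities.

```sh
# A minimal sketch of the kind of autocomplete query this setup enables;
# "labels.en" is an assumed field name, adapt it to your document mapping
curl "http://localhost:9200/wikidata/humans/_search" -d '
{
  "query": { "prefix": { "labels.en": "victor" } }
}'
```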
For the version tailored to inventaire's needs, see the #integrateInvEntities branch.
See setup to install the dependencies (a rough install sketch also follows this list):
- NodeJS
- ElasticSearch
- Nginx
- Let's Encrypt
- curl, gzip (already installed on any good *nix system)
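As a very rough sketch, on a Debian-based system this could look like the following; package names are assumptions, and ElasticSearch and Let's Encrypt usually come from dedicated repositories, so refer to each project's own install documentation:

```sh
# Assumed package names on a Debian-based system; adapt to your distribution
sudo apt-get update
sudo apt-get install curl gzip nginx nodejs
# ElasticSearch and the Let's Encrypt client are typically installed from
# their own repositories, following their respective official documentation
```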
There are 3 ways to import entity data into your ElasticSearch instance (for the gist of the per-entity approach, see the sketch after this list):
- Wikidata filtered-dump import
- Wikidata batch import using SPARQL queries results
- Wikidata per-entity import
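To give the gist of the per-entity approach, here is a rough sketch of what such an import boils down to: fetch an entity's JSON from the Wikidata API and index the document under its id. This is an illustration, not the project's actual import script; it assumes jq is installed and reuses the index and type from the search example below.

```sh
# Illustration only: a hand-rolled per-entity import.
# Q535 (Victor Hugo) is just an example id; jq is an assumed extra dependency.
curl -s "https://www.wikidata.org/wiki/Special:EntityData/Q535.json" \
| jq '.entities.Q535' \
| curl -s -XPUT "http://localhost:9200/wikidata/humans/Q535" -d @-
```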
To un-index entities that were mistakenly added, pass the path of a results JSON file, expected to contain an array of ids: all the documents matching those ids will be deleted:
```sh
npm run delete-from-results ./queries/results/mistakenly_added_ids.json
```
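Per the description above, the results file is presumably just a plain JSON array of ids; a minimal made-up example:

```sh
# Hypothetical results file: a JSON array of Wikidata ids (made-up values)
echo '["Q123", "Q456", "Q789"]' > ./queries/results/mistakenly_added_ids.json
npm run delete-from-results ./queries/results/mistakenly_added_ids.json
```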
curl "http://localhost:9200/wikidata/humans/_search?q=Victor%20Hugo"
We are developing and maintaining tools to work with Wikidata from NodeJS, the browser, or simply the command line, with quality and ease of use at heart. Any donation will be interpreted as a "please keep going, your work is very much needed and awesome. PS: love". Donate
- wikidata-sdk: a JavaScript tool suite to query and work with Wikidata data, heavily used by wikidata-cli
- wikidata-edit: Edit Wikidata from NodeJS
- wikidata-cli: The command-line interface to Wikidata
- wikidata-filter: A command-line tool to filter a Wikidata dump by claim
- wikidata-taxonomy: Command-line tool to extract taxonomies from Wikidata
- Other Wikidata external tools
Do you know inventaire.io? It's a web app to share books with your friends, built on top of Wikidata! And it's libre software too.