South-East Asia is home to more than 1,000 native languages. Nevertheless, South-East Asian NLP is underrepresented in the research community, and one of the reasons is the lack of access to public datasets (Aji et al., 2022). To address this issue, we initiated SEACrowd, a joint collaboration to collect NLP datasets for South-East Asian languages. Help us collect and centralize South-East Asian NLP datasets, and be a co-author of our upcoming paper.
You can contribute by proposing an NLP dataset that is not yet registered in our approved records or our in-review datasets. Just fill out this form, and we will check and approve your entry if it meets our requirements (see this for the detailed checklist).
We will award contribution points based on several factors, including supported modality, language scarcity, and task scarcity.
You can also propose datasets from your past work that have not been released to the public. In that case, you must first make your dataset open by uploading it publicly, e.g., via GitHub or Google Drive.
You can submit multiple entries, and if your total contribution points are above the threshold, we will include you as a co-author (generally, proposing 1-2 datasets is enough). Read the full method for calculating points here.
Note: We are not taking any ownership of the submitted dataset. See FAQ below.
Yes! Aside from collecting new datasets, we are also centralizing existing datasets into a single schema that makes it easier for researchers to use South-East Asian NLP datasets. You can help us there by building dataset loaders. More details about that here.
Alternatively, we are also listing NLP research papers on South-East Asian languages whose datasets have not been opened yet. We will contact the authors of these papers later to get involved in SEACrowd. More about this is available on our Discord server.
SEACrowd does not make a clone or copy of the submitted dataset; ownership of any submitted dataset remains with the original author. SEACrowd simply builds a dataloader, i.e., a file downloader + reader, to simplify and standardize the data reading process. We also collect and centralize only the metadata of the submitted dataset, listing it in our catalogue for better discoverability of your dataset! A citation to the original data owner is also provided both in SEACrowd and in our catalogue.
The license for a dataset is not always obvious. Here are some strategies to try in your search:
- check for files such as README or LICENSE that may be distributed with the dataset itself
- check the dataset webpage
- check publications that announce the release of the dataset
- check the website of the organization providing the dataset
If no official license is listed anywhere, but you find a webpage that describes general data usage policies for the dataset, you can fall back to providing that URL in the _LICENSE variable. If you can't find any license information, please note this in your PR and put _LICENSE="Unknown" in your dataset script.
You can upload your dataset publicly first, e.g., on GitHub. If you are the owner of a private dataset and have been contacted by a SEACrowd representative about the possibility of opening that dataset, you may visit this Private Dataset FAQ.
If you have an idea to improve or change the code of the seacrowd-datahub repository, please create an issue and ask for feedback before starting any PRs.
Yes, you can ask for help in SEACrowd's community channel! Please join our Discord server.
We greatly appreciate your help!
The artifacts of this initiative will be described in a forthcoming academic paper targeting a machine learning or NLP audience. Please refer to this section for the contribution rewards for helping South-East Asian NLP. We recognize that some datasets require more effort than others, so please reach out if you have questions. Our goal is to be inclusive with credit!
Our initiative is heavily inspired by NusaCrowd, which provides open access to 100+ Indonesian NLP corpora. You can check the NusaCrowd paper at the following link.