
Art Description Generation for Paintings

This repository contains the annotated data for the paper Explain Me the Painting: Multi-Topic Knowledgeable Art Description Generation, published at ICCV 2021.

The code for the model introduced in the paper can be found in this other repository.

Data

Art descriptions can be classified into three main topics [1]:

  • Form: describes how the artwork looks.
  • Content: describes what the artwork is about.
  • Context: describes in what circumstances the work was done.

In order to study art description generation, we annotated a subset of the SemArt dataset [2], associating each sentence in a description to one of the above topics.

In total, we annotated 33,543 sentences from 17,249 images.

We release this data to the public in order to promote research on machine learning for art.

Annotations can be found in the annotations/ directory in JSON format. The data format is as follows:

annotations[{
   "img" : str,
   "description" : str,
   "content" : [str],
   "form" : [str],
   "context" : [str],
}]

where img is the filename of the image in the SemArt dataset, description is the original description, and content/form/context are lists with the sentences annotated to each topic, respectively. If an image doesn't have any sentences for a certain topic, the list for that topic is empty.
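The format above can be read with any standard JSON library. The following is a minimal sketch in Python; the inline sample record is illustrative, and the annotation filename mentioned in the comment is an assumption (check the annotations/ directory for the actual file names):

```python
import json

# Self-contained sample record mirroring the annotation schema.
# In practice you would load the real file, e.g.:
#   with open("annotations/train.json") as f:  # filename assumed
#       annotations = json.load(f)
sample = '''[{
  "img": "12345-painting.jpg",
  "description": "A portrait. Painted in 1650.",
  "content": ["A portrait."],
  "form": [],
  "context": ["Painted in 1650."]
}]'''

annotations = json.loads(sample)

# Count annotated sentences per topic; a topic's list is empty
# when no sentence in the description was assigned to it.
for ann in annotations:
    for topic in ("form", "content", "context"):
        print(ann["img"], topic, len(ann[topic]))
```

Each record keeps the original description intact, so the topic lists can be cross-checked against it sentence by sentence.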

Code

In the paper, we introduced a model to generate multi-topic knowledgeable descriptions for paintings. The code for this model can be found in this repository.

Maintenance

If you have questions about the data in this repository, please contact Noa Garcia.

References

[1] Robert Belton. Art history: A preliminary handbook. British Columbia: University of British Columbia, 1996.

[2] Noa Garcia and George Vogiatzis. How to read paintings: Semantic art understanding with multi-modal retrieval. In Proc. ECCV Workshops, 2018.

Citation

If you find the data in this repository useful, please cite our paper:

@InProceedings{bai2021explain,
   author    = {Zechen Bai and Yuta Nakashima and Noa Garcia},
   title     = {Explain Me the Painting: Multi-Topic Knowledgeable Art Description Generation},
   booktitle = {International Conference on Computer Vision},
   year      = {2021},
}
