... for when the annotation provides some feature or functionality to the target resource(s), either directly or by using the body resource(s).
For example, a client would benefit from knowing that an annotation provides captions to an AV canvas (or multi-media scene) via the body VTT resource.
This would be parallel to the `accessibility` property already available on body and target resources, which conveys that the resource already has the particular feature. For example, a video with burned-in captions has the `accessibility` feature of `openCaptions`.
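To make the parallel concrete, the existing `accessibility` property might appear on a body resource like this (a sketch; the URI is hypothetical, and the exact value shape follows however `accessibility` is already defined):

```json
{
  "id": "https://example.org/video-with-captions.mp4",
  "type": "Video",
  "format": "video/mp4",
  "accessibility": "openCaptions"
}
```

Here the feature is a property of the resource itself, whereas `provides` would describe what an annotation adds to its target.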
I think this would be a good solution for the use cases identified by the AV Annotations Motivations TSG.
The schema.org vocabulary covers the use cases related to open captions, audio descriptions, and transcriptions. The only case that is missing is subtitles.
Compared with the alternative solution of using the `behavior` property with the values `overlay` and `sidebar`, this proposal makes it more explicit what kind of functionality the annotations provide.
This was discussed in the AV Community Call on 9/9/24. Overall, there was lots of positive feedback on this suggestion, and it was much preferred over a `behavior` property with the values `overlay` and `sidebar`.
Some questions came up:
Will the available vocabulary include everything in the accessibilityFeature property (section 4.2) or just a defined set that matches the AV Annotations TSG use cases? There is support and interest in allowing broader use of this vocabulary, but also concern that 4.2.1 (Structure and Navigation Terms) could cause confusion with how manifest producers are supposed to implement things like table of contents.
Would this property be available for non-AV usage? There was initial interest in how things like 'transcripts' might be applied to manuscripts, etc. for accessibility purposes.
There may be a need to add new vocabulary as we continue to use AI to develop new kinds of structured annotations. Presumably this could be done within IIIF itself, rather than via schema.org? We already have a potential new vocabulary entry: subtitles.
A non-AV use case came to my mind after the discussion in the AV Community Call on 9/9/24.
It is the case of supplementing annotations on a canvas (with an image), where the annotations contain the text from OCR with position information on the image. These annotations are not meant to be shown to the user; they are intended to be used to highlight words on the image after a user search via IIIF Content Search.
A value for the `provides` property that informs the IIIF client that such annotations are meant to provide this functionality could also be considered.
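A sketch of what such an OCR annotation could look like. The `provides` value `contentSearch` is purely illustrative (no such term exists yet), and the URIs and fragment selector are hypothetical:

```json
{
  "id": "https://example.org/anno/ocr-1",
  "type": "Annotation",
  "motivation": "supplementing",
  "provides": "contentSearch",
  "body": {
    "type": "TextualBody",
    "value": "word recognised by OCR",
    "format": "text/plain"
  },
  "target": "https://example.org/canvas/1#xywh=100,200,50,20"
}
```

A client seeing this value could index the annotation for Content Search highlighting rather than rendering it on the canvas.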
The initial list of features for `provides` would be from the a11y vocab list, to mirror the existing property: https://www.w3.org/community/reports/a11y-discov-vocab/CG-FINAL-vocabulary-20230718/#accessibilityFeature-vocabulary. However, in the future we could add our own entries to cover new use cases.
An example annotation:
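(The example did not survive in this copy; the following is a sketch of what it could look like, using the a11y vocab term `captions` as the `provides` value and hypothetical URIs.)

```json
{
  "id": "https://example.org/anno/captions-1",
  "type": "Annotation",
  "motivation": "supplementing",
  "provides": "captions",
  "body": {
    "id": "https://example.org/captions-en.vtt",
    "type": "Text",
    "format": "text/vtt",
    "language": "en"
  },
  "target": "https://example.org/canvas/av-1"
}
```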
( /cc @nfreire @glenrobson)