Draw Me Like My Triples: Leveraging Generative AI for Wikidata Image Completion
Abstract
Humans are critical for the creation and maintenance of high-quality Knowledge Graphs (KGs). However, creating and maintaining large KGs with human effort alone does not scale, especially for multimedia contributions (e.g., images) that are hard to find and reuse on the Web and expensive for humans to produce from scratch. We therefore leverage generative AI to create images for Wikidata items that lack them. Our approach takes the knowledge contained in the Wikidata triples of items describing fictional characters with missing images and uses a T5 model fine-tuned on the WDV dataset to verbalise those triples into natural-language descriptions. These descriptions then serve as prompts for a transformer-based text-to-image model, Stable Diffusion v2.1, which generates plausible candidate images for Wikidata image completion. We design and implement quantitative and qualitative approaches to evaluate the plausibility of our method, including a survey to assess the quality of the generated images.
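The pipeline described above can be illustrated with a minimal sketch using the Hugging Face transformers and diffusers libraries. The checkpoint name "t5-base", the triple linearisation format, and the prompt prefix are placeholders and assumptions for illustration, not the paper's actual fine-tuned WDV verbaliser or its input encoding; only the Stable Diffusion v2.1 model identifier is a real published checkpoint.

```python
# Sketch of the triples-to-image pipeline (assumed model names and input format).
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer
from diffusers import StableDiffusionPipeline

# 1) Verbalise Wikidata triples into a natural-language description.
#    "t5-base" stands in for the WDV-fine-tuned T5 checkpoint; the linearised
#    triple format below is a hypothetical encoding, not the paper's own.
tokenizer = T5Tokenizer.from_pretrained("t5-base")
verbaliser = T5ForConditionalGeneration.from_pretrained("t5-base")

linearised_triples = (
    "translate Graph to English: "
    "<H> Sherlock Holmes <R> occupation <T> detective "
    "<H> Sherlock Holmes <R> residence <T> 221B Baker Street"
)
inputs = tokenizer(linearised_triples, return_tensors="pt")
output_ids = verbaliser.generate(**inputs, max_new_tokens=64)
description = tokenizer.decode(output_ids[0], skip_special_tokens=True)

# 2) Use the generated description as a prompt for Stable Diffusion v2.1
#    to produce a candidate image for the item.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
image = pipe(description).images[0]
image.save("candidate_image.png")
```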