27972 DAL-ART: Deep Active Learning Tool for Detecting Features of Interest in Images with Application to Digital Painting Analysis


Deep learning methods typically require huge amounts of annotated data to optimize their massive numbers of parameters for a given task and to achieve robust performance and reasonable generalization. In the current big data era, with an enormous surge of visual content across the internet, massive amounts of data are available for many computer vision tasks, which has contributed to their flourishing. In many domains, however, access to annotated data is rather limited, especially when annotation requires tedious manual work from domain experts. This is typically the case in medical imaging, where trained clinicians need to manually annotate pathologies of interest, but also in remote sensing and art investigation, among others. Recently, there has been increasing interest in deep active learning.

Active learning in general aims to maximize model performance with as few labeled samples as possible by actively engaging feedback, typically from a human in the loop. The learning process is improved iteratively: in each iteration, a user provides relatively few extra annotations to correct wrong predictions of the model, or to label instances where the model was uncertain. In this way, the model can eventually be trained well on less annotated data, and in a way that is more trustworthy for users in critical domains, since the domain expert influences the learning process more directly.
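The iterative loop described above can be sketched in a few lines. The following is a minimal illustration, not the method to be developed in this thesis: it uses a toy nearest-centroid classifier on synthetic 2-D features (standing in for "crack" vs. "no crack" patches) and uncertainty sampling as the query strategy; all names and data here are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian blobs standing in for "crack" vs. "no crack" patch features.
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

labeled = [0, 1, 100, 101]  # tiny initial annotated set (two samples per class)
unlabeled = [i for i in range(len(X)) if i not in labeled]

def fit_centroids(X, y, idx):
    # "Train" a nearest-centroid classifier on the labeled subset only.
    return {c: X[[i for i in idx if y[i] == c]].mean(axis=0) for c in (0, 1)}

def predict_proba(model, X):
    # Soft class scores derived from distances to the two class centroids.
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in (0, 1)], axis=1)
    p = np.exp(-d)
    return p / p.sum(axis=1, keepdims=True)

for _ in range(5):
    model = fit_centroids(X, y, labeled)
    probs = predict_proba(model, X[unlabeled])
    # Uncertainty sampling: query the sample the model is least confident about.
    query = unlabeled[int(np.argmin(np.abs(probs[:, 0] - 0.5)))]
    labeled.append(query)     # the human "oracle" supplies the true label y[query]
    unlabeled.remove(query)

model = fit_centroids(X, y, labeled)
acc = float((np.argmax(predict_proba(model, X), axis=1) == y).mean())
print(f"accuracy after 5 queries: {acc:.2f}")
```

In a DAL setting, the centroid classifier would be replaced by a deep network and the oracle step by the annotation interface, but the query-annotate-retrain cycle has the same shape.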

This domain is currently emerging under the name Deep Active Learning (DAL) [1], [2]. In this thesis, we want to develop an efficient DAL method and tool for detecting features of interest in digital images. In particular, we focus on an application in digital painting analysis: detecting cracks in master paintings [3]. This is a challenging domain where annotated data are indeed scarce and where active learning is in high demand.

Moreover, this thesis fits into the emerging domain of AI & Arts, in which the research group GAIM is very active (e.g., in AI & Arts webinars at the Alan Turing Institute, international AI & Art webinars, and VAIA's AI in Cultural Heritage).



In this thesis, we shall start from deep learning models that were developed at GAIM for crack detection in paintings [3] and build a deep active learning framework on top of them.
An important challenge will be building an efficient DAL framework that allows saving the currently trained model and retraining it efficiently with newly added annotations. Another challenge lies in building the actual tool, which must allow reading and displaying large images, adding new annotations, and passing them to the deep learning model. The concrete tasks will be:



In the beginning phase, the initial training set of annotated painting cracks will need to be extended with more annotations, which can be done using free image editing software such as GIMP. After the initial training of the model, active learning should enable gradual improvement.
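One way the checkpoint-and-retrain step could work is to save the current model state and warm-start from it when new annotations arrive, rather than retraining from scratch. The sketch below illustrates this idea only: `CentroidModel` is a hypothetical stand-in for the crack-detection network (its incremental `update` makes warm-starting trivial), and the checkpoint file name and API are assumptions, not the GAIM codebase.

```python
import os
import pickle
import tempfile

import numpy as np

class CentroidModel:
    """Stand-in for the crack-detection network: running class centroids
    that can be updated incrementally with freshly annotated patches."""
    def __init__(self):
        self.sums = {0: np.zeros(2), 1: np.zeros(2)}
        self.counts = {0: 0, 1: 0}

    def update(self, X, y):
        # Incremental "training": accumulate per-class feature sums.
        for xi, yi in zip(X, y):
            self.sums[int(yi)] += xi
            self.counts[int(yi)] += 1

    def predict(self, X):
        cent = {c: self.sums[c] / max(self.counts[c], 1) for c in (0, 1)}
        d = np.stack([np.linalg.norm(X - cent[c], axis=1) for c in (0, 1)], axis=1)
        return np.argmin(d, axis=1)

# Initial training on the starting annotation set (toy 2-D features).
rng = np.random.default_rng(1)
X0 = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
y0 = np.array([0] * 20 + [1] * 20)
model = CentroidModel()
model.update(X0, y0)

# Checkpoint the current model so a later session can resume from it.
ckpt = os.path.join(tempfile.gettempdir(), "dal_checkpoint.pkl")
with open(ckpt, "wb") as f:
    pickle.dump(model, f)

# Later session: reload and retrain efficiently with the new annotations only.
with open(ckpt, "rb") as f:
    model = pickle.load(f)
Xnew = rng.normal(2, 1, (5, 2))   # five freshly annotated "crack" patches
model.update(Xnew, np.ones(5))
```

For a deep network, the same pattern would use the framework's native checkpointing and a few fine-tuning epochs on the enlarged annotation set instead of the incremental centroid update.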

This work will include interaction with art scholars and art restorers from KIK/IRPA during the development of the interface and the testing of the active learning tool.
If successfully developed, the tool can become important for art restorers in museums and for art scholars in general, and relevant (among others) for the next phase of the restoration of the Ghent Altarpiece [4], [5]. Moreover, the principles of the developed method will be widely applicable in various image processing and computer vision tasks, especially in the medical imaging domain.

[1] P. Ren et al., “A Survey of Deep Active Learning,” arXiv, 2021. https://arxiv.org/pdf/2009.00236.pdf
[2] GitHub: deep-active-learning. https://github.com/ej0cl6/deep-active-learning
[3] R. Sizyakin, B. Cornelis, L. Meeus, H. Dubois, M. Martens, V. Voronin, and A. Pizurica, “Crack Detection in Paintings Using Convolutional Neural Networks,” IEEE Access, vol. 8, pp. 74535-74552, 2020. https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9072114
[4] A. Pizurica, “Het Lam in nullen en enen: Digitaal speuren naar degradaties” (in Dutch), EOS Wetenschap, 2020. https://telin.ugent.be/~sanja/Papers/LamGods/078-081%20VERFANALYSE%20VANEYCKSP.pdf
[5] A. Pizurica et al., “Digital Image Processing of The Ghent Altarpiece,” IEEE Signal Processing Magazine, vol. 32, no. 4, pp. 112-122, July 2015. https://telin.ugent.be/~sanja/Papers/LamGods/IEEE-SPM-2015