• Workshop description:

    A key bottleneck of modern computational approaches to visual image analysis is the inability of many techniques to detect on their own when they are making incorrect or overconfident predictions. This issue, which falls under the general heading of Reliable AI, has been addressed by approaches that either model the uncertainties affecting the problem (e.g., Bayesian Neural Networks, Deep Gaussian Processes, etc.) or provide tools to gain insight into the model’s decision-making process (e.g., Grad-CAM, explainable AI, prototypical networks, etc.).


    Although many of these techniques are Bayesian, non-probabilistic alternatives, such as topological uncertainty, have recently emerged thanks to their intuitiveness and ease of use.


    Uncertainty-aware techniques allow users not only to classify a prediction as reliable or unreliable (i.e., requiring careful handling and/or human intervention), but also to detect so-called out-of-distribution elements, i.e., inputs that do not belong to the estimated distribution of the training set and therefore lead to unreliable predictions (a minimal illustration is sketched at the end of this description).


    Given the importance and timeliness of uncertainty-aware techniques, the workshop will focus on recent advances and modern applications of reliable AI in image analysis and other high-stakes domains.
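
    To give a concrete flavour of the techniques in scope, the sketch below (Python/PyTorch; the model, inputs, and sample count are illustrative assumptions, not a prescribed method) uses Monte Carlo dropout to attach a predictive-entropy score to each classification, so that high-entropy predictions can be flagged as unreliable or routed to a human reviewer:

```python
# Minimal sketch: Monte Carlo dropout as an uncertainty-aware wrapper around
# any classifier that contains dropout layers. Names are illustrative.
import torch
import torch.nn.functional as F

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=20):
    """Average the softmax over stochastic forward passes; return labels and predictive entropy."""
    model.train()  # keep dropout active at inference time
    probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)  # shape: (batch, classes)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs.argmax(dim=-1), entropy

# Predictions whose entropy exceeds a threshold chosen on a validation set
# would be treated as unreliable and passed on for human review.
```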



Workshop Topics:

  • Reliable AI in visual tasks:
    The reliability of ML models is often conveyed through a variety of additional techniques that associate each prediction with an estimate of confidence and robustness, or that provide insight into the mechanism governing the model’s decision process. These estimates, when properly integrated, expand the possibilities for use in high-stakes domains (such as autonomous driving or biomedical signal analysis) in which only small margins of error are acceptable.
  • Out-of-distribution detection:
    ML and DL models are typically unable to recognize when they are asked to predict on inputs that lie outside the training distribution. Detecting out-of-(training)-distribution elements is therefore a critical tool in high-stakes applications, both to increase confidence in ML models and to determine when human intervention is required (a minimal post-hoc example is sketched after this topic list).
  • Uncertainty quantification methods and applications:
    Many modern DL techniques build on classical uncertainty quantification methods that either integrate uncertainties directly into the model (intrusive methods) or produce probabilistic/non-probabilistic estimates of model reliability. Some classical techniques have already been adapted to modern image-analysis DL frameworks (such as Deep Gaussian Processes), but many others still lack a deep-learning counterpart, opening the way to interesting research topics.
  • Efficient Reliability Estimates:
    The computational cost and implementation complexity of many uncertainty-aware AI techniques hinder their adoption. Particularly in visual analysis, this high computational cost makes post-hoc and surrogate techniques, which require little implementation effort and allow reliability scores to be computed efficiently, an increasingly relevant topic.
  • Open source reliability software: Modern techniques for reliability estimation and out-of-distribution detection, particularly for image analysis, are rarely available as easy-to-use open-source software, which limits their scientific impact. Contributions providing software that is easily applicable and transferable to other application domains are particularly welcome.
  • Learning to defer (L2D) in imaging:
    One of the main applications of reliable AI is the ability to correctly identify unreliable predictions and defer the decision to a secondary model or to human intervention. Both Bayesian approaches and non-probabilistic methods can provide multiple alternatives to the experimenter and can be adapted to different image-processing domains (see the deferral sketch after this list).
  • Application to biomedical imaging:
    Among the high-stakes domains, biomedical imaging requires models capable of correctly detecting OOD elements and identifying those cases that require medical intervention.
  • Semantic Segmentation Reliability:
    Uncertainty-aware methods are also useful for segmentation tasks involving blurred and/or low-contrast contours. In addition, many approaches can integrate multiple annotations to produce a segmentation that attaches a reliability score to the more ambiguous regions.
  • Object Detection Under Uncertainty:
    In many image-processing applications for anomaly detection, it is critical that a detection is reported only once a certain level of confidence has been reached. Examples include machine learning methods for video surveillance, where human intervention is requested only when the method is actually confident in what has been detected.
  • Uncertainty in Domain Adaptation and Generalization:
    When a network is used outside the domains for which it was originally trained, as in the case of domain adaptation, the presence of reliability scores makes it possible to assess the extent to which the domains are different and whether any fine-tuning carried out has produced satisfactory results.
  • Adversarial Robustness:
    One of the by-products of reliable AI is the ability to withstand data perturbations, or at least to identify when minimal changes can alter the outcome in unpredictable ways. This applies in particular to adversarial methods, which aim to expose these network vulnerabilities in order to build more robust solutions.
  • Variational approaches to handle Imbalanced Datasets:
    Variational and uncertainty-aware methods offer several strategies for dealing with small and/or imbalanced datasets. This is particularly relevant for application domains (such as medical imaging) characterized by a high degree of imbalance and heterogeneity.
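
As an illustration of the out-of-distribution detection and efficient post-hoc reliability topics above, the following sketch computes the maximum softmax probability score of Hendrycks & Gimpel (2017) on top of an already trained classifier; the model name and threshold are illustrative assumptions rather than a recommended configuration:

```python
# Minimal post-hoc OOD sketch: maximum softmax probability (MSP).
# No retraining is required; only forward passes through a trained model.
import torch
import torch.nn.functional as F

@torch.no_grad()
def msp_score(model, x):
    """Higher score = more in-distribution-like."""
    model.eval()
    return F.softmax(model(x), dim=-1).max(dim=-1).values

def is_out_of_distribution(score, threshold=0.7):
    # The threshold is typically calibrated on a held-out in-distribution
    # set (e.g., at a fixed true-positive rate).
    return score < threshold
```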
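
Similarly, for the learning-to-defer topic, a minimal confidence-based rejector could be structured as below; all names and the fixed threshold are hypothetical, and in practice the deferral rule itself can be learned jointly with the classifier:

```python
# Minimal learning-to-defer sketch: return the model's prediction when it is
# confident enough, otherwise defer to a human expert or secondary model.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    label: Optional[int]   # None when the case is deferred
    deferred: bool
    confidence: float

def predict_or_defer(label: int, confidence: float, tau: float = 0.8) -> Decision:
    """Defer whenever the model's confidence falls below the threshold tau."""
    if confidence < tau:
        return Decision(label=None, deferred=True, confidence=confidence)
    return Decision(label=label, deferred=False, confidence=confidence)
```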