Artificial Intelligence (A.I.) systems are achieving remarkable performance in radiology, but at the cost of increased complexity, which makes them less interpretable.
 
As these systems are increasingly introduced into radiology, we believe it is imperative to develop dedicated methodologies that enhance the interpretability of A.I. technologies.
 
Interpretability methods could help physicians decide whether to follow or trust a prediction from an A.I. system. Ultimately, the interpretability of A.I. systems is closely linked to safety in healthcare.
 
The following poll collects radiologists’ opinions about methods to enhance the interpretability of A.I. systems developed to assist radiologists.

We thank you in advance for taking five minutes to answer this poll. The results will be made publicly available and included in a related publication discussing this topic.
 
Kind regards,
Mauricio Reyes, Prof. Ph.D.
On behalf of the organisers and supporters of iMIMIC Workshop (Interpretability of Machine Intelligence in Medical Image Computing) 

* 1. Position

* 2. Place of work

* 3. Gender

* 6. Subspecialty (check up to 3)

* 7. How should these methods help radiologists? (Please rank from most to least important)

* 8. Years of experience

* 9. Do you think interpretability/explainability methods for A.I. systems are a must for the future of radiology and A.I.?

* 10. What kind of information would you prefer from interpretability methods? (Please rank from most to least important)

* 11. Which aspect of interpretability of A.I. systems is most critical for you at present? (Please rank from most to least important)

* 12. On a scale of 1-100, how comfortable would you feel using an A.I. system without having an explanation of the system’s outputs? (1: not at all; 100: very comfortable)


* 13. On a scale of 1-100, how much financial benefit do you think an interpretability system for A.I. would bring to your work? (1: no benefit at all; 100: considerable financial benefits)

