The Reasoner is a monthly digest highlighting exciting new research on reasoning, inference and method broadly construed. It is interdisciplinary, covering research in, e.g., philosophy, logic, AI, statistics, cognitive science, law, psychology, mathematics and the sciences. Each month, there is a column on Evidence-Based Medicine. Here is this month’s column:
Last month a paper on the role of pigeons as trainable observers of pathology and radiology breast cancer images was published in PLOS ONE. Among other things, the authors of the paper, Richard M. Levenson, Elizabeth A. Krupinski, Victor M. Navarro, and Edward A. Wasserman, were interested in finding out whether pigeons could be trained to discriminate between benign and malignant pathology and radiology images. The objective is not to rely on pigeons for clinical diagnostic support, but rather to promote the pigeon as an appropriate animal model for human observers in medical image perception studies. In particular, constantly updated medical image recognition and display technologies must be validated by trained observers, who can be expensive and hard to recruit. The authors suggest that trained pigeons could serve as a cost-effective, tractable, relevant, informative, and statistically interpretable surrogate for human observers, helping to determine the reliability of these new technologies.
The research was in part motivated by other recent studies reporting that pigeons are comparable to humans at discrimination tasks in other areas. For example, studies have reported that pigeons can distinguish paintings by Monet from paintings by Picasso, and that they can distinguish human male from female faces. The results of this paper are consistent with these findings. After training, the pigeons were able to distinguish benign from malignant human breast histopathology and to detect the presence of microcalcifications on mammogram images, but they had difficulty evaluating the malignant potential of detected breast masses. The pigeons’ performance here corresponds closely to human performance. The authors maintain that this ‘indicates that birds may be relatively faithful mimics of the strengths and weaknesses of human capabilities when viewing medical images’.
Granting these results, however, pigeons might still not be good models for human observers in these areas, since they might be achieving results comparable to humans’ by entirely different means. For example, it seems that the way in which pigeons discriminate human male from female faces is largely texture-based. The authors acknowledge this problem and try to alleviate it by offering some evidence about mechanisms. In particular, they argue that ‘[t]he specific underlying mechanisms of visual learning appear to be similar between avians and primates’ and that ‘the anatomical (neural) pathways that are involved…appear to be functionally equivalent to those in humans’. The authors conclude that ‘on balance, it appears that pigeons’ visual discrimination abilities and underlying neural pathways are sufficiently similar to those of humans when challenged by medical image evaluation tasks as to have potential practical significance’. Considerations such as these suggest that this paper nicely highlights the role of different sorts of evidence in determining whether a particular animal model is appropriate for a particular task. In order to argue that the pigeon is an appropriate model for human observers here, the authors provide both evidence that the pigeon performs similarly to humans in the relevant observation tasks and evidence that this similar performance is attributable to similar underlying mechanisms.