Transformation: Embracing Change – Digital & Multimedia Sciences Section, November 2015


Disclaimer: The views and opinions expressed in the articles contained in the Academy News are those of the identified authors and do not necessarily reflect the official policy or position of the Academy.

Objectivity, Cognitive Bias, and Multimedia Forensics

Source: Jeff M. Smith, MS

Cognitive bias is the brain’s influence on the decision-making process that leads to error. This fascinating field of research examines how people react in specific situations and gives names to their deviations in judgment, perceptual distortions, and illogical interpretations. “Contrast effect,” “clustering illusion,” and “confirmation bias” are names given to types of cognitive bias through which people lead themselves to wrong conclusions. The forensic science community is well aware of the presence of cognitive bias in laboratory practice and examination, especially since the 2009 National Academy of Sciences (NAS) Report recommended research into human observer bias and sources of human error.1 One researcher in particular, Dr. Itiel Dror (University College London and Cognitive Consultants International), focuses on experts and, in particular, forensic experts. A paper of his that is very much worth reading, describing cognitive issues in forensic practice and suggesting solutions, appears in Forensic Science Policy & Management.2 One great example of a principle discussed in that paper that extends into practice is the need to assess the evidence first rather than moving from suspect to evidence. This is crucial in avoiding the cognitive disposition to find ways that the evidence matches the suspect rather than the other way around.

There is, of course, no exception to the need for enhancing objectivity and mitigating cognitive bias in multimedia forensics: the analysis of digital audio, video, and image evidence. Three tasks within this discipline are of particular interest for recognizing and mitigating cognitive bias: audio transcription, multimedia authentication, and biometric comparison.

Transcription of recorded audio entails producing a text document that represents a recorded exchange of dialogue. It is a crucial task in the legal system that helps expedite proceedings and is typically done by a transcriptionist (e.g., in the preparation of deposition transcripts). Forensic audio examiners, however, are often asked to prepare transcripts of difficult-to-understand recordings. This is ultimately a subjective process whose error rate cannot be measured, and the resulting transcript can end up influencing a judge’s or jury’s interpretation of the substance of a recording. In these cases, it is extremely important to inform the trier of fact of these limitations and to make clear that a transcript is an opinion, not certifiable truth.

Multimedia authentication is the process of substantiating the veracity and/or provenance of recorded evidence; essentially, who made this recording, and was it edited to change its meaning? While the methods used in this process can produce highly confident results, the task still requires the provision of an expert opinion. Cognitive bias plays a smaller role here but must be recognized nonetheless. Several measures must be taken, including validating tools by testing them extensively on known data, cross-verifying results using multiple analyses, and engaging in research and continually seeking out training opportunities in order to maintain and update technical skills.

The last area of discussion, biometric comparison, is quite possibly the most important. Attributing evidence to an identity is a serious matter and the crux of forensic science. Consider the comparison of faces, voices, and other human traces present in audio/video recordings. The computational state of the art excels at feature extraction, comparison, and statistically derived decision-making; here, a computer can objectively execute tasks that would otherwise be carried out within the error-prone human cognitive architecture. While face comparison is a common human task, performed as a holistic review by border agents and in routine policing, image quality must be quite high for a computer to give reliable results, which is not typically the case in forensic video. Forensic speaker comparison, however, is a common request and can draw on a research area rich in development. The current state of the art relies on well-established thresholds for the quality and quantity of data present in evidence voice recordings. Comparisons are made not only between features extracted from the evidence and suspect recordings but also against a representative population of recorded voices, so that the resulting score can be expressed as a Bayesian likelihood of the two voices coming from the same speaker with respect to that population. This framework is crucial in diminishing base-rate error and in respecting the principle of individuality: similarities or differences are not judged solely in the context of comparing one voice to another, but by establishing true distinctiveness against many voices at once.
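As a rough sketch of that framework (the notation below is illustrative and not drawn from the cited works), the comparison yields a likelihood ratio that updates, but does not replace, the prior odds held by the trier of fact:

\[
\mathrm{LR} = \frac{p(E \mid H_{\mathrm{same}})}{p(E \mid H_{\mathrm{diff}})},
\qquad
\frac{p(H_{\mathrm{same}} \mid E)}{p(H_{\mathrm{diff}} \mid E)} = \mathrm{LR} \times \frac{p(H_{\mathrm{same}})}{p(H_{\mathrm{diff}})}
\]

Here, E stands for the features extracted from the evidence and suspect recordings, and H_same and H_diff are the same-speaker and different-speaker hypotheses. The denominator is evaluated against the representative population of voices described above, which is what guards against base-rate error in the resulting score.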

As technology and the ways in which humans create digital and multimedia evidence evolve, so will our analysis techniques. A thorough understanding of objectivity and cognitive bias will continue to play a crucial role in laboratory management and practice. The forensic science community will continue to draw on the following ways that cognitive bias and error can be mitigated: using validated tools, applying standardized methods, implementing peer review, relying on computational methods wherever possible, moving from evidence to suspect and not the other way around, employing Bayesian statistics, and avoiding 1:1 comparisons. Most importantly, forensic experts must acknowledge the limits of their methods and strive to advance the state of the art while helping the trier of fact understand the limitations of an analysis and its potential for error.

References:

  1. Committee on Identifying the Needs of the Forensic Sciences Community, National Research Council (U.S.). Strengthening Forensic Science in the United States: A Path Forward. The National Academies Press (2009).
  2. Dror, I. E. Practical Solutions to Cognitive and Human Factor Challenges in Forensic Science. Forensic Science Policy & Management 4(3-4):1-9 (2013).

 

Jeff M. Smith, MS, is the Associate Director of the National Center for Media Forensics at the University of Colorado Denver, where graduates of its Master’s program work in forensic labs at all levels of government and in the private sector. He is a Fellow of the American Academy of Forensic Sciences (AAFS), Chair of the Audio Engineering Society’s Technical Committee on Forensic Audio, a member of the International Association for Identification (IAI), and a member-at-large of the Scientific Working Group on Digital Evidence (SWGDE) Executive Committee and Audio Committee. When he is not examining forensic digital multimedia, he enjoys time with his family camping, biking, and snowboarding in the Colorado Rockies.