The Psychology of Biometry
Speaker: Prof. Dr. Odette Wegwarth, Max Planck Institute for Human Development, Berlin; Charité – Universitätsmedizin Berlin
Evidence-based health care requires risk-savvy doctors and patients. Yet our health care system falls short on both counts. To illustrate the extent: In a national study of 412 U.S. primary care physicians, most physicians did not know that the 5-year survival statistic, often used to communicate the benefit of screening, is a biased metric in the context of screening. Among German gynecologists who were explicitly asked for numerical information on the benefits and harms of mammography screening, not a single one provided all the information required for a patient to make an informed choice. Of 32 German HIV counselors, only one was able to correctly explain the meaning of a positive HIV test result, whereas the majority claimed that a positive test result means with certainty that one has HIV. And in a study with nearly 1,700 women from 5 European countries, the majority of women overestimated their likelihood of developing certain female cancers by orders of magnitude, whereas fewer than one third were aware that mammography screening can also cause harms. Why do we have this lack of risk literacy? One frequently discussed answer assumes that people suffer from cognitive deficits that make them basically hopeless at dealing with statistics. Yet the fact that even 4th-graders can understand the positive predictive value if information is presented as natural frequencies shows that the problem can lie in how information is presented. Still, a transparent presentation of risk may not eliminate all of the observed misinterpretations of risk in medicine. When, for instance, 401 US gynecologists were presented with an easy-to-understand summary of the current best evidence on ovarian cancer screening, about 48% did not revise their initial assumptions about the screening's benefit-harm ratio, although these assumptions were incorrect.
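The natural-frequencies point can be made concrete with a small worked example. The numbers below (1% prevalence, 90% sensitivity, 9% false-positive rate) are illustrative assumptions, not figures from the studies cited above; the script is a minimal sketch of how natural frequencies turn the positive predictive value into simple counting.

```python
# Illustrative screening test (assumed numbers, not from the talk):
# prevalence 1%, sensitivity 90%, false-positive rate 9%.
population = 1000
sick = round(population * 0.01)    # 10 people actually have the disease
true_pos = round(sick * 0.90)      # 9 of them test positive
healthy = population - sick        # 990 people are healthy
false_pos = round(healthy * 0.09)  # 89 of them nevertheless test positive

# Positive predictive value: of all who test positive, how many are sick?
ppv = true_pos / (true_pos + false_pos)
print(f"{true_pos} of {true_pos + false_pos} positives are true -> PPV = {ppv:.0%}")
```

Framed as counts of people rather than conditional probabilities, the answer ("9 out of 98 positives, roughly 9%") requires no Bayes' theorem, which is what makes the format accessible even to children.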
Insights from the cognitive sciences imply that people's perception of and dealing with risk depend not only on how statistics is taught and communicated, but also on how medical risk information is transformed when travelling through social networks, and on how people initially learn about a risk (by description or by experience).
Learning from experience and description: Two imperfect teachers of risk cognition
Speaker: Dr. Dirk Wulff, University of Basel
A myriad of behaviors and cognitions can, at least partly, be understood by taking into account that people learn about the risks in the world through two different learning modes: an experience-based and a description-based mode. In this presentation, we summarize extant findings on how the two modes of learning each shape, in their own way, people's risk behavior and perception. For example, it is by now well established that people who learn about a risk from symbolic descriptions (e.g., the written information on a drug's side effects as presented in package inserts) tend to make decisions as if small-probability outcomes (e.g., a rare side effect) received more impact than they deserve based on their actual probability, whereas people who learn about a risk from experience (e.g., by repeatedly administering the drug) tend to make decisions as if small-probability outcomes received less impact than they deserve (Wulff, Mergenthaler Canseco, & Hertwig, 2018). Thus, depending on how people learn about a risk, they will reach different appraisals of the risk and make different decisions, a phenomenon known today as the description-experience gap. In real-life settings, the two modes of learning often co-occur, but sometimes only one of them is available. The extent to which the two modes convey convergent or divergent information about risks has, as I will discuss, profound implications for how people perceive risks, how they behave when facing risks, and, last but not least, what can be done to help people better reckon with risks.
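One reason experience can underweight rare outcomes is purely statistical: small personal samples often contain no rare event at all. The following sketch simulates this sampling effect; the event probability, sample size, and number of simulated people are assumed values for illustration, not data from the talk.

```python
import random

random.seed(0)
P_RARE = 0.05      # assumed true probability of a rare side effect
SAMPLE_SIZE = 10   # a person tries the option only 10 times
N_PEOPLE = 10_000  # number of simulated "experiencers"

# For each simulated person, draw a small personal sample and check
# whether the rare outcome was ever observed in it.
never_saw_it = sum(
    all(random.random() >= P_RARE for _ in range(SAMPLE_SIZE))
    for _ in range(N_PEOPLE)
)
share = never_saw_it / N_PEOPLE
print(f"{share:.0%} of small samples contain no rare event at all")
```

Since (1 - 0.05)**10 is about 0.60, roughly 60% of these small samples never show the rare outcome at all, so a decision maker relying on personal experience will often act as if the rare event were even rarer than it is.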
The social dynamics of risk perception
Speaker: Mehdi Moussaid, Max Planck Institute for Human Development, Berlin
Understanding how people form and revise their perception of risk is central to designing efficient risk communication methods, eliciting risk awareness, and avoiding unnecessary anxiety among the public. Yet public responses to hazardous events such as climate change, contagious outbreaks, and terrorist threats are complex and difficult to anticipate. While many psychological factors influencing risk perception have been identified in the past, it remains unclear how perceptions of risk change when information is propagated from one person to another, and what impact the repeated social transmission of perceived risk has at the population scale.
To examine these questions, we analyze how a risk message detailing the benefits and harms of the widely used but controversial antibacterial agent Triclosan is communicated from one individual to another in experimental communication chains. Communication chains constitute useful tools for studying the dynamics of social contagion phenomena. A chain mimics the spread of a rumor in a social network: A first participant is "seeded" with the risk message and instructed to communicate about it to a second, naïve participant in an open discussion. After the discussion, the second participant, in turn, is instructed to communicate about Triclosan to a third individual, and so on until the end of the chain. In total, we examine 15 different communication chains, each composed of 10 participants. The discussions between every two subsequent participants were recorded and analyzed, and the participants' risk judgments of Triclosan were assessed before and after the experiment.
Our analyses show that as messages are propagated through the transmission chains, they tend to become shorter, increasingly inaccurate, and increasingly dissimilar between chains. In contrast, the perception of risk is propagated with higher fidelity: participants manipulate the messages to fit their preconceptions, thereby influencing the judgments of subsequent participants. Computer simulations implementing this simple influence mechanism show that small judgment biases tend to become more extreme, even when the injected message contradicts preconceived risk judgments. Our results provide quantitative insights into the social amplification of risk perception and can help policy makers better anticipate and manage the public response to emerging threats.
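As a rough illustration of this kind of influence mechanism, the sketch below blends each agent's incoming judgment with a slightly biased personal preconception before passing it on. The blending rule and all parameter values (other than the 15 chains of 10 participants taken from the abstract) are assumptions for illustration, not the authors' model.

```python
import random

random.seed(1)

CHAIN_LENGTH = 10     # participants per chain, as in the experiment
N_CHAINS = 15         # number of chains, as in the experiment
SEED_JUDGMENT = 0.0   # the accurate risk judgment injected at the start
MEAN_BIAS = 0.3       # assumed small average preconception in the population
BLEND = 0.5           # assumed weight an agent gives the incoming message

def run_chain():
    judgment = SEED_JUDGMENT
    for _ in range(CHAIN_LENGTH):
        preconception = random.gauss(MEAN_BIAS, 0.1)
        # Each agent blends the incoming judgment with their own
        # preconception before passing it on, distorting the message.
        judgment = BLEND * judgment + (1 - BLEND) * preconception
    return judgment

finals = [run_chain() for _ in range(N_CHAINS)]
avg = sum(finals) / N_CHAINS
print(f"mean final judgment after {CHAIN_LENGTH} steps: {avg:.2f} "
      f"(seed was {SEED_JUDGMENT})")
```

With this rule the seed's influence decays geometrically (0.5**10 after ten steps), so the chain's final judgment ends up near the population's shared bias even though the injected message was accurate, which is the qualitative drift the simulations in the talk describe.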
What statistics instructors need to know about concept acquisition to make statistics stick
Speaker: Dr. Jochen Kruppa, Charité – Universitätsmedizin Berlin
"The limits of my language are the limits of my mind. I know only that for which I have words" (Wittgenstein). When we learn something completely new, we first associate the new concept with an existing part of our knowledge. Unfortunately, we can only build new ideas and knowledge on an existing conceptual foundation. In the field of statistics, we often face great confusion when teaching newcomers. Since novices are not yet familiar with basic statistical concepts, they must first place the statistical terms introduced to them into their personal network of largely non-statistical knowledge. Instructors, on the other hand, who are well versed in statistics, have deeply internalized the content to be taught and its relevant context. They know, for example, that 'logistic regression' has nothing to do with 'shipping goods in tough economic times'. These discrepant perspectives can lead to headaches and frustration, both for learners and their instructors. In this talk, we show how simple statistical concepts are associated in learners' brains with pre-existing knowledge and how these associations change after completion of an introductory course in applied statistics. We provide a collection of practical tools for instructors to promote students' conceptual understanding in a supportive, mutually beneficial learning environment.