Partners for Our Children


Why Didn’t Anyone Do Anything?

They capture our attention in the headlines – the stories of a little boy or girl who died at the hands of a parent or caregiver. These stories are heartbreaking, and we find ourselves asking, “Why didn’t anyone do anything? This could have been prevented.” But the unfortunate reality is that these situations are extremely difficult to predict. Nevertheless, just as doctors combine technology with personal experience to diagnose their patients, social workers combine risk assessment tools with professional judgment to determine how best to serve a family. These risk assessment tools are statistics-based, or actuarial, and they are increasingly used to determine 1) whether a child abuse report should be investigated and 2) whether a child can safely remain at home or must be removed. But how accurate are they?

The short of it: they aren’t perfect. We’ve discussed statistics of child fatalities before – it really boils down to the fact that it’s nearly impossible to predict the outcome in every case. Families don’t follow the exact same path, so how could we possibly know when it will end in tragedy? We can’t entirely, but statistics can at least help to identify child maltreatment.

First, let’s dive into the technical details. To understand the limitations of these risk assessment tools, we first have to define accuracy. Researcher Eileen Munro demonstrates a way to intuitively understand accuracy – it is not only a function of the assessment of a particular family, but also of the overall rate of child maltreatment in society. She uses “sensitivity” and “specificity” values of .69 and .74 from a paper by Zuravin, Orme and Hegar (1995) and a base rate of founded maltreatment of 40% of CPS referrals. Here is how Munro’s example breaks down:

Of 1,000 families referred to CPS, a 40% base rate means 400 are maltreating and 600 are not:

                                  Maltreating (400)    Not maltreating (600)
    Tool predicts maltreatment    276 (true pos.)      156 (false pos.)
    Tool predicts no maltreatment 124 (false neg.)     444 (true neg.)

What do these numbers mean? Basically, sensitivity is the probability of correctly identifying maltreatment, specificity is the probability of correctly identifying the absence of maltreatment, and the base rate is the proportion of CPS referrals that actually involve maltreatment. In this example, sensitivity is 276/400, or .69, which means that 69% of the 400 maltreating families were correctly identified. The other 124 families (31%) were missed, leaving their children in danger. The same principle applies on the other side – 74% of the 600 non-maltreating families (444) were correctly identified as safe, but 26% (156 families) were wrongly flagged as maltreating. Based on this, what is the accuracy of the assessment tool? The probability that a positive result is correct is 276/(276+156), or .64 – meaning that when the tool flags a family, it gets it right about 64% of the time. So as you can see, the assessment tool is not perfect, but it can be a useful aid to decision-making when combined with professional judgment – gained from the social worker’s years of experience.
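Munro’s arithmetic can be sketched in a few lines of code. This is an illustrative sketch, not her method verbatim: the function name and the 1,000-referral scenario are our own, while the sensitivity (.69), specificity (.74), and base rate (40%) come from the example above.

```python
def screening_counts(n, base_rate, sensitivity, specificity):
    """Derive the four cells of the 2x2 screening table from
    sensitivity, specificity, and the base rate of maltreatment."""
    maltreating = n * base_rate
    not_maltreating = n - maltreating
    true_pos = sensitivity * maltreating        # correctly flagged
    false_neg = maltreating - true_pos          # missed cases
    true_neg = specificity * not_maltreating    # correctly cleared
    false_pos = not_maltreating - true_neg      # wrongly flagged
    return true_pos, false_neg, true_neg, false_pos

tp, fn, tn, fp = screening_counts(1000, 0.40, 0.69, 0.74)
print(round(tp), round(fn), round(tn), round(fp))  # 276 124 444 156

# Positive predictive value: how often a "maltreatment" flag is right.
print(round(tp / (tp + fp), 2))  # 0.64
```

The key point is that the tool’s practical accuracy (.64) is lower than either its sensitivity or its specificity, because the false positives from the larger non-maltreating group dilute the true positives.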

But the story isn’t over. These numbers only reflect accuracy in assessing maltreatment generally. If we zoom in on child fatalities, we’re looking at extremely rare events – and the math works against us: the rarer the outcome, the larger the share of flagged families who turn out to be false positives – in this case, likely more than 99%. To put that into context – of the families identified (by actuarial tools alone) as being at risk of harming a child enough to cause death, more than 99 in 100 would actually not be at risk. So what do we do? Remove all of these children or keep them from returning home?
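The same arithmetic shows why rarity is so punishing. In the sketch below, the sensitivity and specificity are Munro’s (.69 and .74), but the low base rates are illustrative assumptions of our own – the post only says that fatalities are extremely rare, not what the true rate is.

```python
def positive_predictive_value(base_rate, sensitivity=0.69, specificity=0.74):
    """Probability that a family flagged by the tool is truly at risk,
    as a function of how common the outcome is (Bayes' rule)."""
    flagged_true = sensitivity * base_rate
    flagged_false = (1 - specificity) * (1 - base_rate)
    return flagged_true / (flagged_true + flagged_false)

# Illustrative base rates: 40% (Munro's maltreatment example),
# then two hypothetical rare-outcome rates.
for b in (0.40, 0.01, 0.001):
    share_false = 100 * (1 - positive_predictive_value(b))
    print(f"base rate {b:>5}: {share_false:.1f}% of flagged families are false positives")
```

At a 40% base rate, about a third of flags are wrong; at a one-in-a-thousand base rate, over 99% are – with no change at all to the tool itself.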

As you can see, the decisions of social workers and their supervisors are very difficult. Even the best combination of experience and risk assessment tools cannot always predict child abuse or neglect – human behavior is just too unpredictable. Yet these social workers continue to work in good faith day in and day out to protect these children – a sometimes thankless and emotionally exhausting job. So the next time you find yourself asking “why?” – remember that these decisions are extremely difficult, even with years of experience and the best scientific evidence available.