Where Analytics Go Wrong (Part 1)


In a 2017 Inside Higher Ed article, Jeff Aird argues that “until higher ed uses analytics in a self-aware and brutally honest way, it can’t fix the growing problems with student success and retention.” So, what are some of the issues Aird sees that inhibit effective use of analytics?

The problem, as he sees it, is that many colleges and universities take an outward focus when considering the factors behind completion and achievement gaps at their institutions. Perhaps the culprit is a K-12 system that falls short in preparing students for college. Alternatively, we may attribute the achievement gap to “at risk” students holding full-time jobs, facing family and economic pressures, or carrying unrealistic expectations.


While these factors are often real, making them the focus of our interventions blocks the genuine self-reflection needed to improve organizational design, not to mention placing limits on how we think about who is “at risk.”

But how would our understanding of the problem change if, instead of taking an institutional view, we looked at the issue from the “at risk” student’s perspective? What, if anything, might we learn from turning the lens around? How would the student’s perspective define the problem?

Aird offers a somewhat humorous yet insightful way of identifying such students:

Yeah, it’s not that hard; just start pointing . . . I didn’t need help finding the at-risk students. They were everywhere. I needed to better understand why the systems and processes in our college were not already helping them.

Many student analytics strategies follow an interventionist model. In this scenario, students follow the traditional admissions and enrollment process until, at some critical point, they act in a way that suggests their likelihood of success has declined. Typical markers of distress include doing poorly on an early exam, missing a few classes, dropping a course, or not meeting with an adviser. At this point, our analytics models signal that intervention is needed. The hope is that this “just-in-time” extra support will meet students when they most need it and help them solve their problem.
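To make the pattern concrete, here is a minimal sketch of the kind of rule-based flagging the interventionist model implies. This is not Aird’s method or any particular vendor’s system; the field names and thresholds are invented for illustration, and real early-alert tools typically draw on richer data and statistical models.

```python
from dataclasses import dataclass

# Hypothetical student record; the fields mirror the "markers of
# distress" named above. Names and thresholds are illustrative only.
@dataclass
class StudentRecord:
    early_exam_score: float  # percentage on a first major exam
    absences: int            # classes missed so far this term
    dropped_course: bool     # dropped a course after the term began
    met_adviser: bool        # has met with an academic adviser

def flag_for_intervention(s: StudentRecord) -> bool:
    """Return True if any marker of distress is present.
    The cutoffs (60% exam score, 3 absences) are invented."""
    return (
        s.early_exam_score < 60.0
        or s.absences >= 3
        or s.dropped_course
        or not s.met_adviser
    )

# Example: a student who did poorly on an early exam gets flagged,
# and only then does the "just-in-time" intervention kick in.
student = StudentRecord(early_exam_score=52.0, absences=1,
                        dropped_course=False, met_adviser=True)
print(flag_for_intervention(student))  # True
```

Note that every rule in the sketch fires only after something has already gone wrong, which is exactly the reactive posture the next paragraphs critique.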

Typically, the interventionist model assumes that students bring with them unique individual characteristics and backgrounds which, when played out in the higher education system, predispose them to become at risk of failure. The belief is not only that the power of analytics can identify these students early enough, but also that these large, bulky and bureaucratic institutions can customize individual interventions to increase the likelihood of completion.

The shortfall of this model is that it focuses on introducing new processes to fix the student’s problem. Unfortunately, it largely ignores what we can do to keep students from becoming at risk in the first place.

Aird suggests that if we could genuinely see the problem from the “at risk” students’ perspective, we would begin to see that the choices, behaviors and actions we deem “at risk” are often natural, even expected, outcomes of the way we design their experience. We would then discover that much of the student success problem resides not in at-risk behavior, but rather in the business model, systems and processes that produce at-risk students and then try to fix them.

Are you using your analytics in an interventionist or a proactive way? Have you thought about why this matters? As Aird suggests, maybe we don’t need intervention for the students, but rather for ourselves.

By Terry Mills

