HCN News & Notes

UMass Amherst Biostatistician and Team Assist CDC in Flu Forecasting

AMHERST — When the Centers for Disease Control and Prevention (CDC) went looking for a preferred flu-prediction model to use in this season's flu-forecasting challenge, an influenza-tracking model developed by a team led by biostatistician Nicholas Reich at UMass Amherst rose to the top and will serve as one of the agency's principal prediction tools.

“We competed in the CDC challenge last year, and out of 30 models the CDC received to help predict the flu season, ours was the second-best,” said Reich, whose UMass-led collaborative is made up of six teams. “This year, we’re retooling some of our models, and the CDC has chosen our model to try to optimally predict seasonal influenza outbreaks.

“It’s a nice accomplishment,” he went on. “Apparently, our collaborative approach that fuses multiple different models together impressed them as being better and more reliable than all of the other approaches they have seen over the past few years. We hope we can contribute quite a bit to this year’s efforts.”

Through an international group dubbed the FluSight Network, Reich, of UMass Amherst's School of Public Health and Health Sciences, and colleagues at Carnegie Mellon University, the CDC, Columbia University, Los Alamos National Laboratory, Mount Holyoke College, and Protea Analytics, a consulting group from South Africa, issue a new flu-season forecast every Monday, beginning in late fall, for public-health researchers and practitioners. Each forecast compares this year's flu trajectory to those of past years.

“They work year-round to develop a way for all their models to work together to make a single best forecast for influenza, a method they call a ‘multi-model ensemble approach,’” Reich explained.

Last September, Reich was one of just four influenza forecasters in the nation invited to participate in the CDC's first flu pandemic simulation workshop, which featured mock press conferences by officials, including the CDC director. The exercise allowed the agency to run through several scenarios about how a flu pandemic might be forecast from early data, how it could be tracked, and how integrating advanced analytic processes into decision making might assist with those projections.

Among other things, Reich noted, the workshop was a recognition by the CDC that a network of forecasters, one that exists today in academia and industry but not in the public sector, might help decision makers use forecast data as one of their inputs when making choices.

“We work very closely with our collaborators at the CDC,” he said. “Without their vision and careful design of this challenge five years ago, we wouldn’t be where we are today. This collaboration has added a lot of value to the laudable efforts that they have made over these years to integrate data and modeling into real-time public health decision making. The value of the ensemble approach is becoming clear to all observers, and that workshop helped to demonstrate it.”

Now in its second year of participating in the CDC challenge, the FluSight Network gets a little better each year at refining its flu-projection models, Reich said. The models help experts prepare public flu messages, assess disease severity and regional incidence, and project peak impact, among other tasks important to public-health officials.

Health professionals are not the only ones watching the CDC's weekly updates; health writers and reporters follow them too, he added. “The first question of the season is, when is it going to start? That is, when will the number of cases go above the baseline of flu activity by region, which is the first checkpoint of every season. Hospitals, clinics, and family physicians all keep an eye on this information to help them prepare.”

So far this flu season, he noted, a few regions of the U.S., including the Northeast, have been seeing slightly higher levels of flu-like activity than normal, but the most recent data suggest that the levels are still below what the CDC defines as a ‘baseline’ level of activity.

“That said, our models are saying that we should perhaps expect a bit of an early onset to the season in the Northeast and a few other regions,” Reich went on. “The ensemble model isn’t picking up a clear signal yet about how different from a normal year the peak incidence might be in terms of timing or severity. In coming weeks, we think the models may show a bit more separation from the historical average. Right now, though, it’s a little like looking at a 30-day weather forecast and trying to use that to decide whether it’s going to snow on any particular day. Our models just can’t reliably see that far into the future at the moment.”

For their prediction efforts, each team submits forecasts for eight past influenza seasons to the UMass-led model. “This approach allows us to do better than a simple average of all models because we can employ them proportionally based on their success,” Reich said. “Each model has different strengths based on the data or the methodological approach it uses. Some models this season are incorporating a variety of internet data, including signals from Twitter, Google search activity, and Wikipedia.”
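For readers curious about the mechanics, a weighted ensemble of this kind can be pictured with a short Python sketch. This is purely illustrative and is not the FluSight Network's actual code; the scores, forecast bins, and helper functions below are hypothetical, but they show the basic idea Reich describes: models with a better track record on past seasons get proportionally more influence over the combined forecast.

import numpy as np

def ensemble_weights(past_scores):
    # Turn each model's average historical forecast score (higher = better)
    # into a weight; weights are proportional to skill and sum to 1.
    scores = np.asarray(past_scores, dtype=float)
    return scores / scores.sum()

def ensemble_forecast(model_forecasts, weights):
    # Combine per-model probability distributions (rows = models,
    # columns = forecast bins, e.g. possible peak weeks) into a single
    # weighted-average distribution.
    forecasts = np.asarray(model_forecasts, dtype=float)
    return weights @ forecasts

# Hypothetical example: three models, each giving probabilities over four
# peak-week bins, plus assumed skill scores from eight past seasons.
past_scores = [0.42, 0.31, 0.27]
model_forecasts = [
    [0.10, 0.50, 0.30, 0.10],
    [0.20, 0.40, 0.30, 0.10],
    [0.05, 0.35, 0.45, 0.15],
]

weights = ensemble_weights(past_scores)
print("weights:", np.round(weights, 3))
print("ensemble:", np.round(ensemble_forecast(model_forecasts, weights), 3))

In this toy setup, the first model's stronger past performance gives it the largest say, which is what distinguishes the approach from a simple unweighted average of all submitted models.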
