Using Behavioral Design to Make Digital Health Personal

March 10, 2021
Mary Caffrey

Mary Caffrey is the Associate Editorial Director of AJMC/Managed Care for MJH Life Sciences. Her editorial responsibilities include Evidence-Based Oncology, Chief Healthcare Executive, and Managed Healthcare Executive.

Dr. Mitesh Patel, director of Penn Medicine's Nudge Unit, explains how a field of research rooted in economics is now bringing clinical benefits to patients within a health system.

Mitesh Patel, M.D., M.B.A., M.S., has many titles, including associate professor of Medicine and Healthcare Management at Penn’s Perelman School of Medicine and the Wharton School, and staff physician at the VA Medical Center in Philadelphia. But the one that might get your attention is director of Penn Medicine’s Nudge Unit, the world’s first behavioral design team embedded within the operations of a health system.

Here, Patel and his team study how daily behaviors influence long-term health outcomes. More and more, research shows that what a doctor does during a 15-minute office visit may matter less in determining who ends up back in the hospital after a heart attack or stroke than what happens between office visits. So, if Medicare and other payers are going to reward or punish health systems based on their ability to change people’s behavior, nudges count.

Nudge theory gained popularity following the 2008 book Nudge by Cass R. Sunstein and Richard H. Thaler; Thaler won the 2017 Nobel Prize in Economics for his idea that human choices are not entirely rational but instead are bound up in personal traits, including a lack of self-control. Thaler’s mentor, psychologist Daniel Kahneman, won the 2002 Nobel in Economics for work with Amos Tversky that found people make irrational choices based on perceived rewards, even if their actions bring long-term harm. Both Kahneman and Thaler are considered pioneers in the field of behavioral economics.

With rising obesity rates, more heart disease among the poor, and soaring Medicare costs, there’s been growing interest in applying nudge theory to healthcare. Public health experts note that nudges have been used against people for years, from the way supermarkets are laid out to the way computer interfaces promote sedentary behavior. Positive behavioral change, by contrast, is hard won. The National Diabetes Prevention Program, which grew out of a study by the National Institutes of Health, is based on the idea that healthy habits cannot be achieved overnight, but instead require making small changes over time. In fact, the co-founder of the digital weight loss company Noom, Artem Petakov, developed its foundational principles from studying psychology at Princeton under Kahneman.

Adding Digital Health

The concept of combining nudges with digital prompts to remind people to take medication or coach them toward healthy behavior has been well-studied over the past decade. But as researchers and employers found, early effects often did not last, and not every intervention worked for every person.


This is where Patel and his collaborators are taking digital health and nudges to the next level. Recent work includes studies in behavioral phenotyping, which recognizes that individual personality traits can dictate what types of digital interventions might work—and which ones might fail. Patel has also explored how turning an intervention into a competition, or “gamification,” can have a measurable impact on some participants.


The Penn Medicine Nudge Unit has several studies that will report results in 2021 and 2022 and will host a virtual symposium May 20, 2021. Recently, the unit has been publishing results on strategies to boost vaccination rates—conducted with Walmart and Geisinger—that showed how sending text messages telling a patient that a flu shot or vaccine was “waiting” for them boosted rates by 11%.

Patel spoke recently with Chief Healthcare Executive™ about the work of the Nudge Unit and why using competition in a behavioral intervention may work for some people but not others. Responses have been slightly edited for clarity.

CHE: A Nudge Unit is not something we typically see in a hospital or a health system. Can you explain what it is, and who are the scientists who would work in a Nudge Unit?

Patel: A Nudge Unit is a behavioral design team. The first nudge unit in the world started in the United Kingdom, within its government, and was basically created to improve the lives of UK citizens; the idea eventually spread to other governments all over the world. And then we ended up starting the first Nudge Unit within a health system. We have a team made up of behavioral scientists, myself included, and other staff who lead data science and project management, along with people with expertise in clinical trials. We also have an advisory board drawn from health system leadership that includes information technology, clinical care, and experts in behavioral economics. The flavor of nudge units can differ based on the strengths of the health system, but it usually incorporates a group of behavioral or social scientists.

CHE: Research in diabetes care and cardiology shows just how difficult behavioral change is. Based on your work, what is it about behavioral change that makes it so hard?

Patel: There are several things that make changing behavior hard. One is that many of the benefits we get from changing behavior occur far into the future. So, if we exercise every day, eat really well, and avoid the sweets and temptations around us in our everyday lives, then 10, 20, 30 years down the line we're less likely to have a heart attack or stroke. We have to weigh the future benefit of skipping a piece of cake, or of exercising instead of lying down and watching TV, against the pleasure we get today. It's really hard for people to process and prioritize things that are far off in the future. And that's one of the underlying behavioral principles: we heavily discount the future compared to what we're doing now, in the present.

The other [issue] that I think often goes unnoticed is that the way we make decisions, whether it's a clinician making decisions in the electronic health record or an individual making decisions in a grocery store, is heavily influenced by the design of the environment. What is the order of the different options within the electronic health record? In a grocery store, the fact that sugary beverages and candy are placed right at the checkout register to prompt a quick purchase has a big impact on our behavior. So, a lot of our work [concerns] how we can change that choice architecture to be better aligned with people's long-term goals.

CHE: Is it harder for some people to make behavioral change than others?

Patel: Some people are much more motivated than others. In general, everybody has a hard time changing behavior, but [there are] definitely people who are less motivated around a specific behavior. Some folks may be economically, socially, or otherwise disadvantaged in resources, and that makes it harder because they have other challenges they're facing. … If you don't have the resources to afford a gym membership, buy a wearable device, or purchase healthy food, which is often much more expensive than sugary or less healthy food, then it obviously makes it harder to change your behavior.

CHE: Tell us about your work that involves the use of competition as an incentive to encourage healthy behavior.

Patel: A lot of the work in behavioral economics [that involved] helping individuals change their behavior—things like getting people to be more physically active, lose weight, take their medicines—started with financial incentives. It used behavioral economics to design incentives as lotteries, or to put incentives up front that people could lose. Those tended to work in the short term but didn't have as much success in the long term. Plus, they were also really expensive.

So, one of the areas we've been working on is using social incentives and gamification. Gamification exists widely in insurance programs and digital health apps, but it often doesn't use these same principles of behavioral economics. We put points up front that people can lose, we have levels that people can progress through, and we harness social incentives by changing the design of the game so that it's based on either competition, collaboration, or support.
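The loss-framed points and levels Patel describes can be illustrated with a minimal sketch. This is hypothetical code, not the trial's actual implementation; the point values, deduction size, and level names are invented for illustration.

```python
# Minimal sketch of a loss-framed gamification design: participants start
# each week with the full point budget and lose points for missed days,
# rather than earning points from zero. All numbers are illustrative.

def weekly_points(step_counts, goal, points_per_week=70, penalty=10):
    """Deduct points for each day the step goal is missed (loss framing)."""
    points = points_per_week
    for steps in step_counts:
        if steps < goal:
            points -= penalty  # losses loom larger than equivalent gains
    return max(points, 0)

def level_for(points):
    """Simple level ladder based on points retained at week's end."""
    if points >= 60:
        return "gold"
    if points >= 40:
        return "silver"
    return "bronze"
```

For example, a participant who hits a 7,000-step goal on five of seven days would keep 50 of 70 points and land at the "silver" level; the design rewards consistency by making every missed day visibly costly.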

One of the larger trials we did with overweight and obese adults (STEP UP) enrolled people in 40 U.S. states and tested those three approaches head to head. The participants were employees of a large consulting firm, and we found that competition was the best way [to get people] to increase their step counts—by about 950 steps per day.

And when we turned off the interventions, [the competition arm] was the only one that built a long-lasting habit. In the last three months of the nine-month study, when we didn't have any gamification intervention, those participants continued to walk about 500 steps a day more than the control arm.

Now, that might differ based on who is in the study. These were younger individuals from a consulting firm; we've also done studies with older patients who have more chronic disease. And we've done a study with the Framingham Heart Study cohort, and we find that in some of those situations a collaborative or supportive game actually works better, because the makeup of the group is different.

I think the design of the social incentives really depends on the structure of the group, and also which type of patient population you're targeting.

CHE: What are we learning about different patient populations, in terms of matching these structures to the patient group?

Patel: This is an area that we call behavioral phenotyping. We're really excited about this. In all our clinical trials that use social incentives and gamification, we ask the participants to fill out a list of different validated survey instruments that will help us understand, what are their characteristics? And how do those impact which design of the game works best? Some of the things we're looking at—[for each] personality type, what are the risk preferences? Are they risk taking [or] risk averse? How is their social network and engagement with their peers? Do they have a lot of social support that they can lean on in hard times? Or are they not so socially connected?

In one study, a follow-up to the larger one I mentioned before around competition and consultants, we found that for people who are extroverted and motivated, competition worked the best during the intervention, but the effects wore off and weren't sustained in the follow-up period. The group that benefited the most were those who had lower activity to start with and weren't that socially connected. It turned out that providing them with any of the three—support, competition, or collaboration—was helpful, because they didn't have social support to start with.

There was also a third group we called at-risk and less motivated. They tended to be more neurotic, had poor sleep, and had [lower] grit scores and things of that nature. None of the interventions worked for them; they needed something more hands-on than the digital approach we were using.

CHE: Do the EHR systems used in today's hospitals collect the information that you would need to identify which patients should go into the various behavioral interventions?

Patel: That's a great question. Several electronic health records allow the health system to ask patients to fill out surveys. One of the most common is a depression screening survey. The surveys we used in the STEP UP study that I mentioned had about 150 questions—that’s not something we would want to ask a patient to complete before a doctor's visit. What we've been doing instead is, for all those patients who completed the surveys, we also have access to their electronic health record data. What we're trying to find are links: if you know someone is extroverted, is there something in the electronic health record that already captures that? Or if you know someone has a larger or smaller social network, could you look at their social contacts in the electronic health record? Would that give you clues? We really think the bigger opportunity with the EHR is to look at some of these data sources that are typically not examined but can be very informative when we think about people's behavior.

CHE: So, it's possible to extract data from the basic health record and have a program you can run it through, and that would be how you would identify patients and target them to an intervention?

Patel: Correct. Right now, we're trying to validate the different measures, such as social context. Perhaps if you tend to see a primary care doctor of a different gender, you might be more extroverted than introverted. That's a hypothesis we have, and we have the data on about 2,500 patients. We're testing those things. The next step would be to pull that data in a way that matches patients to interventions.

CHE: Are there particular populations or types of employees that you've studied for whom competition is ideal? I thought about salespeople, for example. Years ago, I worked for a municipality, and the police officers used tremendous amounts of healthcare. When I talked to counterparts in other towns, it didn't seem to matter whether the police worked in a busy urban area or out in the suburbs where there was comparatively less crime. So, any idea that would get them to use less healthcare would be welcome. When I read about these competition interventions, I thought this would be ideal, because police officers are very competitive. Are there groups like that you have studied?

Patel: That's a great question. Competition, we found, tends to work for groups of people you probably would expect to be more competitive. Doctors are very competitive, and the consultants we worked with in a large study are a very competitive group—and also tend to be younger.

The other thing that's important to consider is the dynamic of who's in the group. In that study, we put people into groups of three, but they didn't know each other. We find competition works better than collaboration or support when the people in the group don't know each other. If you think about support or collaboration, those tend to work better when you do know the other people in the group, because you're trying to support each other and work together, whereas competition is almost an individual effort against the rest of the group. And so, we did a study with the Framingham Heart Study cohort where we enrolled entire families, and for that, collaboration worked really well, because the families mostly live together and have known each other for a long time. And when the study ends, they continue to live together.

CHE: How can the CEO of a health system get more involved in driving behavioral change, say, in the patients the system serves or the population in their community? Where would a CEO get started in bringing some of these ideas to the community?

Patel: We’ve been doing a lot of work to disseminate these insights. If health systems want to collaborate, we have a nudges and healthcare symposium every year—the next one is on May 20. Because it's virtual, we're able to bring in a much larger audience. … We have David Halpern from the United Kingdom, who led the first [Behavioral Insights Team] that existed, giving the keynote. … We also have other health systems that will be presenting their work. We publish the report and the video of the symposium on our website every year; the last two symposiums are [published].

Another way is getting a sense of who in your health system is working in this area. We find that a lot of health systems are already working on these things; they're already making changes, like putting alerts in the electronic health record, but they haven't thought about it from a behavioral science perspective. So, can you get some key stakeholders in your health system who are already working in this space and are interested in working together in a structured way to leverage these insights?

I often tell people, you don't have to start on day one with a Nudge Unit. The first thing to do is come up with a good project, an example of how you could implement a nudge to change behavior. The way we started our unit wasn't to put up a sign and say, “Hey, we're the Nudge Unit”; it was to show that nudges can be really effective. That helped get buy-in from leadership and helped us get the resources we needed to do more at a larger scale.

CHE: So, how does that work? What are some good examples of technology making these nudges work better?

Patel: One of the best ways we've used technology is to reduce burden on clinicians and patients. A lot of people think nudges are reminders or alerts, that it's just more information. One example that a colleague of mine [created] was a project in cardiology, where the goal was to refer patients who had a heart attack or stroke to cardiac rehab, a structured exercise program known to reduce mortality and readmissions by up to 30%. In our health system, as in many others that struggle to do this effectively, only 15% of patients were referred. So, if 100 people had a heart attack at our hospitals in a month, 85 would go home without ever being referred for cardiac rehab, which is essentially a free gym membership with a cardiologist available for guidance. And it's a national problem: one fourth of hospitals refer less than 20% of their patients. What we found was that it's just very hard for clinicians to do this on a busy day of rounds: figuring out which patients to refer and filling out a long form. And then the patients were on their own to find a cardiac rehab center, because we didn't have one at our health system at the time; they had to go into the community and call their insurance program. We used the electronic health record to automatically identify the patients. It took a little bit of work; we spent three months figuring out who the right patients to refer were before that process could be automated.

And then we coded that and linked it to our secure text messaging platform, which texted the care manager every day on rounds with the names of the two or three eligible people in their rooms. We then templated the form, so the clinicians didn't have to fill it out. We worked with them to redesign the rounding and discharge process. So instead of asking, "Who should be referred for cardiac rehab?" clinicians were presented with, "This patient meets criteria for cardiac rehab; is there any reason they shouldn't go?" Essentially it became an opt-out instead of an opt-in, and the referral rate went from 15% to 85% or 90%. It has been sustained at that level for two years. We just published those results in JAMA Network Open (Adusumalli et al, January 14, 2021).
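The opt-in to opt-out flip Patel describes can be made concrete with a small sketch. This is an illustrative model of the choice architecture, not the Penn Medicine system; the function and variable names are invented.

```python
# Illustrative sketch of opt-in vs. opt-out referral defaults.
# Under opt-in, a referral requires an active step by the clinician;
# under opt-out, every eligible patient is queued automatically and
# the clinician acts only to cancel. Names here are hypothetical.

def referrals_opt_in(patients, clinician_flags):
    """Opt-in: refer only patients the clinician actively flagged."""
    return [p for p in patients if clinician_flags.get(p, False)]

def referrals_opt_out(eligible_patients, clinician_cancels):
    """Opt-out: refer every eligible patient unless the clinician cancels."""
    return [p for p in eligible_patients if not clinician_cancels.get(p, False)]
```

With the same three eligible patients, an opt-in default with one flag produces one referral, while an opt-out default with one cancellation produces two; the work the clinician must do is inverted, which is the mechanism behind the jump from 15% to roughly 85% to 90%.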

Also, because we were able to do it in a systematic way, comparing one hospital to two control hospitals, it unlocked some insights into how powerful this could be. The control sites ended up adopting a similar platform, and their rates went from 5% to 75% before and after implementation. And if you think about it, we just made it easier for clinicians and patients to do the things they wanted to do, by leveraging technology to identify and refer patients in this automated way.

CHE: It sounds like using technology would be less expensive—you didn't have to hire a bunch of staff, which would normally be a cost impediment.

Patel: Right. Many behavioral interventions in the past have been very personnel-intensive and have failed to scale because of those costs. We tend to use electronic health records and mobile technologies as much as we can, because that gives us the ability to test something in one clinic or hospital, scale it throughout the health system, and then also scale it to other health systems as well.

CHE: What studies do you have underway that we should look forward to in 2021?

Patel: We have several exciting studies. A couple are around nudges that focus on clinicians. We have one large study at Sutter Health in California, where we have 48 different sites, [both] emergency departments and urgent care [centers]. We randomly assigned clinicians to get feedback on their opioid prescribing, with the goal of reducing opioid prescribing in these acute pain settings. We either use peer comparison feedback, where we show clinicians when they are outliers compared to their peers, or what's called audit feedback, where we remind them that the health system is watching and looking for outlier prescriptions, very large ones. The results hopefully will come out later in the year.

Another is on statin prescribing. We're testing nudges to clinicians, patients, or both before a doctor's visit. The clinician gets monthly feedback, but they also get an alert in the electronic health record with a pre-templated order for statins if a patient should be on one but isn't. And then the patient gets a text message before the visit, kind of like a chatbot, that walks them through the risks and benefits of a statin and goes through a decision tool. There hasn't been as much work looking at clinician- and patient-level nudges together, and at the interaction between the two. So, we're really excited about that project.

On the patient side of things, we have a study with remote monitoring around goal setting. Among patients living in lower-income neighborhoods who are at high risk of heart disease or already have heart disease, we randomized people to either choose a step goal or be assigned one. Then, we randomized people to either start that step goal right away or work up to it gradually over time. Think about your typical wearable device or smartphone app: typically it assigns a goal of 10,000 steps and says start today. We wondered, if we let people pick a goal, whether they might be more intrinsically motivated to achieve it, and whether we should start people [at the maximum goal] right away or gradually let people [work up to that]. So that's a trial with 500 patients. Hopefully, it will come out later this year as well.
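The immediate-versus-gradual goal contrast Patel describes can be sketched in a few lines. This is a hypothetical illustration of the two goal-assignment schedules, not the trial's protocol; the linear ramp and the specific step numbers are assumptions.

```python
# Illustrative sketch of two step-goal schedules: assigning the full
# target from week one vs. ramping up gradually from a baseline.
# The linear ramp is an assumption for illustration only.

def gradual_goals(baseline, target, weeks):
    """Ramp the weekly step goal linearly from baseline up to target."""
    increment = (target - baseline) / weeks
    return [round(baseline + increment * (w + 1)) for w in range(weeks)]

def immediate_goals(target, weeks):
    """Assign the full target goal starting in week one."""
    return [target] * weeks
```

For a patient walking 4,000 steps a day with a 10,000-step target over six weeks, the gradual schedule would ask for 5,000 steps in week one and reach 10,000 only in week six, while the immediate schedule demands 10,000 from the start.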