Digital Footprints Track Back to Real-World Health

Online behavior, mental health, and advertising commingled in cyberspace.

Changing consumer behavior to improve health is a central goal of digital health efforts. But knowing what motivates personal behavior can also help sell products. When it comes to health data privacy, can we allow the good influences while eliminating exploitation? Health information has enormous potential to change personal behavior.

But what if an advertiser knew my risk of depression and used it to sell weight loss pills? What considerations should govern using mental health information to direct me to content? Reliable medical advice can be skewed by advertising even when advertisers don't intend it, and a lack of clarity about how data is used is changing our judgment about health.

For now, the landscape holds more questions than answers.

In the Mood for Shopping

Are the ads we click and the social media platforms we use sending mental health information to advertisers? How does this affect social media users? Online behavior can predict violence or mental health status. Behavior on social media changes when someone is in an altered mental state, and those changes can be detected in the data that platforms collect. For patients with certain mental illnesses, risky behavior such as extreme spending is expected. Affective disorders also change mobile phone usage, so a mobile application might be able to identify or predict the risk of mania.
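To make that concrete, here is a minimal, purely illustrative sketch of how such a screen might work. The features, weights, and cutoff are invented for illustration and are not drawn from any real clinical model.

```python
# Purely illustrative: a toy screen for manic-episode risk based on
# hypothetical phone-usage features. All names, weights, and thresholds
# are invented; real research models are far more complex and validated.
from dataclasses import dataclass

@dataclass
class DailyUsage:
    texts_sent: int            # messages sent in 24 hours
    night_screen_minutes: int  # screen time between midnight and 5 a.m.
    purchases: int             # in-app or online purchases

def mania_risk_score(today: DailyUsage, baseline: DailyUsage) -> float:
    """Score how far today's usage deviates above a personal baseline."""
    def ratio(now: int, usual: int) -> float:
        return now / max(usual, 1)

    # Weighted sum of deviations from baseline; weights are placeholders.
    return (0.4 * ratio(today.texts_sent, baseline.texts_sent)
            + 0.4 * ratio(today.night_screen_minutes, baseline.night_screen_minutes)
            + 0.2 * ratio(today.purchases, baseline.purchases))

baseline = DailyUsage(texts_sent=30, night_screen_minutes=10, purchases=1)
today = DailyUsage(texts_sent=180, night_screen_minutes=120, purchases=6)

if mania_risk_score(today, baseline) > 2.0:  # arbitrary cutoff
    print("Usage sharply above baseline; flag for follow-up")
```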

The problem is whether the people doing the predicting are trying to protect patients... or profit margins.

Facebook has also experimented with how what you see affects your mood. In 2014, it published a study of a 2012 experiment showing that users' moods could be shifted by what appeared in their feeds. The current bills do not do enough to address the ethics of mood manipulation through digital media. We know that social media can be used to manipulate mood and that platforms' ad programs are designed to be profitable.

The line between ethics and protected health status is not clearly defined.

Our Digital Footprint

Each individual has unique behavior patterns that create a digital footprint. Companies have accessed consumers’ digital footprints to influence their health decisions and sell products. For example, companies can market baby clothes to an individual whose digital footprint indicates pregnancy. When I found out I was pregnant, for instance, I started getting ads for prenatal vitamins, cord blood storage, and child care.

Airlines might use someone’s flight search history to track demand and raise ticket prices. But what should happen when the digital footprint reveals an individual’s personal health vulnerabilities? An algorithm could serve pharmaceutical advertisements to consumers searching for doctors’ offices or symptoms of an ailment. Is it ethical to use an individual’s digital footprint to market health or medical products?
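None of this requires sophisticated machinery. A crude keyword filter over search history is enough to assemble a health-related audience segment, as this sketch shows; the users, queries, and condition keywords are all hypothetical.

```python
# Illustrative sketch of keyword-based ad-audience segmentation.
# The segments, keywords, and search logs are invented examples of how
# trivially a health-related audience can be assembled from searches.
SEGMENT_KEYWORDS = {
    "expecting_parent": {"prenatal vitamins", "ob-gyn near me", "cord blood"},
    "insomnia": {"can't sleep", "melatonin dosage", "sleep clinic"},
}

def segment_users(search_logs: dict[str, list[str]]) -> dict[str, set[str]]:
    """Map each segment name to the set of user IDs whose searches match."""
    audiences: dict[str, set[str]] = {name: set() for name in SEGMENT_KEYWORDS}
    for user_id, queries in search_logs.items():
        for query in queries:
            for segment, keywords in SEGMENT_KEYWORDS.items():
                if any(kw in query.lower() for kw in keywords):
                    audiences[segment].add(user_id)
    return audiences

logs = {
    "user_1": ["best prenatal vitamins 2019", "weekend weather"],
    "user_2": ["melatonin dosage for adults"],
}
print(segment_users(logs))
# {'expecting_parent': {'user_1'}, 'insomnia': {'user_2'}}
```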

I asked Pooja Lakshmin, M.D., a psychiatrist and assistant professor at George Washington University, about mental illness and privacy: What risks are there for patients who don’t realize how much information marketing companies are using to motivate purchases?

“People who are struggling with mental illness are more vulnerable to these types of privacy issues. If these apps are monitoring Google searches, or other online activity, and a patient is searching a specific medication for, say, bipolar disorder, how does that data get used?” she said. “I worry that patients who are having difficulty with emotional regulation and impulse control could be targeted for advertising treatments or products that are not evidence based. These folks may be more desperate for a cure and thus more susceptible to deceptive marketing practices.

“Moreover, on social media platforms like Facebook, people are revealing a good deal of personal, protected health information in private support groups,” Lakshmin added. “However, if social media platforms are collecting, storing and selling this data, these groups are not in fact private.”

Being more susceptible to deceptive practices while revealing personal weaknesses puts patients in a position where for-profit companies can exploit impaired impulse control to increase their revenue.

Employee Wellness

Health tracking devices and online communities allow many individuals to share private information without the fear of judgment. Some employers run health-promoting incentive programs to motivate employees to reach fitness goals. Such organizations rely on these apps to track an individual’s performance. For example, Idaho’s Health Matters program, whose mission is to incentivize healthier habits for state employees, has allowed individuals to share their personal fitness app data for a step-counting competition.

While Health Matters was voluntary, there is a risk that employers might use health data for more cost-effective, and arguably punitive, measures, such as tying insurance rates to physical activity. Information tracking is not perfect, and incentives create the risk that individuals will manipulate their digital health data. My neighbor used to send her child to soccer practice wearing her Fitbit; that’s an easy insurance discount guarantee. Any time people know how the system works, they will game it.

Yet the data shared online and through other digital tools can be more accurate than what patients self-report at the physician’s office. I spoke with Matthew Fisher, J.D., a partner at Mirick O’Connell, about the potential impact of health data regulations. He stressed the importance of regulation that balances the positive aspects of social media with patient protection.

“While (data security) is a good goal, the devil will be in the details in terms of how regulations are promulgated,” Fisher said. “For example, social media is designed to encourage and promote sharing of information. If too many restrictions are put into place, then the benefit that can be derived could be undercut. Ensuring privacy of information, along with awareness, will be a delicate balancing act.

“Hopefully, the regulatory process, if it can occur upon passage of a well thought-out law, will consider and blend perspectives from all viewpoints to help craft regulations that reflect a diversity of viewpoints and understandings,” he added.

People frequently share things on social media that they would not want their employers to see. If you call in sick to work and then post a picture of your weekend at the coast, you need someone to help you improve your judgment. But what if your employer could also see that you were part of a patient group for cancer? Or a potential employer? Current regulations leave patient privacy vulnerable.

Individuals don’t realize all the ways in which they are sharing too much information.

Mental Healthcare Parity? Uninsured.

If you can predict someone’s spending habits and their cost of care, would you bet on a patient you know will not pay? Or on a patient who will bankrupt your healthcare company? The health data that marketing companies collect has implications for all health data sets. Would a bill about health data affect existing HIPAA-covered entities? Do predictive analytics for healthcare cost and risk count as marketing?

Companies already market predictive tools to insurance companies. Information like credit scores and social determinants of health can help healthcare providers understand patient needs and risk. Healthcare systems currently use many different methods to predict the cost of care, and digital and app data could feed into those predictions.
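As a rough illustration of what such a tool might compute, consider this sketch of a cost-risk score. Every input, weight, and threshold here is invented; real actuarial models are proprietary, far more complex, and subject to regulation.

```python
# Illustrative sketch of the kind of cost-risk score an analytics vendor
# might sell to an insurer. All inputs and weights are invented.
def cost_risk_score(credit_score: int,
                    missed_payments: int,
                    chronic_conditions: int) -> float:
    """Higher score = predicted higher cost of care or payment risk."""
    # Normalize the credit score (roughly 300-850) so lower credit raises risk.
    credit_risk = (850 - credit_score) / 550
    return (0.5 * credit_risk
            + 0.3 * min(missed_payments / 5, 1.0)
            + 0.2 * min(chronic_conditions / 3, 1.0))

# A single data-entry error on a credit report can move the score,
# echoing the auto-insurance experience described below.
print(round(cost_risk_score(780, 0, 1), 3))  # 0.13  (clean report)
print(round(cost_risk_score(580, 0, 1), 3))  # 0.312 (erroneous adverse rating)
```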

Patients are protected from insurance discrimination based on pre-existing conditions. But is your digital twin protected? How much your digital behavior can reveal about your protected health classes is unclear, and so is whether that data is protected.

Predictions can also create openings for discrimination. If your online behavior predicts future healthcare costs and the companies collecting it are unregulated, they could use that data to personalize the cost of health insurance or to bar care based on a patient’s ability to pay. Does an ability-to-pay profile built from social media data represent an ethical concern, or a way to exclude patients with potentially high costs of care?

Data tracking errors put vulnerable patients at risk of both mistakes and exploitation. When I asked about this, George Dawson, M.D., shared his experience of having his car insurance premium change because of an error.

“I was (and am) a customer of the same auto insurance company for 30 years and insure two cars with them,” Dawson said. “During that time, we never missed a payment. I received a notice that my premium was being increased significantly because I had an adverse rating on my credit report. The adverse rating was a mistake, but we had to contact the credit reporting company to get it corrected before our auto insurance premium was restored.”

If your credit rating can impact a premium for car insurance, any mental health tracking that identifies your status might also be used to impact the cost of care. But what if you are already struggling to make ends meet and under a lot of financial strain?

“The implication for patients is that it is arbitrary taxation (because they can apparently do it) irrespective of whether they are in good standing,” Dawson said. “It is another unnecessary financial burden, often borne by the people who can afford it the least.”

Identifying at-risk patients could mean that those least able to afford care are charged more. It could also be used to price sick patients out of insurance. And in some cases, a patient might be flagged as high risk because of an error. Healthcare costs are already pushing many patients toward self-pay, and this type of discrimination would mean they couldn’t afford coverage.

Honest Data and Mental Health Wellness

Our social media footprint does not lie. Some of our data from other tracking apps, though, is subject to human use, and misuse. Getting better data into healthcare would be an advantage, but that data can also be used by bad actors. Connected technology has the potential to improve mental health, or to exploit the most vulnerable. The data we share on social media could easily become part of our healthcare profile.

It is unrealistic to expect patients to understand the implications of every click and search online. Patients who do not realize that health-focused tracking devices and mobile apps are not subject to privacy protections may unknowingly share their mental health status. A lack of clarity can mean patients assume the worst. When they are not protected, the worst assumption might be correct.

Until we have comprehensive health data privacy protection, patients with mental illness will remain open to exploitation, and insufficient penalties will not offer the protection they need.
