The Accessibility Police: How the ADA Education and Reform Act Hinders ADA Enforcement and Burdens Americans with Disabilities

By Shailin Thomas

Recently, the House of Representatives voted on and passed the ADA Education and Reform Act of 2017 — an update to the Americans with Disabilities Act of 1990 (42 U.S.C. § 12101 et seq.). The bill changes the process by which private citizens with disabilities and disabling medical conditions can bring lawsuits to enforce statutory access requirements for places of public accommodation. Under Title III of the ADA, “No individual shall be discriminated against on the basis of disability in the full and equal enjoyment of the goods, services, facilities, privileges, advantages, or accommodations of any place of public accommodation.” 42 U.S.C. § 12182. This covers attempts to explicitly discriminate against those with disabilities, refusals to make reasonable modifications to accommodate them, and failures to remove physical barriers to access for them — unless removing those barriers is not “readily achievable.” 42 U.S.C. § 12182(b)(2)(A). One of the primary enforcement mechanisms for these provisions is private litigation brought against non-compliant establishments by those negatively affected by violations. See 42 U.S.C. § 12188.

As of late, however, there has been growing concern in Congress that this private enforcement avenue is too often abused by plaintiffs bringing unjustified or opportunistic lawsuits — the issue the ADA Education and Reform Act of 2017 seeks to address. Under the bill, lawsuits can no longer be brought immediately against non-compliant establishments. Instead, a person aggrieved by inadequate access must send formal, written notification to the establishment and give the owner at least four months to begin dismantling the offending access barrier; only if the owner fails to start the necessary improvements within that window can a lawsuit be brought. Proponents believe these additional hurdles will curb frivolous and abusive ADA lawsuits against businesses unaware of their violations.

Read More

Democratized Diagnostics: Why Medical Artificial Intelligence Needs Vetting

Pancreatic cancer is one of the deadliest illnesses out there.  The five-year survival rate of patients with the disease is only about 7%.  This is, in part, because few observable symptoms appear early enough for effective treatment.  As a result, by the time many patients are diagnosed the prognosis is poor.  There is an app, however, that is attempting to change that.  BiliScreen was developed by researchers at the University of Washington, and it is designed to help users identify pancreatic cancer early with an algorithm that analyzes selfies.  Users take photos of themselves, and the app’s artificially intelligent algorithm detects slight discolorations in the skin and eyes associated with early pancreatic cancer.

Diagnostic apps like BiliScreen represent a huge step forward for preventive health care.  Imagine a world in which the vast majority of chronic diseases are caught early because each of us has the power to screen ourselves on a regular basis.  One of the big challenges for the modern primary care physician is convincing patients to get screened regularly for diseases that have relatively good prognoses when caught early.

I’ve written before about the possible impacts of artificial intelligence and algorithmic medicine, arguing that both medicine and law will have to adapt as machine-learning algorithms surpass physicians in their ability to diagnose and treat disease.  These pieces, however, primarily consider artificially intelligent algorithms licensed to and used by medical professionals in hospital or outpatient settings.  They are about the relationship between a doctor and the sophisticated tools in her diagnostic toolbox — and about how relying on algorithms could decrease the pressure physicians feel to order unnecessary tests and procedures to avoid malpractice liability.  There was an underlying assumption that these algorithms had already been evaluated and approved for use by the physician’s institution, and that the physician had experience using them.  BiliScreen does not fit this mold — the algorithm is not a piece of medical equipment used by hospitals, but rather part of an app that could be downloaded and used by anyone with a smartphone.  Accordingly, apps like BiliScreen fall into a category of “democratized” diagnostic algorithms. While this democratization has the potential to drastically improve preventive care, it could also undermine the financial sustainability of the U.S. health care system.

Read More

Should Medical Offices Be Run Like Law Firms?

By Shailin Thomas

Earlier this summer, the Supreme Court of Pennsylvania ruled that a physician cannot delegate obtaining informed consent from a patient to a member of her staff.  In Shinal v. Toms, a neurosurgeon perforated a patient’s carotid artery while resecting a tumor, which led to hemorrhaging, brain damage, and partial blindness.  The patient alleged that had she known the full risk of the surgery, she would have opted for a less dangerous course of treatment.  While the risks were communicated to the patient, they were communicated by a physician assistant, not the neurosurgeon himself.  After the lower courts both ruled for the physician, the Supreme Court of Pennsylvania reversed, holding that the courts below erred in allowing the jury to consider statements made by the physician assistant to the patient — because the responsibility to obtain informed consent is the physician’s alone and cannot be delegated.  According to the court, “[i]nformed consent requires direct communication between physician and patient, and contemplates a back-and-forth, face-to-face exchange.”

While requiring physicians to give risk information in person sounds appealing, it runs counter to efforts to utilize physician time more efficiently.  Physician time is expensive — and rightly so.  After college, medical school, internship, residency, and any number of fellowships, physicians have undergone a staggering amount of training.  In light of this investment in human capital, it’s no surprise that the hourly rate for anything a physician does is astronomical. This makes sense when those hours are spent performing neurosurgery, reading radiographs, or engaging in other activities that require the full extent of a physician’s medical training.  But it can lead to sizable inefficiencies when those hours are spent on tasks which can be readily done by qualified staff members, such as nurse practitioners, registered nurses, and medical assistants, at a fraction of the hourly rate.

Read More

FDA v. Opana ER: Opioids, Public Health, and the Regulation of Second-Order Effects

Earlier this month, the FDA announced that it is asking Endo Pharmaceuticals to remove the opioid Opana ER from the market.  Opana ER is an extended-release pain reliever often abused by those who take it.  While opioid abuse is nothing new, and many opioids leave those who take them addicted to narcotics or heroin, Opana ER is particularly dangerous because of how people misuse it.  The pill was designed to prevent would-be abusers from crushing and snorting it —  a popular means of ingesting prescription opioids.  Without the ability to crush and snort the drug, however, abusers turned to dissolving the pills and injecting them intravenously, leading to outbreaks of Hepatitis C, HIV, and other blood-borne diseases.  In Indiana’s Scott County, for instance, the prevalence of HIV has skyrocketed since the introduction of Opana ER to the local population, with 190 new cases since 2015.

While this foray into public health is somewhat surprising — given the anti-regulatory stance of the current administration and its billionaire backers — it is precisely the type of initiative the FDA should be taking.  Public health is a central part of the FDA’s mission statement, which notes that the agency “is responsible for protecting the public health by ensuring the safety, efficacy, and security of human and veterinary drugs, biological products, and medical devices.”  Traditionally, though, the FDA’s efforts to ensure safety and efficacy have been limited to the narrow context of individual patients taking medications as directed under physician supervision.  As the FDA noted in its Opana ER press release, this is the first time it has requested that an opioid be taken off the market as a result of its susceptibility to abuse and the associated public health consequences.

Read More

Negligent Failure to Prevent Suicide in the Age of Facebook Live

By Shailin Thomas

In 2016, Facebook unveiled a new tool that allows users to post live streams of video directly from their phones to the social media platform. This feature — known as “Facebook Live” — allows friends and followers to watch a user’s videos as she films them. Originally conceptualized as a means of sharing experiences like concerts or vacations in real time, the platform was quickly adopted for uses Facebook likely didn’t see coming. In 2016, Diamond Reynolds used Facebook Live to document the killing of her boyfriend, Philando Castile, by a Minnesota police officer, sparking a national debate surrounding police brutality and racial disparities in law enforcement. Recently, another use for Facebook Live has arisen — one that Facebook neither foresaw nor wants: people have been using Facebook Live as a means of broadcasting their suicides.

This tragic adaptation of the Facebook Live feature has put Facebook in a tough spot. It wants to prevent the suicides its platform is being used to document — and just a few weeks ago it rolled out real-time tools viewers of Live videos can use to identify and reach out to users who may be contemplating suicide while they’re filming — but it’s often too late by the time the video feed is live. Accordingly, Facebook is focusing its efforts on identifying those at risk of suicide before the situation becomes an emergency. It currently has teams designing artificial intelligence algorithms to identify users who may be at risk for suicide. These tools would scan Facebook users’ content, flagging individuals whose posts show warning signs of self-harm or suicide.

Read More

Medicare Advantage Might Have Potential — If Companies Play Fair

By Shailin Thomas

Medicare Advantage was introduced as a mechanism for capturing some of the oft-extolled efficiencies of the private health insurance market. Instead of paying providers for services directly, as in traditional Medicare, the government pays Medicare Advantage insurers a predetermined, risk-adjusted amount of money per patient to cover all medical expenses for the year. The risk adjustment ensures that companies insuring Medicare Advantage patients with chronic diseases — who will likely need more intensive, expensive care — receive additional funds to help cover those costs. For each qualifying condition a patient has, the Medicare Advantage plan receives, on average, an additional $3,000 annually.

While the risk adjustment of Medicare Advantage payments was well intentioned and economically rational, it appears to have opened up an avenue for significant abuse on the part of Medicare Advantage insurers. The Department of Justice recently joined a lawsuit against UnitedHealth, a large provider of Medicare Advantage plans, for allegedly defrauding the government out of hundreds of millions, if not billions, of dollars. The complaint alleges that UnitedHealth “upcoded” its risk-adjustment claims by submitting for conditions patients did not actually have and refusing to correct false claims when it discovered or should have discovered them. In essence, the company allegedly realized it could extract more money out of the government by making the patients it covers appear sicker than they actually are, and took full advantage of that.
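The financial incentive behind the alleged upcoding is easy to see with a back-of-the-envelope calculation. The sketch below uses the roughly $3,000-per-condition average mentioned above; the base payment and patient count are illustrative assumptions, not figures from the complaint, and real risk adjustment uses a far more complex risk-score model rather than a flat per-condition amount:

```python
# Illustrative sketch of the Medicare Advantage risk-adjustment incentive.
# Only the ~$3,000-per-condition average comes from the text above; the
# base payment and enrollment figures are hypothetical.

BASE_PAYMENT = 10_000          # assumed base capitated payment per patient per year
PER_CONDITION_PAYMENT = 3_000  # average additional payment per qualifying condition

def annual_payment(num_conditions: int) -> int:
    """Capitated payment for one patient with the given number of
    qualifying chronic conditions."""
    return BASE_PAYMENT + PER_CONDITION_PAYMENT * num_conditions

# A plan covering 100,000 patients that reports one unsupported extra
# condition per patient would collect an additional:
patients = 100_000
extra_revenue = patients * (annual_payment(2) - annual_payment(1))
print(f"${extra_revenue:,}")  # $300,000,000
```

At these assumed numbers, a single phantom diagnosis per patient across a large book of business reaches the "hundreds of millions" scale alleged in the suit — which is why the accuracy of submitted condition codes matters so much.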

Read More

Artificial Intelligence and Medical Liability (Part II)

By Shailin Thomas

Recently, I wrote about the rise of artificial intelligence in medical decision-making and its potential impacts on medical malpractice. I posited that, by decreasing the degree of discretion physicians exercise in diagnosis and treatment, medical algorithms could reduce the viability of negligence claims against health care providers.

It’s easy to see why artificial intelligence could impact the ways in which medical malpractice traditionally applies to physician decision-making, but it’s unclear who should be responsible when a patient is hurt by a medical decision made with an algorithm. Should the companies that create these algorithms be liable? They did, after all, produce the product that led to the patient’s injury. While intuitively appealing, traditional means of holding companies liable for their products may not fit the medical algorithm context very well.

Traditional products liability doctrine applies strict liability to most consumer products. If a can of soda explodes and injures someone, the company that produced it is liable, even if it didn’t do anything wrong in the manufacturing or distribution processes. Strict liability works well for most consumer products, but would likely prove too burdensome for medical algorithms. This is because medical algorithms are inherently imperfect. No matter how good the algorithm is — or how much better it is than a human physician — it will occasionally be wrong. Even the best algorithms will give rise to potentially substantial liability some percentage of the time under a strict liability regime.

Read More

Artificial Intelligence, Medical Malpractice, and the End of Defensive Medicine

By Shailin Thomas

Artificial intelligence and machine-learning algorithms are the centerpieces of many exciting technologies currently in development. From self-driving Teslas to in-home assistants such as Amazon’s Alexa or Google Home, AI is swiftly becoming the hot new focus of the tech industry. Even those outside Silicon Valley have taken notice — Harvard’s Berkman Klein Center and the MIT Media Lab are collaborating on a $27 million fund to ensure that AI develops in an ethical, socially responsible way. One area in which machine learning and artificial intelligence are poised to make a substantial impact is health care diagnosis and decision-making. As Nicholson Price notes in his piece Black Box Medicine, medicine “already does and increasingly will use the combination of large-scale high-quality datasets with sophisticated predictive algorithms to identify and use implicit, complex connections between multiple patient characteristics.” These connections will allow doctors to increase the precision and accuracy of their diagnoses and decisions, identifying and treating illnesses better than ever before.

As it improves, the introduction of AI to medical diagnosis and decision-making has the potential to greatly reduce the number of medical errors and misdiagnoses — and allow diagnosis based on physiological relationships we don’t even know exist. As Price notes, “a large, rich dataset and machine learning techniques enable many predictions based on complex connections between patient characteristics and expected treatment results without explicitly identifying or understanding those connections.” However, by shifting pieces of the decision-making process to an algorithm, increased reliance on artificial intelligence and machine learning could complicate potential malpractice claims when doctors pursue improper treatment as the result of an algorithm error. In its simplest form, the medical malpractice regime in the United States is a professional tort system that holds physicians liable when the care they provide to patients deviates from accepted standards so much as to constitute negligence or recklessness. The system has evolved around the conception of the physician as the trusted expert, and presumes for the most part that the diagnosing or treating physician is entirely responsible for her decisions — and thus responsible if the care provided is negligent or reckless.

Read More

Maybe For-Profit Hospitals Aren’t So Bad

By Shailin Thomas

For-profit hospitals have taken their fair share of flak over the years. Much maligned by many in the medical community, they are seen as money-hungry corporate machines that pervert the medical profession by putting the bottom line before patient care. This skepticism of profit-driven hospitals feels right. Medicine has long been the purview of charitable organizations and religious institutions. It’s supposed to be a calling — a public service to which practitioners are drawn — not a check to cash at the bank.

As for-profit hospitals proliferated, research suggested that they had quality and cost issues stemming from their profit motives. For-profit hospitals had higher mortality rates, employed fewer trained professionals per bed, and were more expensive than their non-profit and government counterparts. Researchers speculated that this was the result of duties owed to shareholders by corporate leaders or compensation incentives for executives based on profitability rather than quality of care. These studies seemed to confirm what many thought they already knew: medicine and money don’t mix well.

More recent studies, however, suggest that for-profit hospitals may have turned over a new leaf. Since 2010, for-profit hospitals have outperformed non-profits in the “Top Performer” evaluation carried out by The Joint Commission — an organization that accredits hospitals in the US — with a higher percentage of for-profit hospitals qualifying for the honor than non-profits. A study from the Harvard T.H. Chan School of Public Health published in JAMA found that hospitals that converted from non-profit to for-profit improved their financial position by increasing their total margins and experienced no change in mortality rates.

Read More

Will Medicare Reform be a Republican Obamacare?

By Shailin Thomas

As the health care community waits with bated breath to see what will become of the Affordable Care Act under the Trump administration, Republicans in Congress have set their sights on another health-related initiative that has been on their wish list for years: reforming Medicare. While Trump promised throughout his campaign not to change the fundamental ways in which Medicare works — in part to appeal to older voters, who overwhelmingly would like the program to stay as it is — shortly after the election, “modernizing Medicare” appeared as a priority on the transition website for the new administration.

The reform many Republicans are pushing for — championed by Speaker of the House Paul Ryan (R-WI) — is privatization along the lines of Medicare Advantage. Instead of providing for full insurance coverage through the government, as traditional Medicare currently does, Ryan’s proposal would have eligible patients purchase insurance from private companies with financial assistance from the government. The theory is that by having private insurers provide coverage, Medicare will capture efficiencies of the private market, while simultaneously offering consumers more choice in the coverage they receive.

After Paul Ryan first unveiled this plan in 2011, the Kaiser Family Foundation released a report detailing the significant fiscal problems with this “modernized” vision of Medicare. According to the Foundation’s analysis, the average out-of-pocket expense for beneficiaries would increase from $5,630 under the current system to $12,500. The reason for this increase, according to the Congressional Budget Office, is that providing coverage is actually more expensive for a private insurer than it is for the government.  The proposal faces other economic challenges as well, and ironically, some of them stem from its close resemblance to Obamacare.

Read More