Personal Responsibility and the Procrastination Problem

By Nathaniel Counts

We have all been confronted with the procrastination problem in one form or another.  You have a paper due in a month, and you have two options: you can work on it a little every day, or you can save it all for the last two days and finish it then.  If you do not procrastinate, you will be happier – the work will feel like less of a burden and you will be less stressed.  However, one of your primary interests is spending time with your friends.  Your friends are all in class with you, and you do not have other friends.  If you decide not to procrastinate and they procrastinate, your little bit of work every day means missing out on the trips your friends take, and by the time you are free, nearer the paper’s due date, your friends will all be busy.

Your friends all procrastinate.  What do you do?  You procrastinate as well.  In your calculus, the additional stress of having to do the paper in less time is offset by the additional time with friends.
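
The structure here is a simple coordination game, and it can be made concrete with a few illustrative numbers.  The payoffs below are invented, not anything empirical; all that matters is their ordering – working pays off most when everyone works, and procrastinating becomes the best response once your friends procrastinate.

```python
# A minimal sketch of the procrastination problem as a coordination game.
# The payoff numbers are invented for illustration; only their ordering matters.

# Your payoff, indexed by (your choice, your friends' choice).
payoffs = {
    ("work",          "work"):          10,  # low stress, friends free when you are
    ("work",          "procrastinate"):  3,  # low stress, but you miss the trips
    ("procrastinate", "work"):           4,  # high stress, and friends are busy early
    ("procrastinate", "procrastinate"):  7,  # high stress offset by time with friends
}

def best_response(friends_choice: str) -> str:
    """The choice that maximizes your payoff, given what your friends do."""
    return max(("work", "procrastinate"),
               key=lambda mine: payoffs[(mine, friends_choice)])

print(best_response("work"))           # -> work
print(best_response("procrastinate"))  # -> procrastinate
```

With any payoffs ordered this way, everyone-works and everyone-procrastinates are both stable outcomes; which one you land in depends entirely on what the group is already doing.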

Now re-imagine the scenario with the same closed social group, but the decision is whether or not to do drugs.  If you decide not to do drugs, you will likely live a longer, healthier life, but if your friends decide to do drugs and trade health for short-term pleasure, you are once again presented with the procrastination problem.  If you make the “responsible” choice, you miss out on activities with your friends now and will be healthy and capable later, when your friends are not.  What do you do?


Toward an Epidemiological Definition of Community

By Nathaniel Counts

With the coordination and additional funding afforded by the National Prevention, Health Promotion, and Public Health Council and the Prevention and Public Health Fund under the Affordable Care Act, scholars may have a unique opportunity to work toward an epidemiological definition of community.  The evaluation and record-keeping components of the different interventions will inevitably produce a great deal of additional information about individuals, including their beliefs, behaviors, and health, over time.  If one’s behaviors – in particular the Leading Health Indicators (ten factors chosen by Health and Human Services that contribute to health, including substance abuse, exercise levels, condom use, etc.) – and health status are determined in part by social signaling, it may be possible to use this data to determine which individuals seem to be part of a community.  Various environmental, and possibly even genetic, factors could be controlled for to find groups of individuals whose Leading Health Indicators affect one another’s, and whose health statuses are linked.  This grouping would be a functional “community”: a group of people who influence one another, whether they realize it or not.  Currently, the notion of community is usually defined geographically – your community is made up of those who live near you, unless you live in a city, in which case it is those nearby who are of similar socioeconomic class.  The epidemiological method would allow for greater precision in identifying the groups that actually influence one another.
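
To make the idea concrete, here is a rough sketch of what the simplest version of such an analysis might look like, assuming we had one Leading Health Indicator tracked over time for each individual.  The data, the correlation threshold, and the use of plain correlation as a stand-in for “influence” are all assumptions for illustration; a real analysis would first control for the environmental and genetic confounders mentioned above.

```python
# A rough sketch: treat people whose health-indicator trajectories move
# together as a functional "community".  Everything here is illustrative.

import numpy as np

def functional_communities(indicators: np.ndarray, threshold: float = 0.7):
    """indicators: (n_people, n_timepoints) array of one Leading Health
    Indicator tracked over time.  Returns groups of people whose
    trajectories are strongly correlated with one another's."""
    corr = np.corrcoef(indicators)       # pairwise correlation of trajectories
    n = corr.shape[0]
    parent = list(range(n))              # union-find over "linked" pairs

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if corr[i, j] >= threshold:  # linked indicators -> same community
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```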

A more precise understanding of community would be useful for assessing the impact of interventions, public health or otherwise.  If you can map the initial community structures at the beginning of an intervention, you can target individual communities for change and watch how their Leading Health Indicators and health statuses evolve.  You could also, and more importantly, see how an intervention changes the make-up of a community.  A new basketball program in a local gymnasium will bring together different arrangements of individuals, who may in turn influence one another, joining them into a community and linking their health statuses.  This could inform the choice of programs – a youth basketball league will shape communities differently than a family program or an adult program would, and conscious choices could be made about how to bin people based on their current risk behaviors.  This type of information should also give pause to anyone planning an intervention of any sort – any interaction could reshape communities, subtly changing individuals’ values and even their health in ways unbeknownst to them and unintended by the intervener.
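
Continuing the sketch above, one way to put a number on how much an intervention reshaped community make-up is to compare the partitions found before and after it.  Here that is done with a plain Rand index – the fraction of pairs of people on which the two partitions agree about whether the pair shares a community.  The choice of index, like the simulated data, is only an assumption for illustration.

```python
# Comparing community structure before and after a (hypothetical) intervention.
# Relies on np and functional_communities from the sketch above.

from itertools import combinations

def rand_index(partition_a, partition_b, n: int) -> float:
    """Fraction of pairs of people the two partitions agree on."""
    def together(partition):
        label = {}
        for g, group in enumerate(partition):
            for person in group:
                label[person] = g
        return {pair: label[pair[0]] == label[pair[1]]
                for pair in combinations(range(n), 2)}

    a, b = together(partition_a), together(partition_b)
    return sum(a[pair] == b[pair] for pair in a) / len(a)

rng = np.random.default_rng(0)
baseline = rng.normal(size=(20, 12))   # hypothetical pre-intervention trajectories
followup = rng.normal(size=(20, 12))   # hypothetical post-intervention trajectories
print(rand_index(functional_communities(baseline),
                 functional_communities(followup), n=20))
```

A score of 1.0 would mean the intervention left community structure untouched; lower scores mean more reshaping.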

Economics, Morality, and End of Life Care (Part II)

By Nathaniel Counts

[Part I is here.]

First, let us consider whether opting for end of life care is morally problematic on its own, without reference to its resource costs.  Certainly wanting to live longer is not, on its own, morally problematic in our society – we do not consider exercise or healthy eating wrong, and many tout them as social goods.  On the other hand, wanting to die early, for example through suicide, has traditionally been viewed as a moral wrong and is illegal in many areas.  It similarly seems that there is nothing problematic with wanting even a short amount of additional time, even at arguably compromised quality.  If end of life care involved only pressing a button for an additional day of life, even in severe pain, it is unlikely that anyone would consider the pressing of the button morally good or bad.

It may be that opting for intensive curative end of life care is irrational, even if it were cost-free.  If palliative and hospice care would have led to greater overall life enjoyment for that time period, regardless of its length, then opting for the intensive treatments would not have been the right choice, even for that individual.  This does not make it immoral, however – individuals are generally allowed to make choices that are worse for themselves so long as they do not violate norms, and the pervasiveness of these intensive treatments at the end of life may indicate that they are in fact the norm.


Economics, Morality, and End of Life Care

By Nathaniel Counts

Over a quarter of Medicare spending goes toward patients’ last six months of life.  This monopolizes limited resources, both in hospitals and in the federal budget.  Much of the blame for this overspending is placed on institutional incentives or medical training for promoting aggressive end of life care, but some would also place the blame on patients or their families, arguing that this behavior reflects a flaw in our culture.  The argument goes that if people learned to be less afraid of death, they would forgo this costly life-extending care and die peacefully, freeing these resources for use elsewhere with greater utility.  In this argument there is a potentially worrying conflation of moral and economic reasoning, which would be problematic if applied in other contexts.

It would be one thing to say that, given a limited pool of resources, a cost-benefit analysis indicates that end of life care is inefficient, that quality-adjusted life years across the system would best be maximized if the money were spent elsewhere, and that those in need of end of life care and their families will need to adjust their expectations.  But integral to the argument in the first paragraph is the claim that this misallocated spending is the result of a moral failing – perhaps not of the individual, but of the society that imbued the person with the preference for aggressive treatment.  On this view, the failing is worth correcting not only because doing so will save money and make individuals more comfortable with the fact that the resources to support end of life care are no longer there, but because it will provide some moral benefit to those whose values are changed.
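
The purely economic half of the argument is easy to state as arithmetic.  The toy calculation below ranks interventions by cost per quality-adjusted life year (QALY); every number in it is invented for illustration.  Notice that the calculation says only where a fixed budget buys the most QALYs – and nothing at all about whether anyone’s preferences are a moral failing.

```python
# A toy cost-effectiveness calculation.  All costs and QALY figures are
# invented for illustration; only the form of the argument matters.

interventions = {
    # name: (cost per patient in dollars, QALYs gained per patient)
    "aggressive end-of-life care": (60_000, 0.25),
    "hospice / palliative care":   (15_000, 0.30),
    "HIV prophylaxis":             (5_000,  1.50),
}

budget = 1_000_000
for name, (cost, qalys) in sorted(interventions.items(),
                                  key=lambda kv: kv[1][0] / kv[1][1]):
    print(f"{name}: ${cost / qalys:,.0f} per QALY; "
          f"{(budget // cost) * qalys:.1f} QALYs if the whole budget went here")
```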

What interests me is how the economic argument (that it would be a better use of resources to spend the money elsewhere) informs the moral argument (that it would be better if people accepted their deaths).  This reasoning is peculiar because it does not show up consistently throughout health rationing: if a country decides to spend limited resources on HIV prophylactic drugs rather than HIV treatment drugs, no one would argue that it is in any way unreasonable for HIV-positive individuals to want treatment, or that they should be more at peace with a terminal illness.


The Whitehall Studies and Human Rights

By Nathaniel Counts

Professor M. G. Marmot and colleagues conducted two studies, Whitehall I and Whitehall II, examining morbidity and mortality in the British civil service in the 1960s and the 1980s.  British civil servants are all covered by the National Health Service, so the studies controlled for access to healthcare.  What these famous studies found was that morbidity and mortality still correlated with income.  Further research and analysis have concluded that it is job satisfaction and social status, more than income, that determine health outcomes.  Does an individual feel like she has control over the work she does?  Is she stressed out a lot?  How does she feel about herself in relation to those around her?  Does she feel healthy?  Does she like her life?  Those who feel in control of their lives, feel valued by society, and feel good about their health actually live longer and healthier lives on average than those who do not share these beliefs.

Deep structural inequalities exist in every society, and social justice groups everywhere work toward greater social equality.  Does the notion that social inequalities hurt people in a physiological way change how we feel about the mission of equality?  Is health so fundamentally different that individuals who accept economic inequality might mobilize over health inequality?  Health is certainly implicated in the right to a dignified life, a concept underpinning the human rights movement as a whole.  It may be, though, that social inequalities on their own terms are an equal evil, because limitations on one’s ability to pursue her interests are as inimical to human rights as poorer health.


Disabilities and Behavioral Disorders

By Nathaniel Counts

The Americans with Disabilities Act of 1990 (ADA) and related statutes and regulations create a cause of action that allows children and young adults with disabilities to participate equally in public schools and universities.  “Disability” can include behavioral and other mental health disorders – such as depression, anxiety disorder, obsessive-compulsive disorder, phobias, or conduct disorder – to the extent that the disorder interferes with the child’s ability to thrive at school.  Over the course of a school-going career, quite a few people might at some point be considered disabled under the law.

Actual prevalence of behavioral disorders is of course very difficult to measure.  The prevalence in young children of serious emotional disturbances – behavioral disorders that substantially impair a child’s ability to participate in school – was most frequently estimated at between 10% and 20% as of 2006, and the prevalence of behavioral disorders that do not rise to the level of serious emotional disturbance is likely somewhat higher.  Among college students in 2012, one study found that about 30% reported feeling so depressed at some point within the past 12 months that it was difficult to function, and about 20% reported being diagnosed with or receiving treatment for some type of mental health disorder within the past 12 months.  Even given these limited statistics, it is evident that a significant percentage of the population is affected, or will be affected at some point in their lifetime, by a behavioral disorder, and that a sizable proportion of these individuals would likely benefit from some form of services or accommodation in their schooling.

The prevalence of behavioral disorders raises the question: what if the majority of the population experiences some form of behavioral disorder?
