Tag: The Conversation

This is the dog!

Perhaps the loss of a loved dog explains so much!

The posts for the last two days have carried separate and very different stories of terrible cruelty to dogs, the second involving cruelty to both a dog and a bull, inflicted in the name of tradition! Ouch!!

Readers of this place know what they feel about dogs; it is felt deep within their hearts. Those feelings are poured out whenever there's a post, whether from me or someone else, lamenting the loss of a dog.

Just as a tiny example of that love we all have for our dogs, here’s a response from Marina Kanavaki and, trust me, Marina is far from being alone in this regard.

Oh, no, Paul!!! I’m so sorry my friend! It is hard to believe and not so long ago, Casey. I know words can’t take away the pain but you have my thoughts and I’m sending you both my love and hugs.

So a recent essay published on The Conversation site is a must to share with you today. As usual, it is republished within the terms of The Conversation.

ooOOoo

Why losing a dog can be harder than losing a relative or friend

March 9, 2017
Frank T. McAndrew,   Cornelia H. Dudley Professor of Psychology, Knox College.

Recently, my wife and I went through one of the more excruciating experiences of our lives – the euthanasia of our beloved dog, Murphy. I remember making eye contact with Murphy moments before she took her last breath – she flashed me a look that was an endearing blend of confusion and the reassurance that everything was OK because we were both by her side.

When people who have never had a dog see their dog-owning friends mourn the loss of a pet, they probably think it’s all a bit of an overreaction; after all, it’s “just a dog.”

However, those who have loved a dog know the truth: Your own pet is never “just a dog.”

Many times, I’ve had friends guiltily confide to me that they grieved more over the loss of a dog than over the loss of friends or relatives. Research has confirmed that for most people, the loss of a dog is, in almost every way, comparable to the loss of a human loved one. Unfortunately, there’s little in our cultural playbook – no grief rituals, no obituary in the local newspaper, no religious service – to help us get through the loss of a pet, which can make us feel more than a bit embarrassed to show too much public grief over our dead dogs.

Perhaps if people realized just how strong and intense the bond is between people and their dogs, such grief would become more widely accepted. This would greatly help dog owners to integrate the death into their lives and help them move forward.

An interspecies bond like no other

What is it about dogs, exactly, that makes humans bond so closely with them?

For starters, dogs have had to adapt to living with humans over the past 10,000 years. And they’ve done it very well: They’re the only animal to have evolved specifically to be our companions and friends. Anthropologist Brian Hare has developed the “Domestication Hypothesis” to explain how dogs morphed from their grey wolf ancestors into the socially skilled animals that we now interact with in very much the same way as we interact with other people.

Perhaps one reason our relationships with dogs can be even more satisfying than our human relationships is that dogs provide us with such unconditional, uncritical positive feedback. (As the old saying goes, “May I become the kind of person that my dog thinks I already am.”)

This is no accident. They have been selectively bred through generations to pay attention to people, and MRI scans show that dog brains respond to praise from their owners just as strongly as they do to food (and for some dogs, praise is an even more effective incentive than food). Dogs recognize people and can learn to interpret human emotional states from facial expression alone. Scientific studies also indicate that dogs can understand human intentions, try to help their owners and even avoid people who don’t cooperate with their owners or treat them well.

Not surprisingly, humans respond positively to such unrequited affection, assistance and loyalty. Just looking at dogs can make people smile. Dog owners score higher on measures of well-being and they are happier, on average, than people who own cats or no pets at all.

Like a member of the family

Our strong attachment to dogs was subtly revealed in a recent study of “misnaming.” Misnaming happens when you call someone by the wrong name, like when parents mistakenly call one of their kids by a sibling’s name. It turns out that the name of the family dog also gets confused with human family members, indicating that the dog’s name is being pulled from the same cognitive pool that contains other members of the family. (Curiously, the same thing rarely happens with cat names.)

It’s no wonder dog owners miss them so much when they’re gone.

Psychologist Julie Axelrod has pointed out that the loss of a dog is so painful because owners aren’t just losing the pet. It could mean the loss of a source of unconditional love, a primary companion who provides security and comfort, and maybe even a protégé that’s been mentored like a child.

The loss of a dog can also disrupt an owner’s daily routine more profoundly than the loss of most friends and relatives. For owners, their daily schedules – even their vacation plans – can revolve around the needs of their pets. Changes in lifestyle and routine are among the primary sources of stress.

According to a recent survey, many bereaved pet owners will even mistakenly interpret ambiguous sights and sounds as the movements, pants and whimpers of the deceased pet. This is most likely to happen shortly after the death of the pet, especially among owners who had very high levels of attachment to their pets.

While the death of a dog is horrible, dog owners have become so accustomed to the reassuring and nonjudgmental presence of their canine companions that, more often than not, they’ll eventually get a new one.

So yes, I miss my dog. But I’m sure that I’ll be putting myself through this ordeal again in the years to come.

ooOOoo

Just let the messages of this essay reverberate around your heart. I’ll say no more!

Searching for the Truth

Resolving the falsehoods may not be as straightforward as one thinks.

I’m going straight into this last post of my mini-series looking at the state of things. Namely, a recent essay published by Professor Ronald Pies:

Professor of Psychiatry, Lecturer on Bioethics & Humanities at SUNY Upstate Medical University; and Clinical Professor of Psychiatry, Tufts University School of Medicine, Tufts University

I am a psychiatrist and ethicist affiliated with SUNY Upstate Medical University, Syracuse, NY; and Tufts University School of Medicine in Boston. I write on a variety of cross-disciplinary topics, ranging from mental health to philosophy of mind to spirituality. Most recently, I have authored the novella, “The Late Life Bloom of Rose Rabinowitz;” and the poetry chapbook, “The Myeloma Year.”

Here is that post, republished within the terms of The Conversation.

ooOOoo

‘Alternative facts’: A psychiatrist’s guide to twisted relationships to truth

March 1, 2017

The phrase “alternative facts” has recently made the news in a political context, but psychiatrists like me are already intimately acquainted with the concept – indeed, we hear various forms of alternate reality expressed almost every day.

All of us need to parse perceived from actual reality every day, in nearly every aspect of our lives. So how can we sort out claims and beliefs that strike most people as odd, unfounded, fantastical or just plain delusional?

Untruths aren’t always lies

First, we need to make a distinction often emphasized by ethicists and philosophers: that between a lie and a falsehood. Thus, someone who deliberately misrepresents what he or she knows to be true is lying – typically, to secure some personal advantage. In contrast, someone who voices a mistaken claim without any intent to deceive is not lying. That person may simply be unaware of the facts, or may refuse to believe the best available evidence. Rather than lying, he’s stating a falsehood.

Some people who voice falsehoods appear incapable of distinguishing real from unreal, or truth from fiction, yet are sincerely convinced their worldview is absolutely correct. And this is our entree into the psychiatric literature.

In clinical psychiatry, we see patients with a broad spectrum of ideas that many people would find eccentric, exaggerated or blatantly at odds with reality. The clinician’s job is, first, to listen empathically and try to understand these beliefs from the patient’s point of view, carefully taking into account the person’s cultural, ethnic and religious background.

Sometimes, clinicians can be wildly mistaken in their first impressions. A colleague of mine once described a severely agitated patient who was hospitalized because he insisted he was being stalked and harassed by the FBI. A few days into his hospitalization, FBI agents showed up on the unit to arrest the patient. As the old joke goes, just because you’re paranoid doesn’t mean they aren’t after you!

As strongly as she believes, it doesn’t make it true. Talking image via http://www.shutterstock.com.

When what you believe is wrong

We can think of distortions of reality as falling along a continuum, ranging from mild to severe, based on how rigidly the belief is held and how impervious it is to factual information. On the milder end, we have what psychiatrists call over-valued ideas. These are very strongly held convictions that are at odds with what most people in the person’s culture believe, but which are not bizarre, incomprehensible or patently impossible. A passionately held belief that vaccinations cause autism might qualify as an over-valued idea: it’s not scientifically correct, but it’s not utterly beyond the realm of possibility.

On the severe end of the continuum are delusions. These are strongly held, completely inflexible beliefs that are not altered at all by factual information, and which are clearly false or impossible. Importantly, delusions are not explained by the person’s culture, religious beliefs or ethnicity. A patient who inflexibly believes that Vladimir Putin has personally implanted an electrode in his brain in order to control his thoughts would qualify as delusional. When the patient expresses this belief, he or she is not lying or trying to deceive the listener. It is a sincerely held belief, but still a falsehood.

Falsehoods of various kinds can be voiced by people with various neuropsychiatric disorders, but also by those who are perfectly “normal.” Within the range of normal falsehood are so-called false memories, which many of us experience quite often. For example, you are absolutely certain you sent that check to the power company, but in fact, you never did.

As social scientist Julia Shaw observes, false memories “have the same properties as any other memories, and are indistinguishable from memories of events that actually happened.” So when you insist to your spouse, “Of course I paid that electric bill!” you’re not lying – you are merely deceived by your own brain.

A much more serious type of false memory involves a process called confabulation: the spontaneous production of false memories, often of a very detailed nature. Some confabulated memories are mundane; others, quite bizarre. For example, the person may insist – and sincerely believe – that he had eggs Benedict at the Ritz for breakfast, even though this clearly wasn’t the case. Or, the person may insist she was abducted by terrorists and present a fairly elaborate account of the (fictional) ordeal. Confabulation is usually seen in the context of severe brain damage, such as may follow a stroke or the rupture of a blood vessel in the brain.

Lying as a default

Finally, there is falsification that many people would call pathological lying, and which goes by the extravagant scientific name of pseudologia fantastica (PF). Writing in the Psychiatric Annals, Drs. Rama Rao Gogeneni and Thomas Newmark list the following features of PF:

  • A marked tendency to lie, often as a defensive attempt to avoid consequences. The person may experience a “high” from this imaginative story-telling.
  • The lies are quite dazzling or fantastical, though they may contain truthful elements. Often, the lies may capture considerable public attention.
  • The lies tend to present the person in a positive light, and may be an expression of an underlying character trait, such as pathological narcissism. However, the lies in PF usually go beyond the more “believable” stories of persons with narcissistic traits.

Although the precise cause or causes of PF are not known, some data suggest abnormalities in the white matter of the brain – bundles of nerve fibers surrounded by an insulating sheath called myelin. On the other hand, the psychoanalyst Helene Deutsch argued that PF stems from psychological factors, such as the need to enhance one’s self-esteem, secure the admiration of others or to portray oneself as either a hero or a victim.

Who cares about facts anyway?

Of course, all of this presumes something like a consensus on what constitutes “reality” and “facts” and that most people have an interest in establishing the truth. But this presumption is looking increasingly doubtful, in the midst of what has come to be known as the “post-truth era.” Charles Lewis, the founder of the Center for Public Integrity, described ours as a period in which “up is down and down is up and everything is in question and nothing is real.”

Are lies becoming our rose-colored glasses? Christian Bucad, CC BY-NC-ND

Even more worrisome, the general public seems to have an appetite for falsehood. As writer Adam Kirsch recently argued, “more and more, people seem to want to be lied to.” The lie, Kirsch argues, is seductive: “It allows the liar and his audience to cooperate in changing the nature of reality itself, in a way that can appear almost magical.”

And when this magical transformation of reality occurs, whether in a political or scientific context, it becomes very difficult to reverse. As the writer Jonathan Swift put it, “Falsehood flies, and the Truth comes limping after it.”

Psychiatrists are not in a position to comment on the mental health of public figures they have not personally evaluated or on the nature of falsehoods sometimes voiced by our political leaders. Indeed, the “Goldwater Rule” prohibits us from doing so. Nevertheless, psychiatrists are keenly aware of the all-too-human need to avoid or distort unpleasant truths. Many would likely nod in agreement with an observation often attributed to the psychoanalyst Carl Jung: “People cannot stand too much reality.”

ooOOoo

With Carl Jung’s words echoing in one’s mind, the reaction that comes to me, and undoubtedly to many others, is that the time when we could limit how much reality we take on board is rapidly coming to a close.

Or so much more elegantly conveyed by Maya Angelou.

Back to more gentle and soft ideas tomorrow – and that’s the Truth!

Who pulls the strings?

Or, more specifically, do we believe we have free will?

One of the endless benefits of this wired-up, digital world is how easy it is to have one’s mind opened and stretched a little.

Take this, for instance, as an intriguing start to a new day.

Do we have free will?

This isn’t a question I can answer, but what I am interested in is “what happens if we do (or do not) believe in free will?” In other words, does believing in free will matter in your daily life?

Just let one’s mind float around that idea, not only as it applies to us humans but also to the animals that share our human intuition, such as dogs and horses.

So what’s got me bubbling along today? Nothing less than an article that appeared on The Conversation blog-site last September.

I found it fascinating and hope you do as well. It is republished within the terms of The Conversation site.

ooOOoo

Believing in free will makes you feel more like your true self

September 1, 2016

By Elizabeth Seto, Ph.D. Candidate in Social and Personality Psychology, Texas A&M University.

Believing in free will makes us feel more like ourselves. Man walking via http://www.shutterstock.com

Do we have free will? This is a question that scholars have debated for centuries and will probably continue to debate for centuries to come.

This isn’t a question I can answer, but what I am interested in is “what happens if we do (or do not) believe in free will?” In other words, does believing in free will matter in your daily life?

My colleagues and I at the Existential Psychology Lab at Texas A&M University study the psychological outcomes of belief in free will. While contemplating my next research project, I realized that at some point in our lives, we all want to understand who we are – it’s human nature. So, we decided to explore how believing in free will influences our sense of self and identity.

One way or another? Feet image via http://www.shutterstock.com.

What is free will?

Free will is generally understood as the ability to freely choose our own actions and determine our own outcomes. For example, when you wake up in the morning, do you hit snooze? Do you put on your workout gear and go for a run? Do you grab a hot cup of coffee? While those are simple examples, if you believe in free will, you believe there are a limitless number of actions you can engage in when you wake up in the morning, and they are all within your control.

Believing in free will helps people exert control over their actions. This is particularly important in helping people make better decisions and behave more virtuously.

For instance, research has found that promoting the idea that a person doesn’t have free will makes people become more dishonest, behave aggressively and even conform to others’ thoughts and opinions. And how can we hold people morally responsible for their actions if we don’t believe they have the free will to act any differently? Belief in free will allows us to punish people for their immoral behaviors.

So, not only is there a value to believing in free will, but those beliefs have profound effects on our thoughts and behaviors. It stands to reason that believing in free will influences how we perceive ourselves.

You might be thinking, “Of course believing in free will influences how I feel about myself.” Even though this seems obvious, surprisingly little research has examined this question. So, I conducted two studies to suss out more about how believing in free will makes us feel.

What believing in free will makes us feel about ourselves

In the first study, I recruited 304 participants from Amazon Mechanical Turk and randomly assigned them to write about either personal experiences reflecting a high belief in free will, like changing career paths or resisting drugs or alcohol, or experiences reflecting a low belief in free will, such as growing up in poverty or working under an authoritative boss. Then, they were all asked to evaluate their sense of self.

Participants who wrote about experiences reflecting low belief in free will reported feeling less “in touch” with their true selves. In other words, they felt like they did not know themselves as well as the participants who wrote about experiences reflecting high belief in free will.

Then, I conducted a follow-up study testing one’s sense of authenticity – the feeling that one is behaving according to one’s own beliefs, desires and values.

I recruited another group of participants from Amazon Mechanical Turk, and like the first experiment, randomly assigned them to write about personal experiences demonstrating high belief in free will or low belief in free will. Then, they all completed a decision-making task where they had to make a series of choices about whether to donate money to charity or to keep the money for themselves.

Afterwards, participants were asked how authentic they felt while making their decisions. Participants in the low free will group reported feeling less authentic than participants in the high free will group.

Up and at it. Female runner image via http://www.shutterstock.com.

So, what does this all mean?

Ultimately, when people feel they have little control over their actions and outcomes in life, they feel more distant from their true, authentic selves. They are less in touch with who they are and do not believe their actions reflect their core beliefs and values.

We believe this is because belief in free will is linked to feelings of agency, the sense that we are the authors of our actions and are actively engaged with the world. As you can imagine, this sense of agency is an important part of a person’s identity.

The importance of feeling like you are in charge of your life applies to significant actions like moving or getting a new job or pondering the big questions in life. But it also applies to the minor decisions we make throughout the day.

Here’s one simple, though relatable, decision I am faced with every morning. When I wake up in the morning and decide to put on my workout gear and go for a run instead of hitting snooze, I might feel like I am the primary decision-maker for this morning routine. Additionally, I am most likely acting on the part of me that values physical health.

But what if I wake up, and I feel like I can’t exercise because I have to go to work or some other external factor is making it difficult to go? I might feel as if someone or something else is controlling my behavior, and perhaps, less like my true self.

So, do you have free will? Do any of us? Remember, the question isn’t whether it exists or not, but whether you believe it does.

ooOOoo

Now thinking of dogs having their own free will might seem a little bizarre, but I do not intend it to be seen as such. Many of you will have dogs (and horses) that have ‘minds of their own’.

For our family here at home, if there’s one of our dogs that exhibits free will it is our Brandy.

Our Brandy is a Pyrenean Mastiff!

Without warning or any other indication, he will suddenly decide it is time to go ‘walk-about’. Mainly during the day but sometimes at night, whatever the weather, he will disappear. He will always return but can be wandering around our thirteen acres for up to an hour.

Does he have free will?

Does he believe he has free will?

Do we believe he has free will?

What, dear reader, do you think?

Economic marginalisation.

For those looking for answers to the crisis in liberal democracy, this may well be it.

In yesterday’s post Tensions abound in many societies I offered the viewpoint that the ‘left’ arguing with the ‘right’ in politics was utterly inappropriate. Simply because we, the people who live on this planet, have to start working together if we wish mankind to have a future on Planet Earth.

Yesterday’s post also referred to Inductive and Deductive Reasoning, with me proposing that the future had to be built on a universally acknowledged relationship between ’cause’ and ‘effect’. A relationship built on a clear axiom, or theorem, as we see all around us in both the physical and natural worlds.

This idea does take a little time to filter through, and I would be the first to say that I had to spend quite a while reflecting on it to fully understand the difference, the power, of deductive reasoning. Plus how a behavioural ‘law’ could be seen as much as an axiom as is, for example, the speed of light, or the relationship of gravity to mass.

So returning to economics.

Quite recently there was an essay published on The Conversation blogsite written by Professor Andrew Cumbers of the University of Glasgow.

His thesis centres on what he calls economic democracy, which is “… about how well dispersed economic decision-making power is and how much control and financial security people have over their lives.”

That relationship is the core message of his essay.

In other words, as I see it, there is an axiom, a theorem, that governs the relationship between the leadership process of a country and the degree to which that country’s society could be classed as a democratic society.

Here is Professor Cumbers’s essay as published by The Conversation blogsite and republished here within the terms of The Conversation.

ooOOoo

New index of economic marginalisation helps explain Trump, Brexit and alt.right

January 12, 2017 10.03am EST

Author: Professor Andrew Cumbers, University of Glasgow
“My fellow disenfranchised Americans …” EPA

If 2016 brought Brexit, Donald Trump and a backlash against cosmopolitan visions of globalisation and society, the great fear for 2017 is further shocks from right-wing populists like Geert Wilders in Holland and Marine Le Pen in France. A new mood of intolerance, xenophobia and protectionist economics seems to be in the air.

In a world of zero-hour contracts, Uber, Deliveroo and the gig economy, access to decent work and a sustainable family income remains the main fault line between the winners and losers from globalisation. Drill into the voter data behind Brexit and Trump and they have much to do with economically marginalised voters in old industrial areas, from South Wales to Nord-Pas-de-Calais, from Tyneside to Ohio and Michigan.

These voters’ economic concerns about industrial closures, immigrants and businesses decamping to low-wage countries seemed ignored by a liberal elite espousing free trade, flexible labour and deregulation. They turned instead to populist “outsiders” with simplistic yet ultimately flawed political and economic narratives.

Much has been said about the crisis of liberal political democracy, but these trends look inextricably linked with what is sometimes referred to as economic democracy. This is about how well dispersed economic decision-making power is and how much control and financial security people have over their lives. I’ve been involved in a project to look at how this compares between different countries. The results say much about the point we have reached, and where we might be heading in future.

The index

Our economic democracy index looked at 32 countries in the OECD (omitting Turkey and Mexico, which had too much missing data). While economic democracy tends to focus on levels of trade union influence and the extent of cooperative ownership in a country, we wanted to take in other relevant factors.

We added three additional indicators: “workplace and employment rights”; “distribution of economic decision-making powers”, including everything from the strength of the financial sector to the extent to which tax powers are centralised; and “transparency and democratic engagement in macroeconomic decision-making”, which takes in corruption, accountability, central bank transparency and different social partners’ involvement in shaping policy.

What is striking is the basic difference between a more “social” model of northern European capitalism and the more market-driven Anglo-American model. Hence the Scandinavian countries score among the best, with their higher levels of social protection, employment rights and democratic participation in economic decision-making. The reverse is true of the more deregulated, concentrated and less democratic economies of the English-speaking world. The US ranks particularly low, with only Slovakia below it. The UK too is only 25th out of 32.

Economic Democracy Index, figures from 2013. Andrew Cumbers

Interestingly, France ranks relatively highly. This reflects its strong levels of job protection and employee involvement in corporate decision-making – the fact that the far right has been strong in France for a number of years indicates its popularity stems from race at least as much as economics.

Yet leading mainstream presidential candidates François Fillon and Emmanuel Macron are committed to reducing France’s protections. These are often blamed – without much real evidence – for the country’s sluggish job creation record. There is a clear danger both here and in the Netherlands that a continuing commitment to such neoliberal labour market policies might push working class voters further towards Le Pen and Wilders.

One other notable disparity in the index is between the scores of Austria and Germany, despite their relatively similar economic governance. Germany’s lower ranking reflects the growth of labour market insecurity and lower levels of job protection, particularly for part-time workers, as part of the Hartz labour market reforms of the early 2000s.

The index also highlights the comparatively poor levels of economic democracy in the “transition” economies of eastern Europe. The one very interesting exception is Slovenia, which merits further study. It might reflect both its relatively stable transition from communism and the civil war in the former Yugoslavia, and the continuing presence of active civil society elements in the trade union and cooperative movements. Southern European economies also tend to rank below northern European countries, as does Japan.

Poverty and inequality

The index provides strong evidence that xenophobic politics may be linked to changing levels of economic participation and empowerment – notwithstanding the French data. We found that the greater the poverty and inequality in a country, the lower the rates of economic democracy.

These findings suggest, for example, that the Anglo-American-led attack on trade unions and flexible labour policies may actually drive up poverty and inequality by cutting welfare benefits and increasing individual employment insecurity. While the OECD itself advocated these policies until recently, countries with high levels of economic democracy such as Norway, Denmark and Iceland have much lower levels of poverty than countries such as the US and UK.

Far right activists in Budapest, Hungary, February 2016. EPA

Far-right populism is on the march everywhere, including the Nordic countries. But Brexit, Trump and the more serious shift to the far right in Eastern Europe have been accompanied by diminishing economic security and rights at work, disenfranchised trade unions and cooperatives, and economic decision-making concentrated among financial, political and corporate elites.

We will monitor these scores in future to see what happens over time. It will be interesting to see how the correlations between economic democracy, poverty and voting patterns develop in the coming years. For those looking for answers to the crisis in liberal democracy, this may well be it.

ooOOoo

I shall be writing to Professor Cumbers asking whether my analysis of that relationship is supported by his research.

For if it is then we do have a very clear axiom that few would disagree with. That is the political consensus this world needs now.

Oh, and we will be back to dogs tomorrow! 😉

The rights and wrongs of hunting!

The philosophy of hunting in terms of it being ‘right’ or ‘wrong’.

Anyone who comes here for more than a couple of visits will know that both Jean and I are opposed to hunting completely. Period!

That’s not surprising, as there have been a number of posts over the years describing how we feed the wild deer. Here are three more photographs that haven’t previously been shared with you.

But, of course, the opinions of Jean and me are not, and should not be, the rule for the wider population of this part of Oregon.

All I would ask is that there is a proper, mature discussion as to the pros and cons of hunting wild animals in this, the twenty-first century.

All of which leads me to a recent essay posted on The Conversation site and republished here within the terms of that site.

ooOOoo

Is hunting moral? A philosopher unpacks the question

January 4, 2017 8.37pm EST

Three generations of a Wisconsin family with a nine-point buck. Wisconsin Department of Natural Resources/Flickr, CC BY-ND

Every year as daylight dwindles and trees go bare, debates arise over the morality of hunting. Hunters see the act of stalking and killing deer, ducks, moose and other quarry as humane, necessary and natural, and thus as ethical. Critics respond that hunting is a cruel and useless act that one should be ashamed to carry out.

As a nonhunter, I cannot say anything about what it feels like to shoot or trap an animal. But as a student of philosophy and ethics, I think philosophy can help us clarify, systematize and evaluate the arguments on both sides. And a better sense of the arguments can help us talk to people with whom we disagree.

Three rationales for hunting

One central question is why people choose to hunt. Environmental philosopher Gary Varner identifies three types of hunting: therapeutic, subsistence and sport. Each type is distinguished by the purpose it is meant to serve.

Therapeutic hunting involves intentionally killing wild animals in order to conserve another species or an entire ecosystem. In one example, Project Isabella, conservation groups hired marksmen to eradicate thousands of feral goats from several Galapagos islands between 1997 and 2006. The goats were overgrazing the islands, threatening the survival of endangered Galapagos tortoises and other species.

Subsistence hunting is intentionally killing wild animals to supply nourishment and material resources for humans. Agreements that allow Native American tribes to hunt whales are justified, in part, by the subsistence value the animals have for the people who hunt them.

Crawford Patkotak, center, leads a prayer after his crew landed a bowhead whale near Barrow, Alaska. Both revered and hunted by the Inupiat, the bowhead whale serves as a symbol of tradition, as well as a staple of food. AP Photo/Gregory Bull

In contrast, sport hunting refers to intentionally killing wild animals for enjoyment or fulfillment. Hunters who go after deer because they find the experience exhilarating, or because they want antlers to mount on the wall, are sport hunters.

These categories are not mutually exclusive. A hunter who stalks deer because he or she enjoys the experience and wants decorative antlers may also intend to consume the meat, make pants from the hide and help control local deer populations. The distinctions matter because objections to hunting can change depending on the type of hunting.

What bothers people about hunting: Harm, necessity and character

Critics often argue that hunting is immoral because it requires intentionally inflicting harm on innocent creatures. Even people who are not comfortable extending legal rights to beasts should acknowledge that many animals are sentient – that is, they have the capacity to suffer. If it is wrong to inflict unwanted pain and death on a sentient being, then it is wrong to hunt. I call this position “the objection from harm.”

If sound, the objection from harm would require advocates to oppose all three types of hunting, unless it can be shown that greater harm will befall the animal in question if it is not hunted – for example, if it will be doomed to slow winter starvation. Whether a hunter’s goal is a healthy ecosystem, a nutritious dinner or a personally fulfilling experience, the hunted animal experiences the same harm.

But if inflicting unwanted harm is necessarily wrong, then the source of the harm is irrelevant. Logically, anyone who commits to this position should also oppose predation among animals. When a lion kills a gazelle, it causes as much unwanted harm to the gazelle as any hunter would – far more, in fact.

Lions attack a water buffalo in Tanzania. Oliver Dodd/Wikipedia, CC BY

Few people are willing to go this far. Instead, many critics propose what I call the “objection from unnecessary harm”: it is bad when a hunter shoots a lion, but not when a lion mauls a gazelle, because the lion needs to kill to survive.

Today it is hard to argue that human hunting is strictly necessary in the same way that hunting is necessary for animals. The objection from unnecessary harm holds that hunting is morally permissible only if it is necessary for the hunter’s survival. “Necessary” could refer to nutritional or ecological need, which would provide moral cover for subsistence and therapeutic hunting. But sport hunting, almost by definition, cannot be defended this way.

Sport hunting also is vulnerable to another critique that I call “the objection from character.” This argument holds that an act is contemptible not only because of the harm it produces, but because of what it reveals about the actor. Many observers find the derivation of pleasure from hunting to be morally repugnant.

In 2015, American dentist Walter Palmer found this out after his African trophy hunt resulted in the death of Cecil the lion. Killing Cecil did no significant ecological damage, and even without human intervention, only one in eight male lions survives to adulthood. It would seem that disgust with Palmer was at least as much a reaction to the person he was perceived to be – someone who pays money to kill majestic creatures – as to the harm he had done.

The hunters I know don’t put much stock in “the objection from character.” First, they point out that one can kill without having hunted and hunt without having killed. Indeed, some unlucky hunters go season after season without taking an animal. Second, they tell me that when a kill does occur, they feel a somber union with and respect for the natural world, not pleasure. Nonetheless, on some level the sport hunter enjoys the experience, and this is the heart of the objection.

Is hunting natural?

In discussions about the morality of hunting, someone inevitably asserts that hunting is a natural activity since all preindustrial human societies engage in it to some degree, and therefore hunting can’t be immoral. But the concept of naturalness is unhelpful and ultimately irrelevant.

A very old moral idea, dating back to the Stoics of ancient Greece, urges us to strive to live in accordance with nature and do that which is natural. Belief in a connection between goodness and naturalness persists today in our use of the word “natural” to market products and lifestyles – often in highly misleading ways. Things that are natural are supposed to be good for us, but also morally good.

Setting aside the challenge of defining “nature” and “natural,” it is dangerous to assume that a thing is virtuous or morally permissible just because it is natural. HIV, earthquakes, Alzheimer’s disease and post-partum depression are all natural. And as The Onion has satirically noted, behaviors including rape, infanticide and the policy of might-makes-right are all present in the natural world.

Head-Smashed-In Buffalo Jump, a UNESCO World Heritage Site in Alberta, Canada, commemorates a place where indigenous peoples of the North American Plains killed buffalo for more than 6,000 years by driving them over a cliff.

Hard conversations

There are many other moral questions associated with hunting. Does it matter whether hunters use bullets, arrows or snares? Is preserving a cultural tradition enough to justify hunting? And is it possible to oppose hunting while still eating farm-raised meat?

As a starting point, though, if you find yourself having one of these debates, first identify what kind of hunting you’re discussing. If your interlocutor objects to hunting, try to discover the basis for their objection. And I believe you should keep nature out of it.

Finally, try to argue with someone who takes a fundamentally different view. Confirmation bias – the unintentional act of confirming the beliefs we already have – is hard to overcome. The only antidote I know of is rational discourse with people whose confirmation bias runs contrary to my own.

ooOOoo

This is a very important essay from Joshua. Well done, that man!

I will just leave you all with this further image.

Two young stags keeping it together. (Taken here at home in July, 2016.)

Best wishes to each of you; irrespective of your view on hunting!

Brexit – now what happens?

Here’s what is going to happen.

In the run-up to the UK’s EU referendum, this Brit was tempted several times to offer an opinion on what I thought was the best decision. But I resisted. (I was qualified to vote as an overseas voter and had voted for Remain.)

I resisted because it seemed inappropriate to pass any form of opinion before the die had been cast, so to speak. I had not lived in the country for over eight years and, inevitably, was out of touch with feelings on the ground.

The Conversation blogsite yesterday had a series of articles on the aftermath of the Brexit decision, but the one that seemed most useful to share with you all was an article by Gavin Barrett, a Professor of European Constitutional and Economic Law at University College Dublin. For many readers both within and without the UK, including me, this seemed a valuable primer.

ooOOoo

Britain votes to leave the EU, Cameron quits – here’s what happens next

June 23, 2016 11.41pm EDT

Leave ahead. Anthony Devlin / PA Wire


The magic of touch!

At all levels and in so many ways it is life-giving.

Animals must see touch as a natural way of living. We humans are less natural about touch, especially with people we don’t know so well. Not everyone, of course, but as a general statement it is probably not wrong.

The topic of touch has come to me today as a result of a recent item read over on The Conversation blogsite, specifically about the importance of touch between a doctor and his or her patient. Here it is, republished within the terms of The Conversation:

ooOOoo

Touch creates a healing bond in health care

May 23, 2016 8.23pm EDT

Touch is a powerful tool in medicine. Hands via www.shutterstock.com

In contemporary health care, touch – contact between a doctor’s hand and a patient – appears to be on its way out. The expanding role of CT and MRI imaging is decreasing reliance on touch as a way of making diagnoses. Pressures to move patients through the system more quickly leave health professionals with fewer opportunities to make contact. Our experience suggests that when doctors spend fewer minutes with patients, less time is available for touch.

Yet despite the rise of scanners, robots and other new medical technologies, the physician’s hand remains one of medicine’s most valuable diagnostic tools. Touch creates a human bond that is particularly needed in this increasingly hands-off, impersonal age. Medical practice is replete with situations where touch does more than any words to comfort and reassure.

The USC psychologist Leo Buscaglia, whose habit of hugging those he met soon earned him the sobriquet “Doctor Love,” bemoaned our neglect of touch in his book, “Love,” in these terms:

Too often we underestimate the power of a touch, a smile, a kind word, a listening ear, an honest compliment, or the smallest act of caring, all of which have the potential to turn a life around.

For thousands of years, touch has been recognized as an essential part of the healing arts. Native American healers relied on touch to draw out sickness, and kings and queens were long believed to possess the “Royal Touch,” through which the mere laying on of hands could heal. The Bible contains numerous stories of the healing power of touch.

Touch is an essential part of our well-being

An indication of our need for touch can be found among our primate relatives. Psychologists have observed that many such species spend upwards of five hours of each day touching one another, partly through grooming. For many human beings, however, the daily dose of touching would be measured not in hours but minutes, perhaps even seconds.

Lack of touch can be hazardous to health. In experiments with primates some 60 years ago, researcher Harry Harlow demonstrated that young monkeys deprived of touch did not grow and develop normally. Mere food, water and shelter are not sufficient – to thrive, such creatures need to touch and be touched.

A young mother participates in a ‘Kangaroo Mother’ program at the National Maternity Hospital in El Salvador. Luis Galdamez/Reuters

The same can be said for human beings. During the 20th century, wars landed many babies in orphanages, where their caretakers observed that no matter how well the infants were fed, they would fail to thrive unless they were held and cuddled on a frequent basis. Touch offers no vitamins or calories, yet it plays a vital role in sustaining life.

More recent studies have corroborated these findings. “Kangaroo care,” using papoose-like garments to keep babies close to their mothers, decreases the rate at which they develop blood infections. Touching also improves weight gain and decreases the amount of time that newborns need to remain in the hospital.

Touch creates a bond between doctor and patient

Novelist and physician Abraham Verghese has argued that touching is one of the most important features of the patient-physician interaction. When he examines a patient, he is not merely collecting information with which to formulate a diagnosis, but also establishing a bond that provides comfort and reassurance.

The notion that touch can reassure and comfort has a scientific basis. Ten years ago researchers used MRI scans to look at the brains of women undergoing painful stimuli. When subjects experience pain, certain areas of the brain tend to “light up.” The researchers studied subjects when they were alone, when they were holding a stranger’s hand, and when they were holding their husband’s hand.

They found the highest levels of pain activation when the women were alone. When they were holding a stranger’s hand, the pain response was decreased. And levels of activation were lowest of all when they were holding their husband’s hand. Interestingly, the higher the quality of subjects’ marriages, the more pain responses were blunted.

Touch from parents helps kids in intensive care

We have been studying this phenomenon in our own institution, looking at the effect of touch not only on patients but on the parents of patients admitted to the pediatric intensive care unit.

The project, called ROSE (Reach Out, Soothe, and Embrace), sought to determine whether increasing opportunities to touch patients could promote parent well-being without compromising patient safety.

Instead of merely determining whether patients could be taken off the ventilator or fed, we also identified patients who could be safely touched and even held in their parents’ arms. When a patient was deemed safe to hold, a magnet bearing the image of a red rose embraced by two hands was placed on the door to the patient’s room.

While we are still analyzing the results and further study is needed to fully delineate the health benefits of touch, several findings are already clear.

First, increasing opportunities for touch does not compromise patient safety. Second, the subjective well-being of family members is enhanced when touching is encouraged. Third, promoting touch empowers family members to become more involved in their child’s care.

To be sure, inappropriate and unsafe touching can be harmful. But when touch is encouraged in the right ways and for the right reasons, it is good for patients, family, friends and health professionals alike. Touch is one of the most fundamental and effective ways to create a sense of connection and community among human beings.

In the words of the 20th-century theologian Henri Nouwen, who wrote in his book, “Out of Solitude”:

When we honestly ask ourselves which person in our lives means the most to us, we often find that it is those who, instead of giving advice, solutions, or cures, have chosen rather to share our pain and touch our wounds with a warm and tender hand.

So next time you find yourself confronted by a person in distress, remember the power of touch. Medicines and words both have healing power, but so does touch, and it is perhaps the most widely available, financially responsible and safest tool in the healing arts. When we touch, we connect, and when we connect, we create a healing bond for which there is simply no substitute.

ooOOoo

“When we touch, we connect, and when we connect, we create a healing bond for which there is simply no substitute.”

Jean with my mother back in July, 2014.

The healing touch!

Or to repeat the elegant words of Leo Buscaglia:

Too often we underestimate the power of a touch, a smile, a kind word, a listening ear, an honest compliment, or the smallest act of caring, all of which have the potential to turn a life around.

Who have you given a hug to today?

Reflections on the internet.

An interesting item that recently crossed my ‘screen’.

I make no apologies for cutting corners in today’s post, because the last few days of looking after, and worrying about, Hazel have soaked up so much of our time and energy that I just couldn’t find the creative impulse to do much more than ‘copy and paste’.

That’s not to downplay the great interest of this article that appeared over on The Conversation blogsite a few days ago.

ooOOoo

Why the Internet isn’t making us smarter – and how to fight back

April 15, 2016 5.58am EDT

Professor of Psychology, University of Michigan

Disclosure statement: David Dunning has received funding from the National Science Foundation, the National Institutes of Health, and the Templeton Foundation in the past.

In the hours since I first sat down to write this piece, my laptop tells me the National Basketball Association has had to deny that it threatened to cancel its 2017 All-Star Game over a new anti-LGBT law in North Carolina – a story repeated by many news sources including the Associated Press. The authenticity of that viral video of a bear chasing a female snowboarder in Japan has been called into question. And, no, Ted Cruz is not married to his third cousin. It’s just one among an onslaught of half-truths and even pants-on-fire lies coming as we rev up for the 2016 American election season.

The longer I study human psychology, the more impressed I am with the rich tapestry of knowledge each of us owns. We each have a brainy weave of facts, figures, rules and stories that allows us to address an astonishing range of everyday challenges. Contemporary research celebrates just how vast, organized, interconnected and durable that knowledge base is.

That’s the good news. The bad news is that our brains overdo it. Not only do they store helpful and essential information, they are also receptive to false belief and misinformation.

Just in biology alone, many people believe that spinach is a good source of iron (sorry, Popeye), that we use less than 10 percent of our brains (no, it’s too energy-guzzling to allow that), and that some people suffer hypersensitivity to electromagnetic radiation (for which there is no scientific evidence).

But here’s the more concerning news. Our access to information, both good and bad, has only increased as our fingertips have gotten into the act. With computer keyboards and smartphones, we now have access to an Internet containing a vast store of information much bigger than any individual brain can carry – and that’s not always a good thing.

Better access doesn’t mean better information

This access to the Internet’s far reaches should permit us to be smarter and better informed. People certainly assume it. A recent Yale study showed that Internet access causes people to hold inflated, illusory impressions of just how smart and well-informed they are.

But there’s a twofold problem with the Internet that compromises its limitless promise.

First, just like our brains, it is receptive to misinformation. In fact, the World Economic Forum lists “massive digital misinformation” as a main threat to society. A survey of 50 “weight loss” websites found that only three provided sound diet advice. Another survey, of roughly 150 YouTube videos about vaccination, found that only half explicitly supported the procedure.

Rumor-mongers, politicians, vested interests, a sensationalizing media and people with intellectual axes to grind all inject false information into the Internet.

So do a lot of well-intentioned but misinformed people. In fact, a study published in the January 2016 Proceedings of the National Academy of Sciences documented just how quickly dubious conspiracy theories spread across the Internet. Specifically, the researchers compared how quickly these rumors spread across Facebook relative to stories on scientific discoveries. Both conspiracy theories and scientific news spread quickly, with the majority of diffusion via Facebook for both types of stories happening within a day.

Making matters worse, misinformation is hard to distinguish from accurate fact. It often has the exact look and feel of the truth. In a series of studies Elanor Williams, Justin Kruger and I published in the Journal of Personality and Social Psychology in 2013, we asked students to solve problems in intuitive physics, logic and finance. Those who consistently relied on false facts or principles – and thus gave the exact same wrong answer to every problem – expressed just as much confidence in their conclusions as those who answered every single problem right.

For example, those who always thought a ball would continue to follow a curved path after rolling out of a bent tube (not true) were virtually as certain as people who knew the right answer (the ball follows a straight path).

Defend yourself

So, how do we separate Internet truth from falsehood?

First, don’t assume misinformation is obviously distinguishable from true information. Be careful. If the matter is important, perhaps you can start your search with the Internet; just don’t end there. Consult and consider other sources of authority. There is a reason why your doctor suffered through medical school, why your financial advisor studied to gain that license.

Second, don’t do what conspiracy theorists did in the Facebook study. They readily spread stories that already fit their worldview. As such, they practiced confirmation bias, giving credence to evidence supporting what they already believed. As a consequence, the conspiracy theories they endorsed burrowed themselves into like-minded Facebook communities who rarely questioned their authenticity.

Instead, be a skeptic. Psychological research shows that groups that designate one or two of their members to play devil’s advocate – questioning whatever conclusion the group is leaning toward – make better-reasoned decisions.

If no one else is around, it pays to be your own devil’s advocate. Don’t just believe what the Internet has to say; question it. Practice a disconfirmation bias. If you’re looking up medical information about a health problem, don’t stop at the first diagnosis that looks right. Search for alternative possibilities.

Seeking evidence to the contrary

In addition, look for ways in which that diagnosis might be wrong. Research shows that “considering the opposite” – actively asking how a conclusion might be wrong – is a valuable exercise for reducing unwarranted faith in a conclusion.

After all, you should listen to Mark Twain, who, according to a dozen different websites, warned us, “Be careful about reading health books. You may die of a misprint.”

Wise words, except a little more investigation reveals more detailed and researched sources with evidence that it wasn’t Mark Twain, but German physician Markus Herz who said them. I’m not surprised; in my Internet experience, I’ve learned to be wary of Twain quotes (Will Rogers, too). He was a brilliant wit, but he gets much too much credit for quotable quips.

Misinformation and true information often look awfully alike. The key to an informed life may not require gathering information as much as it does challenging the ideas you already have or have recently encountered. This may be an unpleasant task, and an unending one, but it is the best way to ensure that your brainy intellectual tapestry sports only true colors.

ooOOoo

The way the world now communicates, for good and bad, using the internet is staggering. As the website Internet Live Stats reveals (as of this moment):

3,352,197,085 Internet Users in the world

1,016,623,500 Total number of Websites

2,060,120 Blog posts written today

So with that last figure in mind, I’ll send this for posting without delay! 😉

Hugging trees this weekend.

“The Spring is sprung, the grass is riz!”

Our delivery of trees arrived yesterday from the Arbor Day Foundation and that means that much of today will be spent in getting those trees planted.

Plus the recent wet spell has stopped me taking that first cut of the grass from around the house. So there’s another task for this relatively decent weekend coming up. And the vegetable garden needs some attention. And so on!

All of which is my way of saying that I won’t be paying my normal level of attention to Learning from Dogs for the next few days.

Rather aptly comes this item that was recently published over on The Conversation and is republished here within their kind terms.

ooOOoo

Hug a tree – the evidence shows it really will make you feel better

March 18, 2014.

The arrow of time.

Everything, eventually, falls into decay.

What is deeply fascinating, at a number of levels, is how time only goes one way. At every single level of our experience, from the scale of the universe down to the tiniest particle known to science, it all flows forward. The arrow of time!

I was reminded of this interesting question of time in a book published by a local Oregon author, John Taylor: Our Curious UNIVERSE. (The book is not available online; otherwise I would have linked to it.)

It got me thinking of age. How we are all aging. How there is nothing that we can do to stop it. How the only thing we can do is to change our relationship with age. That then reminded me of an item that was published on The Conversation site a week ago that I wanted to share with you – share within the terms provided by The Conversation. The article was called It’s time to measure 21st century aging with 21st century tools.

ooOOoo

It’s time to measure 21st century aging with 21st century tools

March 4, 2016

Disclosure statement

The research was conducted in the framework of the European Research Council ERC-2012-AdG 323947-Re-Ageing

Sergei Scherbov receives funding from the European Research Council ERC-2012-AdG 323947-Re-Ageing

oooo

The populations of most countries of the world are aging, prompting a deluge of news stories about slower economic growth, reduced labor force participation, looming pension crises, exploding health care costs and the reduced productivity and cognitive functioning of the elderly.

These stories are dire, in part because the most widely used measure of aging – the old-age dependency ratio, which measures the number of older dependents relative to working-age people – was developed a century ago and implies the consequences of aging will be much worse than they are likely to be. On top of that, this ratio is used in political and economic discussions of topics such as health care costs and the pension burden – things it was not designed to address.

Turning 65 in 2016 doesn’t mean the same thing as hitting 65 in 1916. So instead of relying on the old-age dependency ratio to figure out the impact of aging, we propose using a series of new measures that take changes in life expectancy, labor participation and health spending into account. Viewed through these new measures, the picture looks a lot brighter.

How facts from the census questionnaire were tabulated into statistics in 1950. The U.S. National Archives/Flickr

Our tools to measure aging have aged

The most commonly used measure of population aging is the “old-age dependency ratio,” which is the ratio of the number of people 65 years or older to those 20 to 64.

But, since the old-age dependency ratio was introduced in the early 1900s, most countries have experienced a century of rising life expectancy, and further increases are anticipated.

For instance, in 1914, life expectancy at birth in Sweden was 58.2 years (average for both sexes). By 2014, it had risen to 82.2 years. In 1935, when the U.S. Social Security Act was signed into law, 65-year-olds were expected to live 12.7 more years, on average. In 2013, 65-year-olds could expect to live 19.5 more years.

But these changes aren’t reflected in the conventional statistics on aging. Nor is the fact that many people don’t just stop working when they turn 65, and that people are staying healthier for longer.

To get a better sense of what population aging really means today, we decided to develop a new set of measures that take these new realities into account to replace the old-age dependency ratio. And instead of one ratio, we created several ratios to evaluate health care costs, labor force participation and pensions.

Who retires at 65 anymore?

One of these new realities is that the number of people working into their late 60s and beyond is going up. In 1994, 26.8 percent of American men aged 65-69 participated in the labor force. That figure climbed to 36.1 percent in 2014 and is forecast to reach 40 percent by 2024. And the trend is similar for even older men, with 17 percent of those aged 75-79 expected to still be working in a decade, up from just 10 percent in 1994.

Clearly, these older people did not get the message that they were supposed to become old-age dependents when they turned 65.

Depot Supervisor Eric Headley, 74, takes a call on his mobile phone while at work for Pimlico Plumbers in London July 29, 2010. Britain announced plans to scrap the fixed retirement age next year, saying it wanted to give people the chance to work beyond 65, but business leaders warned the move would create serious problems. REUTERS/Suzanne Plunkett.

This isn’t unique to the U.S. Rates like these in many countries have been rising. In the U.K., for instance, the labor force participation rate of 65- to 69-year-old men was 24.2 percent in 2014, and in Israel it was 50.2 percent, up from 14.8 percent and 27.4 percent, respectively, in 2000. In part this is because older people now often have better cognitive functioning than their counterparts who were born a decade earlier.

So, instead of assuming that people work only from ages 20 to 64 and become old-age dependents when they hit 65, we have computed “economic dependency ratios” that take into account observations and forecasts of labor force participation rates. This tells us how many adults not in the labor force there are for every adult in the labor force, giving us a more accurate picture than using 65 as a cutoff point. We used forecasts produced by the International Labour Organization to figure this out.
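The difference between the two ratios can be sketched in a few lines of code. This is only a minimal illustration in Python with invented population figures; the published measure is built from International Labour Organization observations and forecasts, not from numbers like these.

```python
# Hypothetical sketch of the two ratios discussed above.
# All population figures in the usage example are invented.

def old_age_dependency_ratio(pop_65_plus, pop_20_64):
    """Conventional measure: people aged 65+ per person aged 20-64."""
    return pop_65_plus / pop_20_64

def economic_dependency_ratio(adults_not_in_labor_force, adults_in_labor_force):
    """Adults outside the labor force per adult in the labor force,
    regardless of their age."""
    return adults_not_in_labor_force / adults_in_labor_force

# Invented figures (millions): the conventional measure counts every
# person 65+ as dependent; the economic measure counts actual status.
conventional = old_age_dependency_ratio(56.0, 190.0)
economic = economic_dependency_ratio(95.0, 160.0)
print(round(conventional, 3), round(economic, 3))
```

The point of the second function is that a 68-year-old plumber counts as a worker and a 45-year-old outside the labor force counts as a dependent, which the age-based cutoff gets exactly backwards.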

The old-age dependency ratio in the U.S. is forecast to increase by 61 percent from 2013 to 2030. But using our economic dependency ratio, the ratio of adults not in the labor force to adults in the labor force increases by just 3 percent over that period.

Clearly, doom and gloom stories about U.S. workers having to support so many more nonworkers in the future may need to be reconsidered.

Is the health care burden going to be so high?

Another reality is that while health care costs will go up with an older population, they won’t rise as much as traditional forecasts estimate.

Instead of assuming that health care costs rise dramatically on people’s 65th birthdays, as the old-age dependency ratio implicitly does, we have produced an indicator that takes into account the fact that most of the health care costs of the elderly are incurred in their last few years of life. Increasing life expectancy means those final few years happen at ever later ages.

In Japan, for example, when the burden of the health care costs of people aged 65 and up on those 20-64 years old is assessed using only the conventional old-age dependency ratio, that burden is forecast to increase 32 percent from 2013 to 2030. When we compute health care costs based on whether people are in the last few years of their lives, the burden increases only 14 percent.
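A toy calculation can make the contrast concrete. The sketch below uses invented cost and population numbers, and collapses cohort-specific life expectancy into a single assumed age at death, so it is only a rough illustration of the accounting rule, not the published method.

```python
# Hedged sketch: health care burden under two accounting rules.
# Costs and populations are invented; `expected_age_at_death` is a
# simplification of cohort-specific life expectancy.

def burden_by_age_cutoff(pop_by_age, cutoff=65, cost_high=10.0, cost_low=2.0):
    """Conventional rule: everyone at or above `cutoff` incurs high costs."""
    return sum((cost_high if age >= cutoff else cost_low) * n
               for age, n in pop_by_age.items())

def burden_by_final_years(pop_by_age, expected_age_at_death,
                          final_years=3, cost_high=10.0, cost_low=2.0):
    """Alternative rule: high costs only for people within `final_years`
    of the assumed age at death."""
    return sum((cost_high if expected_age_at_death - age <= final_years
                else cost_low) * n
               for age, n in pop_by_age.items())

pop = {60: 5.0, 65: 4.0, 70: 3.0, 75: 2.0, 80: 1.0}  # invented millions
print(burden_by_age_cutoff(pop))
print(burden_by_final_years(pop, expected_age_at_death=82))
```

Under the second rule, raising the assumed age at death automatically pushes the expensive years later, which is why the forecast burden grows more slowly.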

Pension ages are going up

The last reality we considered concerns pensions.

In most OECD countries, the age at which someone can begin collecting a full public pension is rising. In a number of countries, such as Sweden, Norway and Italy, pension payouts are now explicitly linked to life expectancy.

In Germany, the full pension age will rise from 65 to 67 in 2029. In the U.S., it used to be 65, is now 66 and will soon rise to 67.

Instead of assuming that everyone receives a full public pension at age 65, which is what the old-age dependency ratio implicitly does, we have computed a more realistic ratio, called the pension cost dependency ratio, that incorporates a general relationship between increases in life expectancy and the pension age. The pension cost dependency ratio shows how fast the burden of paying public pensions is likely to grow.

For instance, in Germany, the old-age dependency ratio is forecast to rise by 49 percent from 2013 to 2030, but 65-year-old Germans will not be eligible for a full pension in 2030. Once planned increases in the full pension age are taken into account, our pension cost dependency ratio rises by just 26 percent over the same period. In other words, younger Germans will not have to pay 49 percent more to support pensioners in 2030 than they did in 2013; the increase is closer to 26 percent.
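The mechanics can be illustrated with a simple sketch. The population figures below are invented, and the published ratio links the pension age to life expectancy more carefully than a hand-picked cutoff, so this only shows why raising the pension age shrinks the ratio.

```python
# Hedged sketch: a pension cost dependency ratio in which the full
# pension age can rise over time instead of staying fixed at 65.
# Population figures are invented for demonstration.

def pension_cost_dependency_ratio(pop_by_age, pension_age):
    """People at or above the pension age per person aged 20 up to it."""
    pensioners = sum(n for age, n in pop_by_age.items() if age >= pension_age)
    workers = sum(n for age, n in pop_by_age.items() if 20 <= age < pension_age)
    return pensioners / workers

pop = {25: 10.0, 45: 10.0, 65: 4.0, 66: 1.0, 70: 3.0}  # invented millions
print(pension_cost_dependency_ratio(pop, pension_age=65))  # fixed age 65
print(pension_cost_dependency_ratio(pop, pension_age=67))  # raised pension age
```

Raising the pension age moves people from the numerator to the denominator at once, which is why the burden grows more slowly than the conventional ratio suggests.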

Pranom Chartyothin, a 72-year-old bus conductor, sells and collects bus tickets in downtown Bangkok, Thailand, February 3, 2016. Such scenes will only become more common in Thailand as its population rapidly ages, unlike its neighbours with more youthful populations. The World Bank estimates the working-age population will shrink by 11 percent by 2040, the fastest contraction among Southeast Asia’s developing countries. Thailand’s stage of economic development, the rising cost of living and education, and a population waiting longer to get married are among the reasons it is ageing more quickly than its neighbours. An effective contraception programme in the 1970s also played a part, said Sutayut Osornprasop, a human development specialist at the World Bank in Thailand. REUTERS/Jorge Silva.

Sixty-five just isn’t that old anymore

In addition to this suite of measures focused on particular aspects of population aging, it is also useful to have a general measure. We call ours the prospective old-age dependency ratio.

People do not suddenly become old-age dependents on their 65th birthdays. From a population perspective, it makes more sense to classify people as being old when they are getting near the end of their lives. Failing to adjust who is categorized as old based on the changing characteristics of people and their longevity can make aging seem faster than it will be.

In our prospective old-age dependency ratio, we define people as old when they are in age groups where the remaining life expectancy is 15 years or less. As life expectancy increases, this threshold of old age increases.

In the U.K., for instance, the conventional old-age dependency ratio is forecast to increase by 33 percent by 2030. But when we allow the old-age threshold to change with increasing life expectancy, the resulting ratio increases by just 13 percent.
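Given a life table, the moving threshold itself is easy to compute. A minimal sketch follows; the remaining-life-expectancy figures are invented for demonstration, not taken from any real life table.

```python
# Illustrative sketch: the "prospective" old-age threshold is the
# youngest age at which remaining life expectancy is 15 years or less.
# The life table values below are invented.

def prospective_old_age_threshold(remaining_life_expectancy, limit=15):
    """Return the youngest tabulated age whose remaining life
    expectancy is at or below `limit`, or None if no age qualifies."""
    for age in sorted(remaining_life_expectancy):
        if remaining_life_expectancy[age] <= limit:
            return age
    return None

e_x = {60: 24.0, 65: 20.1, 70: 16.2, 75: 12.9, 80: 9.8}  # invented
print(prospective_old_age_threshold(e_x))
```

As life expectancy rises, every entry in the table grows, the threshold moves to an older age, and fewer people are classified as old for any given population.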

Populations are aging in many countries, but the conventional old-age dependency ratio makes the impact seem worse than it will be. Fortunately, better measures that do not exaggerate the effects of aging are now just a click away.

ooOOoo

Yes, we live in interesting times!