Category: Writing

Found on Easter Island

Amazing what science can find out.

But while the science is brilliant the social implications are not so good. Read on!

ooOOoo

A billion-dollar drug was found in Easter Island soil – what scientists and companies owe the Indigenous people they studied

The Rapa Nui people are mostly invisible in the origin story of rapamycin. Posnov/Moment via Getty Images

Ted Powers, University of California, Davis

An antibiotic discovered on Easter Island in 1964 sparked a billion-dollar pharmaceutical success story. Yet the history told about this “miracle drug” has completely left out the people and politics that made its discovery possible.

Named after the island’s Indigenous name, Rapa Nui, the drug rapamycin was initially developed as an immunosuppressant to prevent organ transplant rejection and to improve the efficacy of stents to treat coronary artery disease. Its use has since expanded to treat various types of cancer, and researchers are currently exploring its potential to treat diabetes, neurodegenerative diseases and even aging. Indeed, studies highlighting rapamycin’s promise to extend lifespan or combat age-related diseases seem to be published almost daily. A PubMed search reveals over 59,000 journal articles that mention rapamycin, making it one of the most talked-about drugs in medicine.

Connected hexagonal structures
Chemical structure of rapamycin. Fvasconcellos/Wikimedia Commons

At the heart of rapamycin’s power lies its ability to inhibit a protein called the target of rapamycin kinase, or TOR. This protein acts as a master regulator of cell growth and metabolism. Together with other partner proteins, TOR controls how cells respond to nutrients, stress and environmental signals, thereby influencing major processes such as protein synthesis and immune function. Given its central role in these fundamental cellular activities, it is not surprising that cancer, metabolic disorders and age-related diseases are linked to the malfunction of TOR.

Despite being so ubiquitous in science and medicine, how rapamycin was discovered has remained largely unknown to the public. Many in the field are aware that scientists from the pharmaceutical company Ayerst Research Laboratories isolated the molecule from a soil sample containing the bacterium Streptomyces hygroscopicus in the mid-1970s. What is less well known is that this soil sample was collected as part of a Canadian-led mission to Rapa Nui in 1964, called the Medical Expedition to Easter Island, or METEI.

As a scientist who built my career around the effects of rapamycin on cells, I felt compelled to understand and share the human story underlying its origin. Learning about historian Jacalyn Duffin’s work on METEI completely changed how I and many of my colleagues view our own field.

Unearthing rapamycin’s complex legacy raises important questions about systemic bias in biomedical research and what pharmaceutical companies owe to the Indigenous lands from which they mine their blockbuster discoveries.

History of METEI

The Medical Expedition to Easter Island was the brainchild of a Canadian team made up of surgeon Stanley Skoryna and bacteriologist Georges Nogrady. Their goal was to study how an isolated population adapted to environmental stress, and they believed the planned construction of an international airport on Easter Island offered a unique opportunity. They presumed that the airport would bring increased outside contact with the island’s population, resulting in changes in their health and wellness.

With funding from the World Health Organization and logistical support from the Royal Canadian Navy, METEI arrived in Rapa Nui in December 1964. Over the course of three months, the team conducted medical examinations on nearly all 1,000 island inhabitants, collecting biological samples and systematically surveying the island’s flora and fauna.

It was as part of these efforts that Nogrady gathered over 200 soil samples, one of which ended up containing the rapamycin-producing Streptomyces strain of bacteria.

It’s important to realize that the expedition’s primary objective was to study the Rapa Nui people as a sort of living laboratory. The team encouraged participation through bribery, by offering gifts, food and supplies, and through coercion, by enlisting a long-serving Franciscan priest on the island to aid in recruitment. While the researchers’ intentions may have been honorable, the expedition is nevertheless an example of scientific colonialism, where a team of white investigators chooses to study a group of predominantly nonwhite subjects without their input, resulting in a power imbalance.

There was an inherent bias in the inception of METEI. For one, the researchers assumed the Rapa Nui had been relatively isolated from the rest of the world, when in fact there was a long history of contact with outsiders, documented in reports from the early 1700s through the late 1800s.

METEI also assumed that the Rapa Nui were genetically homogeneous, ignoring the island’s complex history of migration, slavery and disease. For example, the modern population of Rapa Nui is of mixed descent, with both Polynesian and South American ancestors. The population also included survivors of the Peruvian slave trade who were returned to the island and brought with them diseases, including smallpox.

This miscalculation undermined one of METEI’s key research goals: to assess how genetics affect disease risk. While the team published a number of studies describing the different fauna associated with the Rapa Nui, its inability to establish a baseline is likely one reason there was no follow-up study after the airport on Easter Island was completed in 1967.

Giving credit where it is due

Omissions in the origin stories of rapamycin reflect common ethical blind spots in how scientific discoveries are remembered.

Georges Nogrady carried soil samples back from Rapa Nui, one of which eventually reached Ayerst Research Laboratories. There, Surendra Sehgal and his team isolated what was named rapamycin, ultimately bringing it to market in the late 1990s as the immunosuppressant Rapamune. While Sehgal’s persistence was key in keeping the project alive through corporate upheavals – he went so far as to stash a culture at home – neither Nogrady nor METEI was ever credited in his landmark publications.

Although rapamycin has generated billions of dollars in revenue, the Rapa Nui people have received no financial benefit to date. This raises questions about Indigenous rights and biopiracy, which is the commercialization of Indigenous knowledge.

Agreements like the United Nations’ 1992 Convention on Biological Diversity and the 2007 Declaration on the Rights of Indigenous Peoples aim to protect Indigenous claims to biological resources by encouraging countries to obtain consent and input from Indigenous people and provide redress for potential harms before starting projects. However, these principles were not in place during METEI’s time.

Close-up headshots of row of people wearing floral headdresses in a dim room
The Rapa Nui have received little to no acknowledgment for their role in the discovery of rapamycin. Esteban Felix/AP Photo

Some argue that because the bacterium that produces rapamycin has since been found in other locations, Easter Island’s soil was not uniquely essential to the drug’s discovery. Moreover, because the islanders did not use rapamycin or even know about its presence on the island, some have countered that it is not a resource that can be “stolen.”

However, the discovery of rapamycin on Rapa Nui set the foundation for all subsequent research and commercialization around the molecule, and this only happened because the people were the subjects of study. Formally recognizing and educating the public about the essential role the Rapa Nui played in the eventual discovery of rapamycin is key to compensating them for their contributions.

In recent years, the broader pharmaceutical industry has begun to recognize the importance of fair compensation for Indigenous contributions. Some companies have pledged to reinvest in communities where valuable natural products are sourced. However, for the Rapa Nui, pharmaceutical companies that have directly profited from rapamycin have not yet made such an acknowledgment.

Ultimately, METEI is a story of both scientific triumph and social ambiguities. While the discovery of rapamycin has transformed medicine, the expedition’s impact on the Rapa Nui people is more complicated. I believe issues of biomedical consent, scientific colonialism and overlooked contributions highlight the need for a more critical examination and awareness of the legacy of breakthrough scientific discoveries.

Ted Powers, Professor of Molecular and Cellular Biology, University of California, Davis

This article is republished from The Conversation under a Creative Commons license. Read the original article.

ooOOoo

Ted Powers explains in the last paragraph: “Ultimately, METEI is a story of both scientific triumph and social ambiguities.” He then goes on to say: “I believe issues of biomedical consent, scientific colonialism and overlooked contributions highlight the need for a more critical examination and awareness of the legacy of breakthrough scientific discoveries.”

If only it were that simple!

Another lucky aspect of living in Oregon

We have not lost our wolves.

Here is a partial list of the wolf situation in Oregon:

  • Return & Recovery: Wolves reappeared in Oregon around 2008, descendants of wolves reintroduced in Idaho, growing to many packs across the state.
  • Management: The Oregon Department of Fish and Wildlife (ODFW) manages wolves under the Oregon Wolf Conservation and Management Plan.
  • Zones: Management differs between eastern and western Oregon, with federal listing status changing, affecting management authority.
  • Conservation Efforts: Organizations like Oregon Wild advocate for strong wolf protections, habitat connectivity, and non-lethal conflict deterrence.

However, in eastern North America things are not so good, as this article from The Conversation explains:

ooOOoo

With wolves absent from most of eastern North America, can coyotes replace them?

Coyotes have expanded across the United States. Davis Huber/500px via Getty Images

Alex Jensen, North Carolina State University

Imagine a healthy forest, home to a variety of species: Birds are flitting between tree branches, salamanders are sliding through leaf litter, and wolves are tracking the scent of deer through the understory. Each of these animals has a role in the forest, and most ecologists would argue that losing any one of these species would be bad for the ecosystem as a whole.

Unfortunately – whether due to habitat loss, overhunting or introduced species – humans have made some species disappear. At the same time, other species have adapted to us and spread more widely.

As an ecologist, I’m curious about what these changes mean for ecosystems – can these newly arrived species functionally replace the species that used to be there? I studied this process in eastern North America, where some top predators have disappeared and a new predator has arrived.

A primer on predators

Wolves used to roam across every state east of the Mississippi River. But as the land was developed, many people viewed wolves as threats and wiped most of them out. These days, a mix of gray wolves and eastern wolves persist in Canada and around the Great Lakes, which I collectively refer to as northeastern wolves. There’s also a small population of red wolves – a distinct and smaller species of wolf – on the coast of North Carolina.

The disappearance of wolves may have given coyotes the opportunity they needed. Starting around 1900, coyotes began expanding their range east and have now colonized nearly all of eastern North America.

A map of central to eastern North America. Parts of southern Canada are marked as 'current northeast wolf range,' the northeast US is marked 'current coyote and historical wolf range,' the rest of the southern and eastern US is marked 'red wolf range' and to the west is marked 'coyote range ~1900.'
Coyotes colonized most of eastern North America in the wake of wolf extirpation. Jensen 2025, CC BY

So are coyotes the new wolf? Can they fill the same ecological role that wolves used to? These are the questions I set out to answer in my paper published in August 2025 in the Stacks Journal. I focused on their role as predators – what they eat and how often they kill big herbivores, such as deer and moose.

What’s on the menu?

I started by reviewing every paper I could find on wolf or coyote diets, recording what percent of scat or stomach samples contained common food items such as deer, rabbits, small rodents or fruit. I compared northeastern wolf diets to northeastern coyote diets and red wolf diets to southeastern coyote diets.

I found two striking differences between wolf and coyote diets. First, wolves ate more medium-sized herbivores. In particular, they ate more beavers in the northeast and more nutria in the southeast. Both of these species are large aquatic rodents that influence ecosystems – beaver dam building changes how water moves, sometimes undesirably for land owners, while nutria are non-native and damaging to wetlands.

Second, wolves have narrower diets overall. They eat less fruit and fewer omnivores such as birds, raccoons and foxes, compared to coyotes. This means that coyotes are likely performing some ecological roles that wolves never did, such as dispersing fruit seeds in their poop and suppressing populations of smaller predators.

A diagram showing the diets of wolves and coyotes
Grouping food items by size and trophic level revealed some clear differences between wolf and coyote diets. Percents are the percent of samples containing each level, and stars indicate a statistically significant difference. Alex Jensen, CC BY

Killing deer and moose

But diet studies alone cannot tell the whole story – it’s usually impossible to tell whether coyotes killed or scavenged the deer they ate, for example. So I also reviewed every study I could find on ungulate mortality – these are studies that tag deer or moose, track their survival, and attribute a cause of death if they die.

These studies revealed other important differences between wolves and coyotes. For example, wolves were responsible for a substantial percentage of moose deaths – 19% of adults and 40% of calves – while none of the studies documented coyotes killing moose. This means that all, or nearly all, of the moose meat in coyote diets was scavenged.

Coyotes are adept predators of deer, however. In the northeast, they killed more white-tailed deer fawns than wolves did, 28% compared to 15%, and a similar percentage of adult deer, 18% compared to 22%. In the southeast, coyotes killed 40% of fawns but only 6% of adults.

Rarely killing adult deer in the southeast could have implications for other members of the ecological community. For example, after killing an adult ungulate, many large predators leave some of the carcass behind, which can be an important source of food for scavengers. Although there is no data on how often red wolves kill adult deer, it is likely that coyotes are not supplying food to scavengers to the same extent that red wolves do.

Two wolves walking through the grass. One is sniffing a dead deer on the ground.
Wolves and coyotes both kill a substantial proportion of deer, but they focus on different age classes. imageBROKER/Raimund Linke via Getty Images

Are coyotes the new wolves?

So what does this all mean? It means that although coyotes eat some of the same foods, they cannot fully replace wolves. Differences between wolves and coyotes were particularly pronounced in the northeast, where coyotes rarely killed moose or beavers. Coyotes in the southeast were more similar to red wolves, but coyotes likely killed fewer nutria and adult deer.

The return of wolves could be a natural solution for regions where wildlife managers desire a reduction in moose, beaver, nutria or deer populations.

Yet even with the aid of reintroductions, wolves will likely never fully recover their former range in eastern North America – there are too many people. Coyotes, on the other hand, do quite well around people. So even if wolves never fully recover, at least coyotes will be in those places partially filling the role that wolves once had.

Indeed, humans have changed the world so much that it may be impossible to return to the way things were before people substantially changed the planet. While some restoration will certainly be possible, researchers can continue to evaluate the extent to which new species can functionally replace missing species.

Alex Jensen, Postdoctoral Associate – Wildlife Ecology, North Carolina State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

ooOOoo

So there is a big difference between the Eastern seaboard and the Western States of the USA. We live in the forested part of Southern Oregon but I have never seen a wolf despite Alex Jensen writing that they inhabit this area.

The wolf is a magnificent animal, the forerunner of the dog. I would love to see a wolf!

Picking a fight ….

…. with a mathematical function!

This is another republication of a George Monbiot post. The title of his post is Total Futility Rate.

It is another great article!

ooOOoo

Total Futility Rate

Posted on 15th December 2025

Let’s focus our campaigning on things we can actually change.

By George Monbiot, published as a BlueSky thread, 15th December 2025

Because the issue of population change is so widely misunderstood, I’ll seek to lay it out simply. This note explains why there is almost nothing anyone can do to change the global population trajectory, both as numbers rise, then as they fall.

The residual rise is due to:

A. The birth rate 60-100 years ago, which created a larger current base population. This means more children being born even as birth rates are in radical decline. The global total fertility rate, by the way, is now 2.2, just above the replacement rate of 2.1.

B. Infant mortality has declined very fast and longevity has risen very fast. Again, there’s nothing you can do about either of those things and, I hope, nothing you would want to.

All women should have total reproductive freedom and full access to modern birth control. Because it’s a fundamental right. Not because old men on other continents want them to have fewer children. Even if total reproductive freedom became universal now, it would scarcely nudge the curve, due to the factors mentioned above.

Before long, people will be fretting instead about the downwave, a very rapid decline in populations as the impact of 60+ years of falling birth rates overtakes the effects mentioned above. There’s almost nothing we can do about that either. It’s about as locked in as any human behaviour can be. As the opportunity costs of childcare rise (i.e. as prosperity increases), the birth rate declines.

Of course, if economic and social life collapsed, the process might go into reverse, and birth rates could be expected to rise again. But is that really what you want? For my part, I’m heartily sick of people who think collapse is the answer to anything.

In the short run, we can survive the decline in wealthy countries by reopening the door to immigrants, which would also offer sanctuary to people fleeing from the climate breakdown and conflict we’ve caused overseas. Two wins, in other words. In the long run, we’ll steadily shuffle away.

Whether you think that’s good or bad will not affect the outcome. I see demographic change as an underlying factor, like gravity, we simply have to adapt to as well as we can. If you want to pick a fight with a mathematical function, be my guest. But it seems to me as if you’re wasting your time.

But surely there’s no harm in it? Surely we can seek, however hopelessly, to change the population trajectory while also campaigning against environmental breakdown, inequality, injustice? Some people who worry about population do. But in my experience, most fixate on population to the exclusion of other issues.

Something must be done about them breeding too fast, rather than us consuming too fast. All too often, residual population growth is used as a scapegoat to shift blame from rich-world impacts, which means that the people in places where growth is still occurring are themselves scapegoated. The result, broadly speaking, is wealthy white people pointing the finger at much poorer Black and Brown people and saying, “You’re the problem.” It’s more than a distraction, it’s a grim and sometimes racist alternative to effective action. It’s an excuse for inaction.

So yes, do both if you want to, while being aware that one activity is useful and the other is futile. But be aware that for most population obsessives, it’s either/or, and is used to avoid moral responsibility and effective citizenship.

http://www.monbiot.com

ooOOoo

If you read this you will understand Mr Monbiot’s clear explanation of global demographics: that, before long, the global population will begin to fall. My own guess is that within the lifespans of those who are in their teens today, the global population will be remarkably lower. I can’t forecast the changes that will bring about, but I’m certain they will be significant.

George’s last point is key: “(It) is used to avoid moral responsibility and effective citizenship.”
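Monbiot’s point A, demographic momentum, can be illustrated with a deliberately crude cohort model. Everything below is invented for illustration (three 25-year age bands, made-up starting numbers); real demography uses yearly cohorts and mortality tables.

```python
# A toy cohort model of demographic momentum. All numbers are invented
# for illustration; real projections use yearly age cohorts and
# mortality tables.

def step(young, repro, old, births_per_person):
    # Over one 25-year step, each person in the reproductive band has
    # `births_per_person` children; everyone else ages into the next band.
    return repro * births_per_person, young, repro

# A young-skewed population, the legacy of past high birth rates (billions).
young, repro, old = 3.0, 2.0, 1.0
rate = 1.0  # "replacement": each person exactly replaces themselves

totals = []
for gen in range(3):
    totals.append(young + repro + old)
    young, repro, old = step(young, repro, old, rate)

print(totals)  # totals keep rising at replacement fertility: [6.0, 7.0, 8.0]
```

Even with fertility pinned at exactly replacement, the bulge of people born in earlier decades keeps the total rising for roughly two more generations, which is why the near-term trajectory is so hard to change.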

‘Tolly’ finds something really special

I’m indebted to George Monbiot for this article, and for ‘Tolly’ as a nickname for Iain Tolhurst.

Many articles from people that I follow online pass through my ‘inbox’.

But there was something special about a recent article by George Monbiot that was published in the Guardian on December 5th and I have great pleasure in republishing it here, with George’s permission.

ooOOoo

Shaking It Up

Posted on 7th December 2025

A eureka moment in the pub could help transform our understanding of the ground beneath our feet.

By George Monbiot, published in the Guardian 5th December 2025

It felt like walking up a mountain during a temperature inversion. You struggle through fog so dense you can scarcely see where you’re going. Suddenly, you break through the top of the cloud, and the world is laid out before you. It was that rare and remarkable thing: a eureka moment.

For the past three years, I’d been struggling with a big and frustrating problem. In researching my book Regenesis, I’d been working closely with Iain Tolhurst (Tolly), a pioneering farmer who had pulled off something extraordinary. Almost everywhere, high-yield farming means major environmental harm, due to the amount of fertiliser, pesticides and (sometimes) irrigation water and deep ploughing required. Most farms with apparently small environmental impacts produce low yields. This, in reality, means high impacts, as more land is needed to produce a given amount of food. But Tolly has found the holy grail of agriculture: high and rising yields with minimal environmental harm.

He uses no fertiliser, no animal manure and no pesticides. His techniques, the result of decades of experiment and observation, appear to enrich the crucial relationships between crops and microbes in the soil, through which soil nutrients must pass. It seems that Tolly has, in effect, “trained” his soil bacteria to release nutrients when his crops require them (a process called mineralisation), and lock them up when his crops aren’t growing (immobilisation), ensuring they don’t leach from the soil.

So why the frustration? Well, Tolly has inspired many other growers to attempt the same techniques. Some have succeeded, with excellent results. Others have not. And no one can work out why. It’s likely to have something to do with soil properties. But what?

Not for the first time, I had stumbled into a knowledge gap so wide that humanity could fall through it. Soil is a fantastically complex biological structure, like a coral reef, built and sustained by the creatures that inhabit it. It supplies 99% of our calories. Yet we know less about it than any other identified ecosystem. It’s almost a black box.

Many brilliant scientists have devoted their lives to its study. But there are major barriers. Most soil properties cannot be seen without digging, and if you dig a hole, you damage the structures you’re trying to investigate. As a result, studying even basic properties is cumbersome, time-consuming and either very expensive or simply impossible at scale. To measure the volume of soil in a field, for example, you need to take hundreds of core samples. But as soil depths can vary greatly from one metre to the next, your figure relies on extrapolation. This makes it very hard to tell whether you’re losing soil or gaining it. Measuring bulk density (the amount of soil in a given volume, which shows how compacted it might be), or connected porosity (the tiny catacombs created by lifeforms, a crucial measure of soil health), or soil carbon – at scale – is even harder.

So farmers must guess. Partly because they cannot see exactly what the soil needs, many of their inputs – fertilisers, irrigation, deep ploughing – are wasted. Roughly two-thirds of the nitrogen fertiliser they apply, and between 50% and 80% of their phosphorus, is lost. These lost minerals cause algal blooms in rivers, dead zones at sea, costs for water users and global heating. Huge amounts of irrigation water are also wasted. Farmers sometimes “subsoil” their fields – ploughing that is deep and damaging – because they suspect compaction. The suspicion is often wrong.

Our lack of knowledge also inhibits the development of a new agriculture, which may, as Tolly has done, allow farmers to replace chemical augmentation with biological enhancement.

So when I came to write the book, I made a statement so vague that it reads like an admission of defeat: we needed to spend heavily on “an advanced science of the soil”, and use it to deliver a “greener revolution”. While we know almost nothing about the surface of our own planet, billions are spent on the Mars Rover programme, exploring the barren regolith there. What we needed, I argued, is an Earth Rover programme, mapping the world’s agricultural soils at much finer resolution.

I might as well have written “something must be done!” The necessary technologies simply did not exist. I sank into a stygian gloom.

At the same time, Tarje Nissen-Meyer, then a professor of geophysics at the University of Oxford, was grappling with a different challenge. Seismology is the study of waves passing through a solid medium. Thanks to billions from the oil and gas industry, it has become highly sophisticated. Tarje wanted to use this powerful tool for the opposite purpose – ecological improvement. Already, with colleagues, he had deployed seismology to study elephant behaviour in Kenya. Not only was it highly effective, but his team also discovered it could identify animal species walking through the savannah by their signature footfall.

By luck we were both attached, in different ways, to Wolfson College, Oxford, where we met in February 2022. I saw immediately that he was a thoughtful man – a visionary. I suggested a pint in The Magdalen Arms.

I explained my problem, and we talked about the limits of existing technologies. Was seismology being used to study soil, I asked. He’d never heard of it. “I guess it’s not a suitable technology then?” No, he told me, “soil should be a good medium for seismology. In fact, we need to filter out the soil noise when we look at the rocks.” “So if it’s noise, it could be signal?” “Definitely.”

We stared at each other. Time seemed to stall. Could this really be true?

Over the next three days, Tarje conducted a literature search. Nothing came up. I wrote to Prof Simon Jeffery, an eminent soil scientist at Harper Adams University, whose advice I’d found invaluable when researching the book. I set up a Zoom call. He would surely explain that we were barking up the wrong tree.

Simon is usually a reserved man. But when he had finished questioning Tarje, he became quite animated. “All my life I’ve wanted to ‘see’ into the soil,” he said. “Maybe now we can.” I was introduced to a brilliant operations specialist, Katie Bradford, who helped us build an organisation. We set up a non-profit called the Earth Rover Program, to develop what we call “soilsmology”; to build open-source hardware and software cheap enough to be of use to farmers everywhere; and to create, with farmers, a global, self-improving database. This, we hope, might one day incorporate every soil ecosystem: a kind of Human Genome Project for the soil.

We later found that some scientists had in fact sought to apply seismology to soil, but it had not been developed into a programme, partly because the approaches used were not easily scalable.

My role was mostly fixer, finding money and other help. We received $4m (£3m) in start-up money from the Bezos Earth Fund. This may cause some discomfort, but our experience has been entirely positive: the fund has helped us do exactly what we want. We also got a lot of pro-bono help from the law firm Hogan Lovells.

Tarje, now at the University of Exeter, and Simon began assembling their teams. They would need to develop an ultra-high-frequency variant of seismology. A big obstacle was cost. In 2022, suitable sensors cost $10,000 (£7,500) apiece. They managed to repurpose other kit: Tarje found that a geophone developed by a Slovakian experimental music outfit worked just as well, and cost only $100. Now one of our scientists, Jiayao Meng, is developing a sensor for about $10. In time, we should be able to use the accelerometers in mobile phones, reducing the cost to zero. As for generating seismic waves, we get all the signal we need by hitting a small metal plate with a welder’s hammer.

On its first deployment, our team measured the volume of a peat bog that had been studied by scientists for 50 years. After 45 minutes in the field, they produced a preliminary estimate suggesting that previous measurements were out by 20%. Instead of extrapolating the peat depth from point samples, they could see the wavy line where the peat met the subsoil. The implications for estimating carbon stocks are enormous.

We’ve also been able to measure bulk density at a very fine scale; to track soil moisture (as part of a wider team); to start building the AI and machine learning tools we need; and to see the varying impacts of different agricultural crops and treatments. Next we’ll work on measuring connected porosity, soil texture and soil carbon; scaling up to the hectare level and beyond; and on testing the use of phones as seismometers. We now have further funding, from the UBS Optimus Foundation, hubs on three continents and a big international team.

Eventually, we hope, any farmer anywhere, rich or poor, will be able to get an almost instant readout from their soil. As more people use the tools, building the global database, we hope these readouts will translate into immediate useful advice. The tools should also revolutionise soil protection: the EU has issued a soil-monitoring law, but how can it be implemented? Farmers are paid for their contributions “to improve soil health and soil resilience”, but what this means in practice is ticking a box on a subsidy form: there’s no sensible way of checking.

We’re not replacing the great work of other soil scientists but, developing our methods alongside theirs, we believe we can fill part of the massive knowledge gap. As one of the farmers we’re working with, Roddy Hall, remarks, the Earth Rover Program could “take the guesswork out of farming”. One day it might help everyone arrive at that happy point: high yields with low impacts. Seismology promises to shake things up.

http://www.monbiot.com

ooOOoo

George Monbiot puts his finger precisely on the point of his article: “While we know almost nothing about the surface of our own planet, billions are spent on the Mars Rover programme.”

That magical night sky

Or more to the point of this article: Dark Matter.

Along with huge numbers of other people, I have long been interested in the Universe. Thus this article from The Conversation seemed a good one to share with you.

ooOOoo

When darkness shines: How dark stars could illuminate the early universe

NASA’s James Webb Space Telescope has spotted some potential dark star candidates. NASA, ESA, CSA, and STScI

Alexey A. Petrov, University of South Carolina

Scientists working with the James Webb Space Telescope discovered three unusual astronomical objects in early 2025, which may be examples of dark stars. The concept of dark stars has existed for some time and could alter scientists’ understanding of how ordinary stars form. However, their name is somewhat misleading.

“Dark stars” is one of those unfortunate names that, on the surface, does not accurately describe the objects it represents. Dark stars are not exactly stars, and they are certainly not dark.

Still, the name captures the essence of this phenomenon. The “dark” in the name refers not to how bright these objects are, but to the process that makes them shine — driven by a mysterious substance called dark matter. The sheer size of these objects makes it difficult to classify them as stars.

As a physicist, I’ve been fascinated by dark matter, and I’ve been trying to find a way to see its traces using particle accelerators. I’m curious whether dark stars could provide an alternative method to find dark matter.

What makes dark matter dark?

Dark matter, which makes up approximately 27% of the universe but cannot be directly observed, is a key idea behind the phenomenon of dark stars. Astrophysicists have studied this mysterious substance for nearly a century, yet we haven’t seen any direct evidence of it besides its gravitational effects. So, what makes dark matter dark?

A pie chart showing the composition of the universe. The largest proportion is 'dark energy,' at 68%, while dark matter makes up 27% and normal matter 5%. The rest is neutrinos, free hydrogen and helium and heavy elements.
Despite physicists not knowing much about it, dark matter makes up around 27% of the universe. Visual Capitalist/Science Photo Library via Getty Images

Humans primarily observe the universe by detecting electromagnetic waves emitted by or reflected off various objects. For instance, the Moon is visible to the naked eye because it reflects sunlight. Atoms on the Moon’s surface absorb photons – the particles of light – sent from the Sun, causing electrons within atoms to move and send some of that light toward us.

More advanced telescopes detect electromagnetic waves beyond the visible spectrum, such as ultraviolet, infrared or radio waves. They use the same principle: Electrically charged components of atoms react to these electromagnetic waves. But how can they detect a substance – dark matter – that not only has no electric charge but also has no electrically charged components?

Although scientists don’t know the exact nature of dark matter, many models suggest that it is made up of electrically neutral particles – those without an electric charge. This trait makes it impossible to observe dark matter in the same way that we observe ordinary matter.

Dark matter is thought to be made of particles that are their own antiparticles. Antiparticles are the “mirror” versions of particles. They have the same mass but opposite electric charge and other properties. When a particle encounters its antiparticle, the two annihilate each other in a burst of energy.

If dark matter particles are their own antiparticles, they would annihilate upon colliding with each other, potentially releasing large amounts of energy. Scientists predict that this process plays a key role in the formation of dark stars, as long as the density of dark matter particles inside these stars is sufficiently high. The dark matter density determines how often dark matter particles encounter, and annihilate, each other. If the dark matter density inside dark stars is high, they would annihilate frequently.
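The density dependence described above can be made concrete: because each annihilation needs two particles to meet, the collision rate per unit volume scales with the square of the number density, so doubling the density quadruples the rate. A toy sketch (the rate constant here is an arbitrary placeholder, not a physical cross-section):

```python
# Pair-annihilation rate per unit volume scales as density squared:
# rate = k * n**2, since each encounter involves two particles.
# k is an arbitrary placeholder constant, not a measured value.

def annihilation_rate(n_density: float, k: float = 1.0) -> float:
    return k * n_density ** 2

# Doubling the density quadruples the annihilation rate:
print(annihilation_rate(2.0) / annihilation_rate(1.0))  # → 4.0
```

This quadratic scaling is why the article stresses that dark stars only shine if the dark matter inside them is dense enough.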

What makes a dark star shine?

The concept of dark stars stems from a fundamental yet unresolved question in astrophysics: How do stars form? In the widely accepted view, clouds of primordial hydrogen and helium — the chemical elements formed in the first minutes after the Big Bang, approximately 13.8 billion years ago — collapsed under gravity. They heated up and initiated nuclear fusion, which formed heavier elements from the hydrogen and helium. This process led to the formation of the first generation of stars.

Two bright clouds of gas condensing around a small central region
Stars form when clouds of dust collapse inward and condense around a small, bright, dense core. NASA, ESA, CSA, and STScI, J. DePasquale (STScI), CC BY-ND

In the standard view of star formation, dark matter is seen as a passive element that merely exerts a gravitational pull on everything around it, including primordial hydrogen and helium. But what if dark matter had a more active role in the process? That’s exactly the question a group of astrophysicists raised in 2008.

In the dense environment of the early universe, dark matter particles would collide with, and annihilate, each other, releasing energy in the process. This energy could heat the hydrogen and helium gas, preventing it from further collapse and delaying, or even preventing, the typical ignition of nuclear fusion.

The outcome would be a starlike object — but one powered by dark matter heating instead of fusion. Unlike regular stars, these dark stars might live much longer because they would continue to shine as long as they attracted dark matter. This trait would make them distinct from ordinary stars, as their cooler temperature would result in lower emissions of various particles.

Can we observe dark stars?

Several unique characteristics help astronomers identify potential dark stars. First, these objects must be very old. As the universe expands, the frequency of light coming from objects far away from Earth decreases, shifting toward the infrared end of the electromagnetic spectrum, meaning it gets “redshifted.” The oldest objects appear the most redshifted to observers.
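Redshift has a simple definition: it is the fractional stretch in wavelength between emission and observation, z = (observed − emitted) / emitted. A minimal sketch, using the real Lyman-alpha hydrogen line at 121.6 nm but an illustrative observed wavelength:

```python
# Redshift z from emitted vs observed wavelength:
# z = (lambda_observed - lambda_emitted) / lambda_emitted.
# Larger z means more stretched light, i.e. an older, more
# distant source. The observed value below is illustrative.

def redshift(lambda_emitted_nm: float, lambda_observed_nm: float) -> float:
    return (lambda_observed_nm - lambda_emitted_nm) / lambda_emitted_nm

# Lyman-alpha emitted at 121.6 nm, observed in the infrared at 1337.6 nm:
z = redshift(121.6, 1337.6)
print(f"z = {z:.1f}")  # → z = 10.0
```

Shifts of this magnitude push the light into the infrared, which is exactly the band the James Webb Space Telescope was built to observe.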

Since dark stars form from primordial hydrogen and helium, they are expected to contain little to no heavier elements, such as oxygen. They would be very large and cooler on the surface, yet highly luminous because their size — and the surface area emitting light — compensates for their lower surface brightness.

They are also expected to be enormous, with radii of about tens of astronomical units — a cosmic distance measurement equal to the average distance between Earth and the Sun. Some supermassive dark stars are theorized to reach masses of roughly 10,000 to 10 million times that of the Sun, depending on how much dark matter and hydrogen or helium gas they can accumulate during their growth.

So, have astronomers observed dark stars? Possibly. Data from the James Webb Space Telescope has revealed some very high-redshift objects that seem brighter — and possibly more massive — than what scientists expect of typical early galaxies or stars. These results have led some researchers to propose that dark stars might explain these objects.

Artist's impression of the James Webb telescope, which has a hexagonal mirror made up of smaller hexagons, and sits on a rhombus-shaped spacecraft.
The James Webb Space Telescope, shown in this illustration, detects light coming from objects in the universe. Northrup Grumman/NASA

In particular, a recent study analyzing James Webb Space Telescope data identified three candidates consistent with supermassive dark star models. Researchers looked at how much helium these objects contained to identify them. Since it is dark matter annihilation that heats up those dark stars, rather than nuclear fusion turning helium into heavier elements, dark stars should have more helium.

The researchers highlight that one of these objects indeed exhibited a potential “smoking gun” helium absorption signature: a far higher helium abundance than one would expect in typical early galaxies.

Dark stars may explain early black holes

What happens when a dark star runs out of dark matter? It depends on the size of the dark star. For the lightest dark stars, the depletion of dark matter would mean gravity compresses the remaining hydrogen, igniting nuclear fusion. In this case, the dark star would eventually become an ordinary star, so some stars may have begun as dark stars.

Supermassive dark stars are even more intriguing. At the end of their lifespan, a dead supermassive dark star would collapse directly into a black hole. This black hole could start the formation of a supermassive black hole, like the kind astronomers observe at the centers of galaxies, including our own Milky Way.

Dark stars might also explain how supermassive black holes formed in the early universe. They could shed light on some unique black holes observed by astronomers. For example, a black hole in the galaxy UHZ-1 has a mass approaching 10 million solar masses, and is very old – it formed just 500 million years after the Big Bang. Traditional models struggle to explain how such massive black holes could form so quickly.

The idea of dark stars is not universally accepted. These dark star candidates might still turn out just to be unusual galaxies. Some astrophysicists argue that matter accretion — a process in which massive objects pull in surrounding matter — alone can produce massive stars, and that studies using observations from the James Webb telescope cannot distinguish between massive ordinary stars and less dense, cooler dark stars.

Researchers emphasize that they will need more observational data and theoretical advancements to solve this mystery.

Alexey A. Petrov, Professor of physics and astronomy, University of South Carolina

This article is republished from The Conversation under a Creative Commons license. Read the original article.

ooOOoo

Alexey Petrov says at the end of the article that more observations are required before we humans know all the answers. I have no doubt that in time we will have the answers.

Cambridge University and our brains.

Scientists have identified five ages of the human brain.

Neuroscientists at the University of Cambridge have identified five “major epochs” of brain structure over the course of a human life, as our brains rewire to support different ways of thinking while we grow, mature, and ultimately decline.

So wrote Fred Lewsey. Fred is the Communications Manager (Research) responsible for the School of the Humanities and Social Sciences. (And I took this from this site.) He went on to report that four major turning points around ages nine, 32, 66 and 83 create five broad eras of neural wiring over the average human lifespan.

Being in my early 80s, I was most interested in that last turning point. This is the information about that era:

The last turning point comes around age 83, and the final brain structure epoch is entered. While data is limited for this era, the defining feature is a shift from global to local, as whole brain connectivity declines even further, with increased reliance on certain regions.     

“Looking back, many of us feel our lives have been characterised by different phases. It turns out that brains also go through these eras,” added senior author Prof Duncan Astle, Professor of Neuroinformatics at Cambridge.

“Many neurodevelopmental, mental health and neurological conditions are linked to the way the brain is wired. Indeed, differences in brain wiring predict difficulties with attention, language, memory, and a whole host of different behaviours.”

“Understanding that the brain’s structural journey is not a question of steady progression, but rather one of a few major turning points, will help us identify when and how its wiring is vulnerable to disruption.”

The research was supported by the Medical Research Council, Gates Foundation and Templeton World Charitable Foundation. The full report may be read here: https://www.newscientist.com/article/2505656-your-brain-undergoes-four-dramatic-periods-of-change-from-age-0-to-90

Finally, here is an image of this amazing organ that we humans have.

The DNA of dogs.

What is revealed in most dogs’ genes.

On November 24th this year, The Conversation published an article that spoke of the ancient genetic closeness of wolves and dogs.

I share it with you. It is a fascinating read.

ooOOoo

Thousands of genomes reveal the wild wolf genes in most dogs’ DNA.

Modern wolves and dogs both descend from an ancient wolf population that lived alongside woolly mammoths and cave bears. Iza Lyson/500px Prime via Getty Images

Audrey T. Lin, Smithsonian Institution and Logan Kistler, Smithsonian Institution

Dogs were the first species that people domesticated, and they have been a constant part of human life for millennia. Domesticated species are the plants and animals that have evolved to live alongside humans, providing nearly all of our food and numerous other benefits. Dogs provide protection, hunting assistance, companionship, transportation and even wool for weaving blankets.

Dogs evolved from gray wolves, but scientists debate exactly where, when and how many times dogs were domesticated. Ancient DNA evidence suggests that domestication happened twice, in eastern and western Eurasia, before the groups eventually mixed. That blended population was the ancestor of all dogs living today.

Molecular clock analysis of the DNA from hundreds of modern and ancient dogs suggests they were domesticated between around 20,000 and 22,000 years ago, when large ice sheets covered much of Eurasia and North America. The first dog identified in the archaeological record is a 14,000-year-old pup found in Bonn-Oberkassel, Germany, but it can be difficult to tell based on bones whether an animal was an early domestic dog or a wild wolf.

Despite the shared history of dogs and wolves, scientists have long thought these two species rarely mated and gave birth to hybrid offspring. As an evolutionary biologist and a molecular anthropologist who study domestic plants and animals, we wanted to take a new look at whether dog-wolf hybridization has really been all that uncommon.

Little interbreeding in the wild

Dogs are not exactly descended from modern wolves. Rather, dogs and wolves living today both derive from a shared ancient wolf population that lived alongside woolly mammoths and cave bears.

In most domesticated species, there are often clear, documented patterns of gene flow between the animals that live alongside humans and their wild counterparts. Where wild and domesticated animals’ habitats overlap, they can breed with each other to produce hybrid offspring. In these cases, the genes from wild animals are folded into the genetic variation of the domesticated population.

For example, pigs were domesticated in the Near East over 10,000 years ago. But when early farmers brought them to Europe, they hybridized so frequently with local wild boar that almost all of their Near Eastern DNA was replaced. Similar patterns can be seen in the endangered wild Anatolian and Cypriot mouflon that researchers have found to have high proportions of domestic sheep DNA in their genomes. It’s more common than not to find evidence of wild and domesticated animals interbreeding through time and sharing genetic material.

That wolves and dogs wouldn’t show that typical pattern is surprising, since they live in overlapping ranges and can freely interbreed.

Dog and wolf behavior are completely different, though, with wolves generally organized around a family pack structure and dogs reliant on humans. When hybridization does occur, it tends to be when human activities – such as habitat encroachment and hunting – disrupt pack dynamics, leading female wolves to strike out on their own and breed with male dogs. People intentionally bred a few “wolf dog” hybrid types in the 20th century, but these are considered the exception.

a wolfish looking dog lies on the ground behind a metal fence
Luna Belle, a resident of the Wolf Sanctuary of Pennsylvania, which is home to both wolves and wolf dogs. Audrey Lin.

Tiny but detectable wolf ancestry

To investigate how much gene flow there really has been between dogs and wolves after domestication, we analyzed 2,693 previously published genomes, making use of massive publicly available datasets.

These included 146 ancient dogs and wolves covering about 100,000 years. We also looked at 1,872 modern dogs, including golden retrievers, Chihuahuas, malamutes, basenjis and other well-known breeds, plus more unusual breeds from around the world such as the Caucasian ovcharka and Swedish vallhund.

Finally, we included genomes from about 300 “village dogs.” These are not pets but are free-living animals that are dependent on their close association with human environments.

We traced the evolutionary histories of all of these canids by looking at maternal lineages via their mitochondrial genomes and paternal lineages via their Y chromosomes. We used highly sensitive computational methods to dive into the dogs’ and wolves’ nuclear genomes – that is, the genetic material contained in their cells’ nuclei.

We found the presence of wild wolf genes in most dog genomes and the presence of dog genes in about half of wild wolf genomes. The sign of the wolf was small but it was there, in the form of tiny, almost imperceptible chunks of continuous wolf DNA in dogs’ chromosomes. About two-thirds of breed dogs in our sample had wolf genes from crossbreeding that took place roughly 800 generations ago, on average.

While our results showed that larger, working dogs – such as sled dogs and large guardian dogs that protect livestock – generally have more wolf ancestry, the patterns aren’t universal. Some massive breeds such as the St. Bernard completely lack wolf DNA, but the tiny Chihuahua retains detectable wolf ancestry at 0.2% of its genome. Terriers and scent hounds typically fall at the low end of the spectrum for wolf genes.

a dog curled up on the sidewalk in a town
A street – or free-ranging – dog in Tbilisi, Georgia. Alexkom000/Wikimedia Commons, CC BY

We were surprised that every single village dog we tested had pieces of wolf DNA in their genomes. Why would this be the case? Village dogs are free-living animals that make up about half the world’s dogs. Their lives can be tough, with short life expectancy and high infant mortality. Village dogs are also associated with pathogenic diseases, including rabies and canine distemper, making them a public health concern.

More often than predicted by chance, the stretches of wolf DNA we found in village dog genomes contained genes related to olfactory receptors. We imagine that olfactory abilities influenced by wolf genes may have helped these free-living dogs survive in harsh, volatile environments.

The intertwining of dogs and wolves

Because dogs evolved from wolves, all of dogs’ DNA is originally wolf DNA. So when we’re talking about the small pieces of wolf DNA in dog genomes, we’re not referring to that original wolf gene pool that’s been kicking around over the past 20,000 years, but rather evidence for dogs and wolves continuing to interbreed much later in time.

A wolf-dog hybrid with one of each kind of parent would carry 50% dog and 50% wolf DNA. If that hybrid then lived and mated with dogs, its offspring would be 25% wolf, and so on, until we see only small snippets of wolf DNA present.
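The halving described above can be sketched in a couple of lines: each generation of breeding a hybrid’s line back into dogs halves the expected wolf fraction, which is why only tiny snippets of wolf DNA remain after many generations. (This is the idealized expectation; real inheritance of chromosome chunks is noisier.)

```python
# Expected fraction of wolf ancestry after repeated backcrossing
# into dogs: each generation halves it, so 1/2, 1/4, 1/8, ...
# Idealized model; actual inherited chunks vary around this mean.

def wolf_fraction(generations_of_backcrossing: int) -> float:
    return 0.5 ** generations_of_backcrossing

for g in range(1, 5):
    print(g, wolf_fraction(g))  # 0.5, 0.25, 0.125, 0.0625
```

After dozens of generations the expected fraction is vanishingly small, which is why the study needed highly sensitive methods to detect it at all.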

The situation is similar to one in human genomes: Neanderthals and humans share a common ancestor around half a million years ago. However, Neanderthals and our species, Homo sapiens, also overlapped and interbred in Eurasia as recently as a few thousand generations ago, shortly before Neanderthals disappeared. Scientists can spot the small pieces of Neanderthal DNA in most living humans in the same way we can see wolf genes within most dogs.

two small tan dogs walking on pavement on a double lead leash
Even tiny Chihuahuas contain a little wolf within their doggy DNA. Westend61 via Getty Images

Our study updates the previously held belief that hybridization between dogs and wolves is rare; interactions between these two species do have visible genetic traces. Hybridization with free-roaming dogs is considered a threat to conservation efforts of endangered wolves, including Iberian, Italian and Himalayan wolves. However, there also is evidence that dog-wolf mixing might confer genetic advantages to wolves as they adapt to environments that are increasingly shaped by humans.

Though dogs evolved as human companions, wolves have served as their genetic lifeline. When dogs encountered evolutionary challenges such as how to survive harsh climates, scavenge for food in the streets or guard livestock, it appears they’ve been able to tap into wolf ancestry as part of their evolutionary survival kit.

Audrey T. Lin, Research Associate in Anthropology, Smithsonian Institution and Logan Kistler, Curator of Archaeobotany and Archaeogenomics, National Museum of Natural History, Smithsonian Institution

This article is republished from The Conversation under a Creative Commons license. Read the original article.

ooOOoo

Well thanks to Audrey Lin and Logan Kistler for this very interesting study. So even modern dogs have visible traces of wolf in their DNA. It is yet another example of the ability of modern science to discover facts that were unknown a few decades ago.

A worldwide myth.

An incredible fact, a truth that almost nobody will accept.

Until the 22nd November 2025, that is last Saturday, I believed this lie: a lie that spoke of the dangers, the hazards, the imminent end of the world as I believed it, namely climate change!

Very few of you will change your minds, of that I’m sure.

Nonetheless, I am going to republish a long article that was sent to me by my buddy, Dan Gomez.

ooOOoo

Latest Science Further Exposes Lies About Rising Seas

By Vijay Jayaraj

It’s all too predictable: A jet-setting celebrity or politician wades ceremoniously into hip-deep surf for a carefully choreographed photo op, while proclaiming that human-driven sea-level rise will soon swallow an island nation. Of course, the water is deeper than the video’s pseudoscience, which is as shallow as the theatrics.

The scientific truth is simple: Sea levels are rising, but the rate of rise has not accelerated. A new peer-reviewed study confirms what many other studies have already shown – that the steady rise of oceans is a centuries-long process, not a runaway crisis triggered by modern emissions of carbon dioxide (CO2).

For the past 12,000 years, during our current warm epoch known as the Holocene, sea levels have risen and fallen dramatically. For instance, during the 600-year Little Ice Age, which ended in the mid-19th century, sea levels dropped quite significantly.

The natural warming that began in the late 1600s got to a point around 1800 where loss of glacial ice in the summer began to exceed winter accumulation and glaciers began to shrink and seas to rise. By 1850, full-on glacial retreat was underway.

Thus, the current period of gradual sea-level increase began between 1800-1860, preceding any significant anthropogenic CO2 emissions by many decades. The U.S. Department of Energy’s 2025 critical review on carbon dioxide and climate change confirms this historical perspective.

“There is no good, sufficient or convincing evidence that global sea level rise is accelerating – there is only hypothesis and speculation. Computation is not evidence and unless the results can be practically viewed and measured in the physical world, such results must not be presented as such,” notes Kip Hansen, researcher and former U.S. Coast Guard captain.

New Study Confirms No Crisis

While activists speak of “global sea-level rise,” the ocean’s surface does not behave like water in a bathtub. Regional currents, land movements, and local hydrology all influence relative sea level. This is why local tide gauge data is important. As Hansen warns, “Only actually measured, validated raw data can be trusted. … You have to understand exactly what’s been measured and how.”

In addition, local tide-gauge data cannot be extrapolated to represent global sea level. This is because the geographic coverage of suitable locations for gauges is often poor, with the majority concentrated in the Northern Hemisphere. Latin America and Africa are severely under-represented in the global dataset. Hansen says, “The global tide gauge record is quantitatively problematic, but individual records can be shown as qualitative evidence for a lack of sea-level rise acceleration.”

A new 2025 study provides confirmation. Published in the Journal of Marine Science and Engineering, the study systematically dismantles the narrative of accelerating sea-level rise. It analyzed empirically derived long-term rates from datasets of sufficient length – at least 60 years – and incorporated long-term tide signals from suitable locations.

The startling conclusion: Approximately 95% of monitoring locations show no statistically significant acceleration of sea-level rise. It was found that the steady rate of sea-level rise – averaging around 1 to 2 millimeters per year globally – mirrors patterns observed over the past 150 years.

The study suggests that projections by the Intergovernmental Panel on Climate Change (IPCC), which often predicts rates as high as 3 to 4 millimeters per year by 2100, overestimate the annual rise by approximately 2 millimeters.
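To see what these per-year rates mean in practice, they can be converted into total rise over a planning horizon. The sketch below is purely a unit conversion applied to the figures quoted above; it takes no position on which rate is correct:

```python
# Convert a sea-level trend in mm/year into total rise over a horizon.
# The rates plugged in below are the ones quoted in the article.

def total_rise_cm(rate_mm_per_year: float, years: int) -> float:
    return rate_mm_per_year * years / 10.0  # mm -> cm

# 1.5 mm/yr (mid-range of the quoted 1-2 mm/yr) over a century:
print(total_rise_cm(1.5, 100))  # → 15.0 cm
# 3.5 mm/yr (mid-range of the quoted 3-4 mm/yr) over a century:
print(total_rise_cm(3.5, 100))  # → 35.0 cm
```

The roughly 20 cm gap between those two century totals is the discrepancy the next paragraph calls “not trivial.”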

This discrepancy is not trivial. It translates into billions of dollars in misguided infrastructure investments and adaptation policies, which assume a far worse scenario than what the data support, particularly since we now know that local, non-climatic phenomena are a plausible cause of the accelerated sea-level rise measured locally.

Rather than pursuing economically destructive initiatives to reduce greenhouse gas emissions on the basis of questionable projections and erroneous climate science, money and time should be invested in supporting coastal communities with accurate data for practical planning to adapt to local sea level rise.

Successful adaptation strategies have existed for centuries in regions prone to flooding and sea-level variations. The Netherlands is an excellent example of how engineering solutions can protect coastal populations even living below sea level.

Rising seas are real but not a crisis. What we have is a manageable, predictable phenomenon to which societies have adapted for centuries. To inflate it into an existential threat is to mislead, misallocate, and ultimately harm the communities that policymakers claim to protect.

This commentary was first published by PJ Media on September 10, 2025.

Vijay Jayaraj is a Science and Research Associate at the CO₂ Coalition, Fairfax, Virginia. He holds an M.S. in environmental sciences from the University of East Anglia and a postgraduate degree in energy management from Robert Gordon University, both in the U.K., and a bachelor’s in engineering from Anna University, India.

ooOOoo

I shall be returning to this important topic soon, probably by republishing that 2025 study referred to in the above article.

I hope that you read this post.

Thank you, Dan.

We humans are still evolving.

An article in The Conversation caught my eye.

We must never forget that evolution is always happening.

So without any more from me here is that article.

ooOOoo

If evolution is real, then why isn’t it happening now? An anthropologist explains that humans actually are still evolving

Inuit people such as these Greenlanders have evolved to be able to eat fatty foods with a low risk of getting heart disease. Olivier Morin/AFP via Getty Images

Michael A. Little, Binghamton University, State University of New York


If evolution is real, then why is it not happening now? – Dee, Memphis, Tennessee


Many people believe that we humans have conquered nature through the wonders of civilization and technology. Some also believe that because we are different from other creatures, we have complete control over our destiny and have no need to evolve. Even though lots of people believe this, it’s not true.

Like other living creatures, humans have been shaped by evolution. Over time, we have developed – and continue to develop – the traits that help us survive and flourish in the environments where we live.

I’m an anthropologist. I study how humans adapt to different environments. Adaptation is an important part of evolution. Adaptations are traits that give someone an advantage in their environment. People with those traits are more likely to survive and pass those traits on to their children. Over many generations, those traits become widespread in the population.

The role of culture

We humans have two hands that help us skillfully use tools and other objects. We are able to walk and run on two legs, which frees our hands for these skilled tasks. And we have large brains that let us reason, create ideas and live successfully with other people in social groups.

All of these traits have helped humans develop culture. Culture includes all of our ideas and beliefs and our abilities to plan and think about the present and the future. It also includes our ability to change our environment, for example by making tools and growing food.

Although we humans have changed our environment in many ways during the past few thousand years, we are still changed by evolution. We have not stopped evolving, but we are evolving right now in different ways than our ancient ancestors. Our environments are often changed by our culture.

We usually think of an environment as the weather, plants and animals in a place. But environments include the foods we eat and the infectious diseases we are exposed to.

A very important part of the environment is the climate and what kinds of conditions we can live in. Our culture helps us change our exposure to the climate. For example, we build houses and put furnaces and air conditioners in them. But culture doesn’t fully protect us from extremes of heat, cold and the sun’s rays.

a man runs after one of several goats in a dry, dusty landscape
The Turkana people in Kenya have evolved to survive with less water than other people, which helps them live in a desert environment. Tony Karumba/AFP via Getty Images

Here are some examples of how humans have evolved over the past 10,000 years and how we are continuing to evolve today.

The power of the sun’s rays

While the sun’s rays are important for life on our planet, ultraviolet rays can damage human skin. Those of us with pale skin are in danger of serious sunburn and equally dangerous kinds of skin cancer. In contrast, those of us with a lot of skin pigment, called melanin, have some protection against damaging ultraviolet rays from sunshine.

People in the tropics with dark skin are more likely to thrive under frequent bright sunlight. Yet, when ancient humans moved to cloudy, cooler places, the dark skin was not needed. Dark skin in cloudy places blocked the production of vitamin D in the skin, which is necessary for normal bone growth in children and adults.

The amount of melanin pigment in our skin is controlled by our genes. So in this way, human evolution is driven by the environment – sunny or cloudy – in different parts of the world.

The food that we eat

Ten thousand years ago, our human ancestors began to tame or domesticate animals such as cattle and goats to eat their meat. Then about 2,000 years later, they learned how to milk cows and goats for this rich food. Unfortunately, like most other mammals at that time, human adults back then could not digest milk without feeling ill. Yet a few people were able to digest milk because they had genes that let them do so.

Milk was such an important source of food in these societies that the people who could digest milk were better able to survive and have many children. So the genes that allowed them to digest milk increased in the population until nearly everyone could drink milk as adults.

This process, which occurred and spread thousands of years ago, is an example of what is called cultural and biological co-evolution. It was the cultural practice of milking animals that led to these genetic or biological changes.
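The spread of a helpful gene like this can be sketched with a very simple selection model. In the sketch below, the starting frequency and the 5% survival advantage are illustrative numbers of my own, not measured values for lactase persistence:

```python
# Minimal sketch: how a beneficial gene (e.g. one letting adults digest
# milk) can spread through a population over generations.
# The 5% advantage (s = 0.05) and 1% starting frequency are illustrative.

def next_frequency(p, s):
    """One generation of selection favoring the gene, with advantage s."""
    return p * (1 + s) / (p * (1 + s) + (1 - p))

p = 0.01          # the gene starts out rare
generations = 0
while p < 0.95:   # run until nearly everyone carries it
    p = next_frequency(p, 0.05)
    generations += 1

print(f"The gene reaches 95% of the population after {generations} generations")
```

Even a modest advantage compounds: the gene spreads slowly at first, then rapidly, which is why a cultural practice sustained over thousands of years can reshape a whole population's biology.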

Other people, such as the Inuit in Greenland, have genes that enable them to digest fats without developing heart disease. The Turkana people herd livestock in a very dry part of Kenya. They have a gene that allows them to go for long periods without drinking much water. In other people, going without water for that long would cause kidney damage, because the kidneys regulate the water in the body.

These examples show how the remarkable diversity of foods that people eat around the world can affect evolution.

gray scale microscope image of numerous blobs
These bacteria caused a devastating pandemic nearly 700 years ago that led humans to evolve resistance to them.
Image Point FR/NIH/NIAID/BSIP/Universal Images Group via Getty Images

Diseases that threaten us

Like all living creatures, humans have been exposed to many infectious diseases. During the 14th century, a deadly disease called the bubonic plague struck and spread rapidly throughout Europe and Asia. It killed about one-third of the population in Europe. Many of those who survived had a specific gene that gave them resistance against the disease. Those people and their descendants were better able to survive the epidemics that followed over several centuries.

Some diseases have struck quite recently. COVID-19, for instance, swept the globe in 2020. Vaccinations saved many lives. Some people have a natural resistance to the virus based on their genes. It may be that evolution increases this resistance in the population and helps humans fight future virus epidemics.

As human beings, we are exposed to a variety of changing environments. And so evolution in many human populations continues across generations, including right now.


Michael A. Little, Distinguished Professor Emeritus of Anthropology, Binghamton University, State University of New York

This article is republished from The Conversation under a Creative Commons license. Read the original article.

ooOOoo

This was published for the Curious Kids section of The Conversation.

However, I believe it is also relevant for adults who are interested in the subject. I’m in my 80s and find this deeply interesting.

Death – it comes to all of us!

Irrespective of our beliefs.

There are only two days in our lives when we live for less than twenty-four hours: the day we are born and the day when we die!

I was born in November 1944, which makes me eighty-one. I was born as a result of an affair between my mother and my father. The family genes favour girls over boys, as in seven girls for every boy, and the son is normally the firstborn. My mother lost her first child, a boy. Then she had a second baby. Surprise, surprise, it was another son: me!

I say this as an introduction to a post on The Conversation.

ooOOoo

Americans are unprepared for the expensive and complex process of aging – a geriatrician explains how they can start planning

It’s important for older adults to plan for their care as they age. Maskot/Maskot via Getty Images

Kahli Zietlow, University of Michigan

Hollywood legend Gene Hackman and his wife, Betsy Arakawa, were found dead in their home in February 2025. Hackman had been living with Alzheimer’s and depended on Arakawa as his full-time caregiver.

Disturbingly, postmortem data suggests that Arakawa died of complications from hantavirus pulmonary syndrome several days before her husband passed. The discordant times of death point to a grim scenario: Hackman was left alone and helpless, trapped in his home after his wife’s death.

The couple’s story, while shocking, is not unique, and it serves as a warning: the U.S. population is rapidly aging, but most Americans are not adequately planning to meet the needs of older adulthood.

As a geriatric physician and medical educator, I care for older adults in both inpatient and outpatient settings. My research and clinical work focus on dementia and surrogate decision-making.

In my experience, regardless of race, education or socioeconomic status, there are some universal challenges that all people face with aging and there are steps everyone can take to prepare.

Aging is inevitable but unpredictable

Aging is an unpredictable, highly individualized process that varies depending on a person’s genetics, medical history, cognitive status and socioeconomic factors.

The majority of older Americans report a strong sense of purpose and self-worth. Many maintain a positive view of their overall health well into their 70s and 80s.

But at some point, the body starts to slow down. Older adults experience gradual sensory impairment, loss of muscle mass and changes in their memory. Chronic diseases are more likely with advancing age.

According to the U.S. Census Bureau, 46% of adults over age 75 live with at least one physical disability, and this proportion grows with age. Even those without major health issues may find that routine tasks like yard work, housekeeping and home repairs become unmanageable as they enter their 80s and 90s.

Some may find that subtle changes in memory make it difficult to manage household finances or keep track of their medications. Others may find that vision loss and slowed reaction time make it harder to safely drive. Still others may struggle with basic activities needed to live independently, such as bathing or using a toilet. All of these changes threaten older adults’ ability to remain independent.

The costs of aging

Nearly 70% of older Americans will require long-term care in their lifetime, whether through paid, in-home help or residence in an assisted living facility or nursing home.

But long-term care is expensive. In 2021, the Federal Long Term Care Insurance Program reported that the average hourly rate for in-home care was US$27. An assisted living apartment averaged $4,800 per month, and a nursing home bed cost nearly double that, at a rate of $276 per day.
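To put those 2021 figures on a common monthly footing, here is a quick back-of-the-envelope comparison. The 30-day month and the 40-hours-per-week of in-home care are my own illustrative assumptions, not figures from the article:

```python
# Rough monthly cost comparison using the 2021 figures quoted above.
# Assumptions (mine, for illustration): a 30-day month and 40 hours/week
# of paid in-home care.

hourly_home_care = 27        # US$ per hour, in-home care
assisted_living = 4800       # US$ per month, assisted living apartment
nursing_home_daily = 276     # US$ per day, nursing home bed

home_care_monthly = hourly_home_care * 40 * 52 / 12   # 40 h/week, averaged
nursing_home_monthly = nursing_home_daily * 30

print(f"In-home care (40 h/week): ${home_care_monthly:,.0f}/month")
print(f"Assisted living:          ${assisted_living:,}/month")
print(f"Nursing home:             ${nursing_home_monthly:,}/month")
```

Under these assumptions, a nursing home bed runs to roughly $8,300 a month, consistent with the article's observation that it costs nearly double an assisted living apartment.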

Many Americans may be shocked to discover that these costs are not covered by Medicare or other traditional medical insurance. Long-term care insurance covers the cost of long-term care, such as in-home care or nursing home placement. However, what is covered varies from plan to plan. Currently, only a small minority of Americans have long-term care insurance due to high premiums and complex activation rules.

I am not aware of any high-quality, peer-reviewed studies that have demonstrated the cost effectiveness of long-term care insurance. Yet, for many Americans, paying for care out of pocket is simply not an option.

Medicaid can provide financial support for long-term care but only for older adults with very low income and minimal assets – criteria most Americans don’t meet until they have nearly exhausted their savings.

Those receiving Medicaid to cover the costs of long-term care have essentially no funds for anything other than medical care, room and board. And proposed federal financial cuts may further erode the limited support services available. In Michigan, for example, Medicaid-covered nursing home residents keep only $60 per month for personal needs. If individuals receive monthly income greater than $60 – for instance, from Social Security or a pension – the extra money would go toward the cost of nursing home care.

Those who don’t qualify for Medicaid or cannot afford private care often rely on family and friends for unpaid assistance, but not everyone has such support systems.

A nurse helps an older man shave.
Older adults may end up needing help with day-to-day personal care. Klaus Vedfelt/DigitalVision via Getty Images

Planning for the care you want

Beyond financial planning, older adults can make an advance directive. This is a set of legal documents that outlines preferences for medical care and asset management if a person becomes incapacitated. However, only about 25% of Americans over 50 have completed such documentation.

Without medical and financial powers of attorney in place, state laws determine who makes critical decisions, which may or may not align with a person’s wishes. For instance, an estranged child may have more legal authority over an incapacitated parent than their long-term but unmarried partner. Seniors without clear advocates risk being placed under court-appointed guardianship – a restrictive and often irreversible process.

In addition to completing advance directives, it is important that older adults talk about their wishes with their loved ones. Conversations about disability, serious illness and loss of independence can be difficult, but these discussions allow your loved ones to advocate for you in the event of a health crisis.

Who’s going to care for you?

Finding a caregiver is an important step in making arrangements for aging. If you are planning to rely on family or friends for some care, it helps to discuss this with them ahead of time and to have contingency plans in place. As the Hackman case demonstrates, if a caregiver is suddenly incapacitated, the older adult may be left in immediate danger.

Caregivers experience higher rates of stress, depression and physical illness compared with their peers. This is often exacerbated by financial strain and a lack of support. It helps if the people you will be relying on have expectations in place about their role.

For instance, some people may prefer placement in a facility rather than relying on a loved one if they can no longer use the bathroom independently. Others may wish to remain in their homes as long as this is a feasible option.

Connecting with available resources

There are local and federal initiatives designed to help aging adults find and get the help they need. The Centers for Medicare & Medicaid Services recently launched the GUIDE Model to improve care and quality of life for both those suffering from dementia and their caregivers.

This program connects caregivers with local resources and provides a 24-hour support line for crises. While GUIDE, which stands for Guiding an Improved Dementia Experience, is currently in the pilot stage, it is slowly expanding, and I am hopeful that it will eventually provide enhanced coverage for those living with dementia nationwide.

The Program for All-Inclusive Care of the Elderly helps dual-eligible Medicare and Medicaid recipients remain at home as they age. This program provides comprehensive services including medical care, a day center and home health services.

Area agencies on aging are regionally located and can connect older adults with local resources, based on availability and income, such as meals, transportation and home modifications that help maintain independence.

Unfortunately, all of these programs and others that support older adults are threatened by recent federal budget cuts. The tax breaks and spending cuts bill, which was signed into law in July 2025, will result in progressive reductions to Medicaid funding over the next 10 years. These cuts will decrease the number of individuals eligible for Medicaid and negatively affect how nursing homes are reimbursed.

The government funding bill passed on Nov. 13 extends current Medicare funding through Jan. 30, 2026, at which point Medicare funding may be reduced.

Even as the future of these programs remains uncertain, it’s important for older adults and their caregivers to be intentional in making plans and to familiarize themselves with the resources available to them.

Kahli Zietlow, Physician and Clinical Associate Professor of Geriatrics & Internal Medicine, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.

ooOOoo

This article is a wakeup call for me, because I have no plan in place.

While I think about death more frequently than I used to, the fact that I don’t have a plan is naive: I must get myself to a stage where I have one, and soon! I guess I am not the only person in their 80s without a plan!