Category: Technology

An eclipse seen from space

This is beautiful.

I have always been interested in the space flights of the astronauts. I am sure that I join millions of others who feel the same.

So this article by Deana L. Weibel, Professor of Anthropology at Grand Valley State University, is terrific.

ooOOoo

Seeing an eclipse from Earth is awe‑inspiring – for astronauts seeing one from space, the scene was even more grand

During a total solar eclipse, the Sun is barely visible behind the Moon. Roger Sorensen

Deana L. Weibel, Grand Valley State University

The astronauts on Artemis II’s trip to the Moon in April 2026 didn’t just have an amazing journey through space. They also saw something extraordinary. They were the first humans to see a total solar eclipse from space.

A solar eclipse happens when the Moon moves in front of the Sun. In a total eclipse, the Sun’s central disc is covered completely.

From Earth, the circle of the Sun is about the same size as the circle of the Moon. With the bright circle blocked, you can see the undulating rays of the Sun’s corona, or outer atmosphere, that are normally too dim to be observed.
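As a rough check on that near-perfect match in apparent size, the geometry is simple enough to compute. The sketch below is illustrative only; the diameters and distances are standard average values that I am supplying, not figures from the article:

```python
import math

# Angular diameter of a sphere seen from a distance:
# theta = 2 * atan((diameter / 2) / distance)
def angular_diameter_deg(diameter_km: float, distance_km: float) -> float:
    return math.degrees(2 * math.atan((diameter_km / 2) / distance_km))

# Standard average values (assumptions, not from the article)
SUN_DIAMETER_KM = 1_391_400
SUN_DISTANCE_KM = 149_600_000   # about 1 astronomical unit
MOON_DIAMETER_KM = 3_475
MOON_DISTANCE_KM = 384_400      # average Earth-Moon distance

print(f"Sun:  {angular_diameter_deg(SUN_DIAMETER_KM, SUN_DISTANCE_KM):.3f} degrees")
print(f"Moon: {angular_diameter_deg(MOON_DIAMETER_KM, MOON_DISTANCE_KM):.3f} degrees")
# Both come out close to half a degree, which is why the Moon can
# just barely cover the Sun's disk during a total eclipse.
```

Both figures land near 0.5 degrees; the Moon's elliptical orbit nudges its apparent size slightly above or below the Sun's, which is the difference between a total and an annular eclipse.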

Moon covering most, then all, then most of the Sun
Composite image of moments before, during and after totality. NASA/Aubrey Gemignani

I’m a cultural anthropologist who studies awe-inspiring aspects of space exploration. I have been lucky enough to have seen two total solar eclipses. The first one was in Nebraska in 2017, the second in Indiana in 2024.

During my second total eclipse, the period of totality – that short span when you can remove your protective glasses and look directly at the eclipse – lasted close to 4 minutes. I saw waves of diffuse light snaking around an ink-black hole in the sky. It looked very wrong – almost alien.

On Aug. 12, 2026, there will be another total solar eclipse, visible only from Greenland, Iceland, Spain and the Balearic Islands of the Mediterranean. Some fortunate viewers in Spain and nearby islands may see the eclipse just before sunset, low on the horizon. The Moon illusion, a phenomenon where the Moon looks bigger when it’s near the horizon, might make this eclipse look unusually large.

Unusual eclipse perspectives

Astronauts also occasionally have less common eclipse experiences. I interviewed one I call by the pseudonym “Jackie” in my research about astronauts’ experiences of awe. She was part of an astronaut training group that did a flight exercise during a total solar eclipse.

Jackie and her squad flew their jets in the shadow of the Moon. This lengthened their time in totality because they could follow and stay within the shadow. Jackie was most impressed with how the Sun’s corona seemed to shift and ripple.

“It’s not static … it’s alive,” she told me.
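To put rough numbers on why flying with the shadow stretches totality, here is a minimal sketch. The shadow width and speeds are illustrative round numbers I am assuming; they are not figures from Jackie's flight:

```python
# The Moon's umbra sweeps across the ground at very roughly 2,400 km/h,
# and the patch of totality is on the order of 150 km wide
# (illustrative round numbers, not from the article).
SHADOW_SPEED_KMH = 2_400
SHADOW_WIDTH_KM = 150
JET_SPEED_KMH = 900  # assumed cruise speed of a jet

# Standing still, totality lasts about width / shadow speed.
static_min = SHADOW_WIDTH_KM / SHADOW_SPEED_KMH * 60

# Flying in the shadow's direction lowers the relative speed, so the
# shadow takes longer to overtake the aircraft.
chasing_min = SHADOW_WIDTH_KM / (SHADOW_SPEED_KMH - JET_SPEED_KMH) * 60

print(f"Standing still:      ~{static_min:.1f} minutes of totality")
print(f"Chasing at 900 km/h: ~{chasing_min:.1f} minutes of totality")
```

With these assumed numbers, a jet pacing the shadow stretches totality from under four minutes to about six.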

On April 6, 2026, the astronauts of NASA’s Artemis II mission saw another kind of unusual eclipse as they flew around the Moon. At one point during their flight, the Moon and the spacecraft aligned so that the Moon was directly between them and the Sun, blocking the Sun’s disk in a way that looks very different from what we see on Earth.

Astronaut Victor Glover said it felt like they “just went sci-fi.”

Video: https://www.youtube.com/embed/YLjPci5bo1k?wmode=transparent&start=0 – ‘An impressive sight’: The Artemis II crew were the first humans to observe a solar eclipse from near the Moon.

The astronauts were so close to the Moon that the Moon looked bigger than the Sun and hid more of its bright circle. Earth was also in view, and sunlight reflected from the Earth onto the Moon in a phenomenon NASA calls “earthshine.” This dim light is very similar to the moonlight that shines on the Earth at night.

Imagine the Sun hidden behind the Moon, creating a hazy halo around the Moon’s edges. At the same time, faint light reflected from Earth softly illuminates the Moon, revealing mountains and craters in a dim twilight. Now imagine this striking scene lasting 54 minutes.

This sight was, without a doubt, one of the most unusual eclipses ever seen by human eyes.

Although Artemis’ astronauts are trained to think scientifically, this experience propelled them into a state of awe. They talked openly about how their brains were “not processing” what they observed. While NASA kept them busy with a variety of tasks, the sound of emotion and excitement in their voices as they broadcast live from their lunar flyby was unmistakable.

An eclipse visible from space - the Moon is shown shadowed with some sunlight visible behind it, and part of the Orion capsule shown off to the left.
The Moon during a solar eclipse on April 6, 2026, photographed by one of the Orion spacecraft’s cameras during Artemis II. Earth is reflecting sunlight at the left edge of the Moon, called ‘earthshine.’ NASA

The psychology of awe

Researchers have studied the effects of awe on the human brain, including awe felt during solar eclipses. Moments of wonder like these can transform how you feel and even how you think, making you more thoughtful and open-minded.

In my own work I’ve found these experiences can change how astronauts understand their own place in the universe.

One astronaut said she gained an awareness of the fragility of our planet that now shapes everything she does, while another described becoming more curious after returning to Earth. A third said the awe he experienced in lunar orbit changed his understanding of time and infinity.

Space travel creates many opportunities for awe, but a solar eclipse from behind the Moon, as Mission Commander Reid Wiseman put it, required “20 new superlatives.”

It’s an experience most of the earthbound eclipse-chasers heading to Greenland or Iceland or Spain this summer will only dream about. Whether eclipses happen in space or on Earth, though, close encounters with the grandeur of our universe can make you feel profoundly human.

Deana L. Weibel, Professor of Anthropology, Grand Valley State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

ooOOoo

In this difficult world at present, this is a perfect article. As was written, “… the awe he experienced in lunar orbit changed his understanding of time and infinity.”

Picture Parade Five Hundred and Twenty-One

More NASA images.

And what images.

NASA celebrates Hubble’s 36th anniversary with a new image of the Trifid Nebula, a star-forming region it first captured in 1997. The telescope leveraged almost its full operational lifetime to show us changes in the nebula on human time scales with an improved camera.
NASA, ESA, STScI; Image Processing: Joseph DePasquale (STScI)

There is more information on the NASA website.

Now a YouTube video.

What terrific images from Hubble.

Artemis images

A unique record taken by the crew.

Human-created photos of this historic mission cannot be replaced by artificial intelligence (AI).

This is the reason I am republishing an article from The Conversation.

ooOOoo

Artemis II crew brought a human eye and storytelling vision to the photos they took on their mission

Astronaut Jeremy Hansen takes a picture through the camera shroud covering a window on the Orion spacecraft. NASA

Christye Sisson, Rochester Institute of Technology

In early April 2026, the Artemis II mission captivated me and millions of people watching from across the world. The crew’s courage, skill and infectious wonder served as tangible proof of human persistence and technological achievement, all against the mysterious backdrop of space.

People back on Earth got to witness the mission through remarkable photos of space captured by astronauts. Images created and shared by astronauts underscore how photography builds a powerful, authentic connection that goes beyond what technology alone can capture.

As a photographer and the director of the Rochester Institute of Technology’s School of Photographic Arts and Sciences, I am especially drawn to how these photographs have been at the center of the public’s collective experience of this mission.

In an era when image authenticity is often questioned and with the capabilities of autonomous, AI-driven imaging, NASA’s choice to train astronauts in photography has placed meaning over convenience and prioritized their human perspectives and creativity.

Capturing space from the crew’s perspective

Photography was not originally a high priority in NASA’s Apollo era. The astronauts only took photographs if they had the chance and all their other tasks were complete.

An image of the entire Earth from space.
‘The Blue Marble’ view of the Earth as seen by the Apollo 17 crew in 1972. NASA

Thanks in large part to the public response to those Apollo images – “Earthrise” and the “Blue Marble” are widely credited with helping catalyze the modern environmental movement – NASA shifted its approach, training its astronauts in photographic practices so that photography could help capture the public’s imagination.

The Artemis II mission’s photographs have helped cut through the increasing volume of artificially generated images circulating on social media. NASA’s social media releases of the crew’s photographs have garnered thousands of shares and comments.

This excitement could be explained by the novelty of photos from space, but these images also distinguish themselves as products of astronauts experiencing these sights and interpreting them through their photographs. These differences require an important distinction around where technology ends and humanity begins.

An astronaut looking out the window of the Orion spacecraft, where the full moon is visible in space.
NASA astronaut Reid Wiseman watches the Moon from one of the Orion spacecraft’s windows. NASA

Human perspective versus AI tools

Photography has long integrated AI-powered software and data-driven tools in a variety of ways: to process raw images, fill in missing color information, drive precise focus and guide image editing, among others. These modern technological assists help human photographers realize their vision.

Artificial intelligence is also increasingly capable of operating machinery competently and autonomously, from cars to drones and cameras.

And AI can generate convincing, realistic images and videos from nothing more than a text prompt, using readily available tools.

Researchers train AI to mimic patterns informed by millions of sample images, and the algorithm can then either take or create a photograph based on what it predicts would be the most likely version of a successful, believable image.

Human-created photos are rooted in direct observation, intent and lived experience, while AI images – or choices made by AI-driven tools – are not. While both can produce compelling and believable visuals, the human photographs carry emotional power because the photographer is drawing from their experiences and perspective in that moment to tell an authentic story.

Artemis II photographs resonate, not only because they are historic, but because they reflect the deliberate choices and intent of a human being in that specific moment and context. The exposure, camera setting, lens choice and composition are all dictated by the astronaut’s vision, skill, perspective and experience. Each image is unique in comparison with the others. These choices give the images narrative power, anchoring them in human perspective.

The Earth shown partially shadowed beyond the Moon in space
NASA’s ‘Earthset’ photo captured by the Artemis II crew. NASA

Images to tell a story

Photographers choose what to include in the final version of their image to tell a story. In the Artemis II images, this human perspective comes out. In the “Earthset” photo, you see a striking juxtaposition of the Moon’s monochromatic, textured surface in the foreground against a slivered, bright Earth.

The choice to include both in the frame contrasts these objects literally and figuratively, inviting comparison. It creates a narrative where Earth is contrasted against the Moon – life is contrasted against the absence of it.

Another photo shows the nightside of the whole Earth, featuring the Sun’s halo, auroras and city lights. The choice to include the subtle framing of the window of the capsule in the lower left corner reminds the viewer where and how this image was captured: by a human, inside a capsule, hurtling through space. That detail grounds the photograph in the human perspective.

Both photos are reminiscent of Earthrise and the Blue Marble. These past images hold a place in the global collective consciousness, shaped by a shared historical moment.

The Artemis II photographs are anchored in this collective moment of lived human experience, yet also shaped by each astronaut’s viewpoint. The crew’s unique perspectives exemplify photography’s transformative power by inviting viewers to engage emotionally and intellectually with their journey. These photographs share the astronauts’ awe and wonder and affirm the value of human creativity and its ability to connect us in a captured moment.

Christye Sisson, Professor of Photographic Sciences, Rochester Institute of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

ooOOoo

I am going to repeat a sentence towards the end of the article: “These past images hold a place in the global collective consciousness, shaped by a shared historical moment.”

That global collective consciousness!

Technology and ageing

This article hits home!

I find it very hard to keep current with new technological developments. I am well past being young but still fascinated by science and technology.

Thus, it seemed like one that I should publish.

ooOOoo

Constant technology changes throw seniors a curve – and add to caregivers’ load

Shifting interfaces and frequent updates challenge elders and increase the burdens on people who try to help them. Maskot via Getty Images

Debaleena Chattopadhyay, University of Illinois Chicago

This past Christmas, I helped my parents choose a water filter. The latest “smart” models all came with a smartphone app that promised to monitor filter life, track water quality and automatically request service. Yet my father, age 75, and mother, 67, were quick to reject them in favor of a nondigital model.

“Every time it updates or I forget how to use it, we’ll have to call you,” my dad said.

As an only child living 8,000 miles (12,875 kilometers) away, I didn’t need convincing. My parents are aging in place and don’t need traditional caregiving – they cook, drive and manage their home just fine. Instead, I provide what I call technology caregiving: helping them with their digital activities of daily living, from online banking to booking theater tickets.

But as the tech industry shifts toward artificial intelligence agents and generative user interfaces – promising to make devices smarter than ever – I am bracing for this invisible workload to become heavier, not lighter. In addition to being a technology caregiver, I’m a computer scientist who studies human-computer interaction.

Technology caregiving

Technology caregiving is the act of helping someone use digital tools. While this isn’t entirely new – people have long helped grandparents program VCRs and connect parents’ desktop computers to the internet – the stakes have changed.

Today, digitization is ubiquitous. Helping with these tools is no longer just occasional unpaid tech support – it is a form of continuous caregiving essential for maintaining independence. For example, even the simple act of clipping coupons has gone digital – marginalizing older adults who are unable to navigate store apps to access these discounts.

People often view older adults as resistant to technology, but recent years – particularly since the COVID-19 pandemic – have shattered that myth. While gaps in internet access and device ownership remain, they are no longer major barriers to technology access.

an older woman uses a laptop computer at a table
Today’s seniors are not tech-averse, but constant updates and interface changes make using technology more difficult for them. Jose Luis Raota/Moment via Getty Images

The emerging crisis is not about access, but effective use. Many older adults are now online and willing to use these tools, but they require frequent help from family, friends or communities.

The innovation tax

The problem isn’t just that devices and apps are getting complex; it’s that they are constantly changing. Frequent software updates and shifting interfaces can be frustrating for all users, but they turn familiar tools into foreign concepts for older adults.

This unpredictability is about to accelerate. Take generative user interfaces, which designers can use to dynamically generate an interface in minutes. Pair them with AI agents, and the system can assume the designer’s role, taking independent actions based on how it perceives a user’s intent or need.

If the “Pay Bill” button is in a different place every third time you open a particular app because an AI decided to optimize the interface, you might feel perpetually incompetent if you can’t quickly locate it. While the industry calls this personalization, for an older adult it is a moving target.

This relentless pace of change – even when intended to be helpful – is directly at odds with age-related cognitive changes. And this dynamic is continuing with the new generation of seniors. They may be more eager to adopt new tools than the last, but wanting to use technology is not the same as being able to use it when the rules are constantly changing.

To navigate a brand new or shifting interface, your brain relies on fluid intelligence: the ability to reason, solve novel problems and ignore distractions on the fly. Unlike the knowledge that people accumulate over time, fluid intelligence naturally declines with age.

When an app updates or an AI optimizes a layout, it forces the user to discard their hard-won mental models and start over. For an older adult, this isn’t just a minor inconvenience; it is a taxing job for their working memory.

As an older adult participant in a study my colleagues and I conducted put it:

“I had a computer on my desk in 1980, OK, when nobody else did. So this is not a foreign language, but the changes that are made with little to no explanation and then things that you knew how to do have either changed or disappeared completely, that is the stuff that absolutely drives me, and I will tell you, every other older adult in America nuts.”

Help the helper

I believe that the way forward is to stop treating tech support as an afterthought and start designing for the technology caregiver. Digital literacy training for seniors and designing technologies for all users are important but not enough; it’s important to build tools that share the burden.

Two promising paths are emerging. First, cognitive accessibility features – like AI assistants that find buried buttons or provide real-time tech support – can offload tasks from the caregiver. Second, tools for caregivers are beginning to move beyond simply controlling device feature access to capabilities such as allowing authorized access for banking as co-users, or recording personalized instructions.

These tools will also need to be tailored: Family caregivers need different tools than community helpers like libraries and senior centers.

In the age of AI, innovation shouldn’t be a tax on the aging brain – it should help bridge the digital divide.

Debaleena Chattopadhyay, Assistant Professor of Computer Science, University of Illinois Chicago

This article is republished from The Conversation under a Creative Commons license. Read the original article.

ooOOoo

I like the idea of having a technology caregiver. I like the idea very much.

Other stars, other worlds

The science of looking at other worlds is amazing.

With so much going wrong, primarily politically, in the world, I just love turning to news about distant places; and by distant I mean hugely so. That is why I am republishing this item from The Conversation about other stars.

ooOOoo

NASA’s Pandora telescope will study stars in detail to learn about the exoplanets orbiting them

A new NASA mission will study exoplanets around distant stars. European Space Agency, CC BY-SA

Daniel Apai, University of Arizona

On Jan. 11, 2026, I watched anxiously from the tightly controlled Vandenberg Space Force Base in California as an awe-inspiring SpaceX Falcon 9 rocket carried NASA’s new exoplanet telescope, Pandora, into orbit.

Exoplanets are worlds that orbit other stars. They are very difficult to observe because – seen from Earth – they appear as extremely faint dots right next to their host stars, which are millions to billions of times brighter and drown out the light reflected by the planets. The Pandora telescope will join and complement NASA’s James Webb Space Telescope in studying these faraway planets and the stars they orbit.

I am an astronomy professor at the University of Arizona who specializes in studies of planets around other stars and astrobiology. I am a co-investigator of Pandora and leading its exoplanet science working group. We built Pandora to shatter a barrier – to understand and remove a source of noise in the data – that limits our ability to study small exoplanets in detail and search for life on them.

Observing exoplanets

Astronomers have a trick to study exoplanet atmospheres. By observing the planets as they orbit in front of their host stars, we can study starlight that filters through their atmospheres.

These planetary transit observations are similar to holding a glass of red wine up to a candle: The light filtering through will show fine details that reveal the quality of the wine. By analyzing starlight filtered through the planets’ atmospheres, astronomers can find evidence for water vapor, hydrogen, clouds and even search for evidence of life. Researchers improved transit observations in 2002, opening an exciting window to new worlds.

When a planet passes in front of its star, astronomers can measure the dip in brightness, and see how the light filtering through the planet’s atmosphere changes.
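The size of that dip follows from simple geometry: the planet blocks a fraction of the star's disk equal to the ratio of their projected areas. A minimal sketch, using standard radii that I am supplying for illustration:

```python
# Transit depth: the fractional dip in starlight is (R_planet / R_star)^2,
# the ratio of the projected areas of the planet and the star.
def transit_depth(r_planet_km: float, r_star_km: float) -> float:
    return (r_planet_km / r_star_km) ** 2

# Standard radii (assumed for illustration)
R_SUN_KM = 695_700
R_JUPITER_KM = 69_911
R_EARTH_KM = 6_371

print(f"Jupiter-size planet, Sun-like star: {transit_depth(R_JUPITER_KM, R_SUN_KM):.2%}")
print(f"Earth-size planet, Sun-like star:   {transit_depth(R_EARTH_KM, R_SUN_KM):.4%}")
# Roughly 1% for a Jupiter and 0.008% for an Earth: small planets
# demand exquisitely stable measurements, which is why noise from the
# star itself matters so much.
```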

For a while, it seemed to work perfectly. But, starting from 2007, astronomers noted that starspots – cooler, active regions on the stars – may disturb the transit measurements.

In 2018 and 2019, then-Ph.D. student Benjamin V. Rackham, astrophysicist Mark Giampapa and I published a series of studies showing how darker starspots and brighter, magnetically active stellar regions can seriously mislead exoplanet measurements. We dubbed this problem “the transit light source effect.”

Most stars are spotted, active and change continuously. Ben, Mark and I showed that these changes alter the signals from exoplanets. To make things worse, some stars also have water vapor in their upper layers – often more prominent in starspots than outside of them. That and other gases can confuse astronomers, who may think that they found water vapor in the planet.
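In rough terms, the distortion takes a simple form. The expression below is a sketch of the general shape of the effect, with symbols I am defining here for illustration rather than quoting from the published papers: if unocculted starspots cover a fraction f of the stellar disk, the observed transit depth is inflated relative to the true depth.

```latex
% f: spot covering fraction of the stellar disk
% S_spot / S_phot: spot-to-photosphere flux ratio at wavelength lambda
D_\mathrm{obs}(\lambda) \approx
  \frac{D_\mathrm{true}(\lambda)}
       {1 - f\left(1 - \dfrac{S_\mathrm{spot}(\lambda)}{S_\mathrm{phot}(\lambda)}\right)}
```

Because spots are cooler and dimmer than the surrounding photosphere, the denominator is less than 1, and because the flux ratio varies with wavelength, the bias is wavelength-dependent – exactly the kind of signal that can masquerade as atmospheric features such as water vapor.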

In our papers – published three years before the 2021 launch of the James Webb Space Telescope – we predicted that Webb would not be able to reach its full potential. We sounded the alarm bell. Astronomers realized that we were trying to judge our wine in the light of flickering, unstable candles.

The birth of Pandora

For me, Pandora began with an intriguing email from NASA in 2018. Two prominent scientists from NASA’s Goddard Space Flight Center, Elisa Quintana and Tom Barclay, asked to chat. They had an unusual plan: They wanted to build a space telescope very quickly to help tackle stellar contamination – in time to assist Webb. This was an exciting idea, but also very challenging. Space telescopes are very complex, and not something that you would normally want to put together in a rush.

The Pandora spacecraft with an exoplanet and two stars in the background
Artist’s concept of NASA’s Pandora Space Telescope. NASA’s Goddard Space Flight Center/Conceptual Image Lab, CC BY

Pandora breaks with NASA’s conventional model. We proposed and built Pandora faster and at a significantly lower cost than is typical for NASA missions. Our approach meant keeping the mission simple and accepting somewhat higher risks.

What makes Pandora special?

Pandora is smaller and cannot collect as much light as its bigger brother Webb. But Pandora will do what Webb cannot: It will be able to patiently observe stars to understand how their complex atmospheres change.

By staring at a star for 24 hours with visible and infrared cameras, it will measure subtle changes in the star’s brightness and colors. When active regions in the star rotate in and out of view, and starspots form, evolve and dissipate, Pandora will record them. While Webb very rarely returns to the same planet in the same instrument configuration and almost never monitors their host stars, Pandora will revisit its target stars 10 times over a year, spending over 200 hours on each of them.

Video: https://www.youtube.com/embed/Inxe5Bgarj0?wmode=transparent&start=0 – NASA’s Pandora mission will revolutionize the study of exoplanet atmospheres.

With that information, our Pandora team will be able to figure out how the changes in the stars affect the observed planetary transits. Like Webb, Pandora will observe the planetary transit events, too. By combining data from Pandora and Webb, our team will be able to understand what exoplanet atmospheres are made of in more detail than ever before.

After the successful launch, Pandora is now circling Earth about every 90 minutes. Pandora’s systems and functions are now being tested thoroughly by Blue Canyon Technologies, Pandora’s primary builder.

About a week after launch, control of the spacecraft will transition to the University of Arizona’s Multi-Mission Operation Center in Tucson, Arizona. Then the work of our science teams begins in earnest and we will begin capturing starlight filtered through the atmospheres of other worlds – and see them with a new, steady eye.

Daniel Apai, Associate Dean for Research and Professor of Astronomy and Planetary Sciences, University of Arizona

This article is republished from The Conversation under a Creative Commons license. Read the original article.

ooOOoo

It may not be for everyone, but I find this news from NASA incredible. Well done, The Conversation, for publishing this article.

The downside of technology

A recent article in The Conversation prompted today’s post.

More and more, I am concerned about some of the directions we are heading.

ooOOoo

Deepfakes leveled up in 2025 – here’s what’s coming next

AI image and video generators now produce fully lifelike content. AI-generated image by Siwei Lyu using Google Gemini 3

Siwei Lyu, University at Buffalo

Over the course of 2025, deepfakes improved dramatically. AI-generated faces, voices and full-body performances that mimic real people increased in quality far beyond what even many experts expected would be the case just a few years ago. They were also increasingly used to deceive people.

For many everyday scenarios — especially low-resolution video calls and media shared on social media platforms — their realism is now high enough to reliably fool nonexpert viewers. In practical terms, synthetic media have become indistinguishable from authentic recordings for ordinary people and, in some cases, even for institutions.

And this surge is not limited to quality. The volume of deepfakes has grown explosively: Cybersecurity firm DeepStrike estimates an increase from roughly 500,000 online deepfakes in 2023 to about 8 million in 2025, with annual growth nearing 900%.

I’m a computer scientist who researches deepfakes and other synthetic media. From my vantage point, I see that the situation is likely to get worse in 2026 as deepfakes become synthetic performers capable of reacting to people in real time.

Dramatic improvements

Several technical shifts underlie this dramatic escalation. First, video realism made a significant leap thanks to video generation models designed specifically to maintain temporal consistency. These models produce videos that have coherent motion, consistent identities of the people portrayed, and content that makes sense from one frame to the next. The models disentangle the information related to representing a person’s identity from the information about motion so that the same motion can be mapped to different identities, or the same identity can have multiple types of motions.

These models produce stable, coherent faces without the flicker, warping or structural distortions around the eyes and jawline that once served as reliable forensic evidence of deepfakes.

Second, voice cloning has crossed what I would call the “indistinguishable threshold.” A few seconds of audio now suffice to generate a convincing clone – complete with natural intonation, rhythm, emphasis, emotion, pauses and breathing noise. This capability is already fueling large-scale fraud. Some major retailers report receiving over 1,000 AI-generated scam calls per day. The perceptual tells that once gave away synthetic voices have largely disappeared.

Third, consumer tools have pushed the technical barrier almost to zero. Upgrades from OpenAI’s Sora 2 and Google’s Veo 3 and a wave of startups mean that anyone can describe an idea, let a large language model such as OpenAI’s ChatGPT or Google’s Gemini draft a script, and generate polished audio-visual media in minutes. AI agents can automate the entire process. The capacity to generate coherent, storyline-driven deepfakes at a large scale has effectively been democratized.

This combination of surging quantity and personas that are nearly indistinguishable from real humans creates serious challenges for detecting deepfakes, especially in a media environment where people’s attention is fragmented and content moves faster than it can be verified. There has already been real-world harm – from misinformation to targeted harassment and financial scams – enabled by deepfakes that spread before people have a chance to realize what’s happening.

Video: https://www.youtube.com/embed/syNN38cu3Vw?wmode=transparent&start=0 – AI researcher Hany Farid explains how deepfakes work and how good they’re getting.

The future is real time

Looking forward, the trajectory for next year is clear: Deepfakes are moving toward real-time synthesis that can produce videos that closely resemble the nuances of a human’s appearance, making it easier for them to evade detection systems. The frontier is shifting from static visual realism to temporal and behavioral coherence: models that generate live or near-live content rather than pre-rendered clips.

Identity modeling is converging into unified systems that capture not just how a person looks, but how they move, sound and speak across contexts. The result goes beyond “this resembles person X,” to “this behaves like person X over time.” I expect entire video-call participants to be synthesized in real time; interactive AI-driven actors whose faces, voices and mannerisms adapt instantly to a prompt; and scammers deploying responsive avatars rather than fixed videos.

As these capabilities mature, the perceptual gap between synthetic and authentic human media will continue to narrow. The meaningful line of defense will shift away from human judgment. Instead, it will depend on infrastructure-level protections. These include secure provenance such as media signed cryptographically, and AI content tools that use the Coalition for Content Provenance and Authenticity specifications. It will also depend on multimodal forensic tools such as my lab’s Deepfake-o-Meter.
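The core idea behind cryptographically signed media is straightforward: sign the media bytes when they are captured, and verify the signature before trusting them. The sketch below, using an Ed25519 keypair from Python's cryptography library, is my own simplification of the signing concept; it is not the C2PA specification or any production implementation:

```python
# Minimal sketch of media provenance via digital signatures.
# Requires: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# At capture time, the camera or capture app holds a private key
# and signs the media bytes.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

media_bytes = b"...raw image or video bytes..."
signature = private_key.sign(media_bytes)

# Later, anyone holding the public key can confirm the bytes are
# unchanged since signing.
try:
    public_key.verify(signature, media_bytes)
    print("Signature valid: media unchanged since capture.")
except InvalidSignature:
    print("Signature invalid: media was altered.")

# Any edit, however small, breaks verification.
try:
    public_key.verify(signature, media_bytes + b"x")
except InvalidSignature:
    print("Tampered copy correctly rejected.")
```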

Simply looking harder at pixels will no longer be adequate.

Siwei Lyu, Professor of Computer Science and Engineering; Director, UB Media Forensic Lab, University at Buffalo

This article is republished from The Conversation under a Creative Commons license. Read the original article.

ooOOoo

I hope with all my heart that lines of defense will rise to the challenge.

Found on Easter Island

Amazing what science can find out.

But while the science is brilliant the social implications are not so good. Read on!

ooOOoo

A billion-dollar drug was found in Easter Island soil – what scientists and companies owe the Indigenous people they studied

The Rapa Nui people are mostly invisible in the origin story of rapamycin. Posnov/Moment via Getty Images

Ted Powers, University of California, Davis

An antibiotic discovered on Easter Island in 1964 sparked a billion-dollar pharmaceutical success story. Yet the history told about this “miracle drug” has completely left out the people and politics that made its discovery possible.

Named after the island’s Indigenous name, Rapa Nui, the drug rapamycin was initially developed as an immunosuppressant to prevent organ transplant rejection and to improve the efficacy of stents to treat coronary artery disease. Its use has since expanded to treat various types of cancer, and researchers are currently exploring its potential to treat diabetes, neurodegenerative diseases and even aging. Indeed, studies raising rapamycin’s promise to extend lifespan or combat age-related diseases seem to be published almost daily. A PubMed search reveals over 59,000 journal articles that mention rapamycin, making it one of the most talked-about drugs in medicine.

Connected hexagonal structures
Chemical structure of rapamycin. Fvasconcellos/Wikimedia Commons

At the heart of rapamycin’s power lies its ability to inhibit a protein called the target of rapamycin kinase, or TOR. This protein acts as a master regulator of cell growth and metabolism. Together with other partner proteins, TOR controls how cells respond to nutrients, stress and environmental signals, thereby influencing major processes such as protein synthesis and immune function. Given its central role in these fundamental cellular activities, it is not surprising that cancer, metabolic disorders and age-related diseases are linked to the malfunction of TOR.

Despite being so ubiquitous in science and medicine, how rapamycin was discovered has remained largely unknown to the public. Many in the field are aware that scientists from the pharmaceutical company Ayerst Research Laboratories isolated the molecule from a soil sample containing the bacterium Streptomyces hygroscopicus in the mid-1970s. What is less well known is that this soil sample was collected as part of a Canadian-led mission to Rapa Nui in 1964, called the Medical Expedition to Easter Island, or METEI.

As a scientist who built my career around the effects of rapamycin on cells, I felt compelled to understand and share the human story underlying its origin. Learning about historian Jacalyn Duffin’s work on METEI completely changed how I and many of my colleagues view our own field.

Unearthing rapamycin’s complex legacy raises important questions about systemic bias in biomedical research and what pharmaceutical companies owe to the Indigenous lands from which they mine their blockbuster discoveries.

History of METEI

The Medical Expedition to Easter Island was the brainchild of a Canadian team composed of surgeon Stanley Skoryna and bacteriologist Georges Nogrady. Their goal was to study how an isolated population adapted to environmental stress, and they believed the planned construction of an international airport on Easter Island offered a unique opportunity. They presumed that the airport would result in increased outside contact with the island’s population, resulting in changes in their health and wellness.

With funding from the World Health Organization and logistical support from the Royal Canadian Navy, METEI arrived in Rapa Nui in December 1964. Over the course of three months, the team conducted medical examinations on nearly all 1,000 island inhabitants, collecting biological samples and systematically surveying the island’s flora and fauna.

It was as part of these efforts that Nogrady gathered over 200 soil samples, one of which ended up containing the rapamycin-producing Streptomyces strain of bacteria.

It’s important to realize that the expedition’s primary objective was to study the Rapa Nui people as a sort of living laboratory. They encouraged participation through bribery by offering gifts, food and supplies, and through coercion by enlisting a long-serving Franciscan priest on the island to aid in recruitment. While the researchers’ intentions may have been honorable, it is nevertheless an example of scientific colonialism, where a team of white investigators choose to study a group of predominantly nonwhite subjects without their input, resulting in a power imbalance.

There was an inherent bias in the inception of METEI. For one, the researchers assumed the Rapa Nui had been relatively isolated from the rest of the world when there was in fact a long history of interactions with countries outside the island, beginning with reports from the early 1700s through the late 1800s.

METEI also assumed that the Rapa Nui were genetically homogeneous, ignoring the island’s complex history of migration, slavery and disease. For example, the modern population of Rapa Nui are mixed race, from both Polynesian and South American ancestors. The population also included survivors of the African slave trade who were returned to the island and brought with them diseases, including smallpox.

This miscalculation undermined one of METEI’s key research goals: to assess how genetics affect disease risk. While the team published a number of studies describing the different fauna associated with the Rapa Nui, their inability to establish a baseline is likely one reason there was no follow-up study after the completion of the airport on Easter Island in 1967.

Giving credit where it is due

Omissions in the origin stories of rapamycin reflect common ethical blind spots in how scientific discoveries are remembered.

Georges Nogrady carried soil samples back from Rapa Nui, one of which eventually reached Ayerst Research Laboratories. There, Surendra Sehgal and his team isolated what was named rapamycin, ultimately bringing it to market in the late 1990s as the immunosuppressant Rapamune. While Sehgal’s persistence was key in keeping the project alive through corporate upheavals – going as far as to stash a culture at home – neither Nogrady nor the METEI was ever credited in his landmark publications.

Although rapamycin has generated billions of dollars in revenue, the Rapa Nui people have received no financial benefit to date. This raises questions about Indigenous rights and biopiracy, which is the commercialization of Indigenous knowledge.

Agreements like the United Nations’s 1992 Convention on Biological Diversity and the 2007 Declaration on the Rights of Indigenous Peoples aim to protect Indigenous claims to biological resources by encouraging countries to obtain consent and input from Indigenous people and provide redress for potential harms before starting projects. However, these principles were not in place during METEI’s time.

Close-up headshots of row of people wearing floral headdresses in a dim room
The Rapa Nui have received little to no acknowledgment for their role in the discovery of rapamycin. Esteban Felix/AP Photo

Some argue that because the bacterium that produces rapamycin has since been found in other locations, Easter Island’s soil was not uniquely essential to the drug’s discovery. Moreover, because the islanders did not use rapamycin or even know about its presence on the island, some have countered that it is not a resource that can be “stolen.”

However, the discovery of rapamycin on Rapa Nui set the foundation for all subsequent research and commercialization around the molecule, and this only happened because the people were the subjects of study. Formally recognizing and educating the public about the essential role the Rapa Nui played in the eventual discovery of rapamycin is key to compensating them for their contributions.

In recent years, the broader pharmaceutical industry has begun to recognize the importance of fair compensation for Indigenous contributions. Some companies have pledged to reinvest in communities where valuable natural products are sourced. However, for the Rapa Nui, pharmaceutical companies that have directly profited from rapamycin have not yet made such an acknowledgment.

Ultimately, METEI is a story of both scientific triumph and social ambiguities. While the discovery of rapamycin has transformed medicine, the expedition’s impact on the Rapa Nui people is more complicated. I believe issues of biomedical consent, scientific colonialism and overlooked contributions highlight the need for a more critical examination and awareness of the legacy of breakthrough scientific discoveries.

Ted Powers, Professor of Molecular and Cellular Biology, University of California, Davis

This article is republished from The Conversation under a Creative Commons license. Read the original article.

ooOOoo

Ted Powers explains in the last paragraph: “Ultimately, METEI is a story of both scientific triumph and social ambiguities.” Then goes on to say: “I believe issues of biomedical consent, scientific colonialism and overlooked contributions highlight the need for a more critical examination and awareness of the legacy of breakthrough scientific discoveries.”

If only it were that simple!

That magical night sky

Or more to the point of this article: Dark Matter.

Along with huge numbers of other people, I have long been interested in the Universe. Thus this article from The Conversation seemed a good one to share with you.

ooOOoo

When darkness shines: How dark stars could illuminate the early universe

NASA’s James Webb Space Telescope has spotted some potential dark star candidates. NASA, ESA, CSA, and STScI

Alexey A. Petrov, University of South Carolina

Scientists working with the James Webb Space Telescope discovered three unusual astronomical objects in early 2025, which may be examples of dark stars. The concept of dark stars has existed for some time and could alter scientists’ understanding of how ordinary stars form. However, their name is somewhat misleading.

“Dark stars” is one of those unfortunate names that, on the surface, does not accurately describe the objects it represents. Dark stars are not exactly stars, and they are certainly not dark.

Still, the name captures the essence of this phenomenon. The “dark” in the name refers not to how bright these objects are, but to the process that makes them shine — driven by a mysterious substance called dark matter. The sheer size of these objects makes it difficult to classify them as stars.

As a physicist, I’ve been fascinated by dark matter, and I’ve been trying to find a way to see its traces using particle accelerators. I’m curious whether dark stars could provide an alternative method to find dark matter.

What makes dark matter dark?

Dark matter, which makes up approximately 27% of the universe but cannot be directly observed, is a key idea behind the phenomenon of dark stars. Astrophysicists have studied this mysterious substance for nearly a century, yet we haven’t seen any direct evidence of it besides its gravitational effects. So, what makes dark matter dark?

A pie chart showing the composition of the universe. The largest proportion is 'dark energy,' at 68%, while dark matter makes up 27% and normal matter 5%. The rest is neutrinos, free hydrogen and helium and heavy elements.
Despite physicists not knowing much about it, dark matter makes up around 27% of the universe. Visual Capitalist/Science Photo Library via Getty Images

Humans primarily observe the universe by detecting electromagnetic waves emitted by or reflected off various objects. For instance, the Moon is visible to the naked eye because it reflects sunlight. Atoms on the Moon’s surface absorb photons – the particles of light – sent from the Sun, causing electrons within atoms to move and send some of that light toward us.

More advanced telescopes detect electromagnetic waves beyond the visible spectrum, such as ultraviolet, infrared or radio waves. They use the same principle: Electrically charged components of atoms react to these electromagnetic waves. But how can they detect a substance – dark matter – that not only has no electric charge but also has no electrically charged components?

Although scientists don’t know the exact nature of dark matter, many models suggest that it is made up of electrically neutral particles – those without an electric charge. This trait makes it impossible to observe dark matter in the same way that we observe ordinary matter.

Dark matter is thought to be made of particles that are their own antiparticles. Antiparticles are the “mirror” versions of particles. They have the same mass but opposite electric charge and other properties. When a particle encounters its antiparticle, the two annihilate each other in a burst of energy.

If dark matter particles are their own antiparticles, they would annihilate upon colliding with each other, potentially releasing large amounts of energy. Scientists predict that this process plays a key role in the formation of dark stars, as long as the density of dark matter particles inside these stars is sufficiently high. The dark matter density determines how often dark matter particles encounter, and annihilate, each other. If the dark matter density inside dark stars is high, they would annihilate frequently.
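In standard notation, the annihilation rate per unit volume scales with the square of the particle number density. This is a general scaling sketch, with symbols defined here rather than drawn from the article:

```latex
% n_chi: number density of dark matter particles
% rho_chi: mass density; m_chi: particle mass
% <sigma v>: thermally averaged annihilation cross-section
\frac{dN_\mathrm{ann}}{dV\,dt} \propto n_\chi^{2}\,\langle\sigma v\rangle
  = \left(\frac{\rho_\chi}{m_\chi}\right)^{2}\langle\sigma v\rangle
```

Doubling the density quadruples the heating rate, which is why this mechanism is expected to matter only in the dense dark matter environments of the early universe.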

What makes a dark star shine?

The concept of dark stars stems from a fundamental yet unresolved question in astrophysics: How do stars form? In the widely accepted view, clouds of primordial hydrogen and helium — the chemical elements formed in the first minutes after the Big Bang, approximately 13.8 billion years ago — collapsed under gravity. They heated up and initiated nuclear fusion, which formed heavier elements from the hydrogen and helium. This process led to the formation of the first generation of stars.

Two bright clouds of gas condensing around a small central region
Stars form when clouds of dust collapse inward and condense around a small, bright, dense core. NASA, ESA, CSA, and STScI, J. DePasquale (STScI), CC BY-ND

In the standard view of star formation, dark matter is seen as a passive element that merely exerts a gravitational pull on everything around it, including primordial hydrogen and helium. But what if dark matter had a more active role in the process? That’s exactly the question a group of astrophysicists raised in 2008.

In the dense environment of the early universe, dark matter particles would collide with, and annihilate, each other, releasing energy in the process. This energy could heat the hydrogen and helium gas, preventing it from further collapse and delaying, or even preventing, the typical ignition of nuclear fusion.

The outcome would be a starlike object — but one powered by dark matter heating instead of fusion. Unlike regular stars, these dark stars might live much longer because they would continue to shine as long as they attracted dark matter. This trait would make them distinct from ordinary stars, as their cooler temperature would result in lower emissions of various particles.

Can we observe dark stars?

Several unique characteristics help astronomers identify potential dark stars. First, these objects must be very old. As the universe expands, the frequency of light coming from objects far away from Earth decreases, shifting toward the infrared end of the electromagnetic spectrum, meaning it gets “redshifted.” The oldest objects appear the most redshifted to observers.
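That stretching is conventionally quantified by the redshift z, defined from the ratio of observed to emitted wavelength (a standard definition, given here for context):

```latex
1 + z = \frac{\lambda_\mathrm{observed}}{\lambda_\mathrm{emitted}}
```

For example, hydrogen's Lyman-alpha line is emitted in the ultraviolet at 121.6 nanometers; from an object at z = 10 it arrives stretched by a factor of 11, to about 1.34 micrometers, deep in the infrared range where the James Webb Space Telescope observes.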

Since dark stars form from primordial hydrogen and helium, they are expected to contain little to no heavier elements, such as oxygen. They would be very large and cooler on the surface, yet highly luminous because their size — and the surface area emitting light — compensates for their lower surface brightness.

They are also expected to be enormous, with radii of about tens of astronomical units – one astronomical unit being the average distance between Earth and the Sun. Some supermassive dark stars are theorized to reach masses of roughly 10,000 to 10 million times that of the Sun, depending on how much dark matter and hydrogen or helium gas they can accumulate during their growth.
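To see how sheer size can compensate for a cool surface, here is a minimal sketch using the Stefan-Boltzmann law. The radius and temperature are illustrative values I am assuming within the ranges mentioned above, not predictions from any specific dark star model:

```python
import math

SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
AU_M = 1.496e11     # one astronomical unit in meters
L_SUN_W = 3.828e26  # solar luminosity in watts

# Blackbody luminosity: L = 4 * pi * R^2 * sigma * T^4
def luminosity_in_suns(radius_m: float, temp_k: float) -> float:
    return 4 * math.pi * radius_m**2 * SIGMA * temp_k**4 / L_SUN_W

# Assumed dark-star parameters: radius ~10 AU, a relatively cool
# ~8,000 K surface. The result is tens of millions of solar
# luminosities despite the modest temperature.
print(f"Dark star: {luminosity_in_suns(10 * AU_M, 8_000):.1e} solar luminosities")

# The Sun itself, for comparison (radius 6.957e8 m, ~5,772 K):
print(f"Sun:       {luminosity_in_suns(6.957e8, 5_772):.2f} solar luminosities")
```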

So, have astronomers observed dark stars? Possibly. Data from the James Webb Space Telescope has revealed some very high-redshift objects that seem brighter — and possibly more massive — than what scientists expect of typical early galaxies or stars. These results have led some researchers to propose that dark stars might explain these objects.

Artist's impression of the James Webb telescope, which has a hexagonal mirror made up of smaller hexagons, and sits on a rhombus-shaped spacecraft.
The James Webb Space Telescope, shown in this illustration, detects light coming from objects in the universe. Northrop Grumman/NASA

In particular, a recent study analyzing James Webb Space Telescope data identified three candidates consistent with supermassive dark star models. Researchers looked at how much helium these objects contained to identify them. Since it is dark matter annihilation that heats up those dark stars, rather than nuclear fusion turning helium into heavier elements, dark stars should have more helium.

The researchers highlight that one of these objects indeed exhibited a potential “smoking gun” helium absorption signature: a far higher helium abundance than one would expect in typical early galaxies.

Dark stars may explain early black holes

What happens when a dark star runs out of dark matter? It depends on the size of the dark star. For the lightest dark stars, the depletion of dark matter would mean gravity compresses the remaining hydrogen, igniting nuclear fusion. In this case, the dark star would eventually become an ordinary star, so some stars may have begun as dark stars.

Supermassive dark stars are even more intriguing. At the end of their lifespan, a dead supermassive dark star would collapse directly into a black hole. This black hole could start the formation of a supermassive black hole, like the kind astronomers observe at the centers of galaxies, including our own Milky Way.

Dark stars might also explain how supermassive black holes formed in the early universe. They could shed light on some unique black holes observed by astronomers. For example, a black hole in the galaxy UHZ-1 has a mass approaching 10 million solar masses, and is very old – it formed just 500 million years after the Big Bang. Traditional models struggle to explain how such massive black holes could form so quickly.
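A quick worked estimate shows the tension. The sketch below assumes standard Eddington-limited growth with a roughly 50-million-year e-folding (Salpeter) time and a ~100-solar-mass stellar-remnant seed; these are illustrative textbook values, not numbers from the article:

```python
import math

# Eddington-limited growth: M(t) = M_seed * exp(t / t_salpeter),
# with an e-folding time of roughly 50 million years for a standard
# ~10% radiative efficiency (assumed values for illustration).
T_SALPETER_MYR = 50

def growth_time_myr(seed_msun: float, target_msun: float) -> float:
    return T_SALPETER_MYR * math.log(target_msun / seed_msun)

# A ~100 solar-mass seed growing to the ~10 million solar masses
# observed in UHZ-1:
needed = growth_time_myr(100, 1e7)
print(f"~{needed:.0f} million years needed, vs ~500 million available")
# Continuous Eddington-limited growth takes longer than the time
# available, which is why massive seeds - such as the direct-collapse
# remnants of supermassive dark stars - are an attractive shortcut.
```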

The idea of dark stars is not universally accepted. These dark star candidates might still turn out just to be unusual galaxies. Some astrophysicists argue that matter accretion — a process in which massive objects pull in surrounding matter — alone can produce massive stars, and that studies using observations from the James Webb telescope cannot distinguish between massive ordinary stars and less dense, cooler dark stars.

Researchers emphasize that they will need more observational data and theoretical advancements to solve this mystery.

Alexey A. Petrov, Professor of physics and astronomy, University of South Carolina

This article is republished from The Conversation under a Creative Commons license. Read the original article.

ooOOoo

Alexey Petrov says at the end of the article that more observations are required before we humans know all the answers. I have no doubt that in time we will have the answers.

Me sharing a political interview

It is not something I have done before.

This is a blog mainly about dogs, but it takes in many different subjects as well. For example, I am very interested in the formation of the planet; see the post coming up soon.

However, my good buddy, Dan Gomez, a Californian, sent me a link to an interview, and I quote “After the Israel-Hamas deal was signed earlier this month, Jared Kushner and Steve Witkoff, President Trump’s envoys and the leading brokers of the agreement, sat down with Lesley Stahl to discuss their unconventional deal-driven approach.”

It is a 60 Minutes interview.

I found it most interesting and completely at odds with the majority of all types of media that think that President Trump is despicable.

My view of politicians of democracies is that 99% of them are talkers. Presumably, Trump is a doer.

I would be interested to hear what others think, especially those who were born in the U.S.A.