Graham Greene, the Canadian First Nations actor who starred in films including Dances With Wolves, has died aged 73, his manager says.
“It is with deep sadness we announce the peaceful passing of award-winning legendary Canadian actor Graham Greene,” Gerry Jordan said in a statement to CBC News. The outlet reported he died of natural causes.
Greene scored an Academy Award nomination for Best Supporting Actor for his role in Kevin Costner’s 1990 epic western, where he played Kicking Bird.
He was a member of the Oneida Nation, part of the Six Nations Reserve in southern Ontario.
Greene worked as a draftsman, civil technologist, steelworker and rock-band crew member before starting his career in theatre in the UK in the 1970s.
In a 2012 interview with Canadian publication Playback, he credited theatre with giving him a grounding for acting.
“It helps you build a character. When you get into film you don’t have that luxury. The discipline of theatre is what I recommend to all actors.”
In the same interview, he said a key moment for him came when he married his wife Hilary Blackmore, which led to “the best time of my life”.
His breakthrough came in 1990 when he played Kicking Bird, a Lakota medicine man, in Dances With Wolves. Greene won widespread acclaim for the role.
He also appeared in the 1992 western thriller Thunderheart, playing tribal officer Walter Crow Horse.
In the 1999 fantasy drama The Green Mile, Greene played Arlen Bitterbuck, a Native American man on death row in prison.
He also starred in Die Hard With A Vengeance (1995), Maverick (1994), The Twilight Saga: New Moon (2009) and Wind River (2017).
He picked up numerous awards throughout his storied career, including the Earle Grey Award for Lifetime Achievement from the Academy of Canadian Cinema and Television in 2004.
In 2016, he was inducted into the Order of Canada, the country’s second highest civilian honour.
An interesting article about the benefits of being active.
I try and stay as active as I can, mainly by bicycle riding. This article from The Conversation shows the importance of this. It is just a shame that they do not mention being old and active, as in being 80!
ooOOoo
Some pro athletes keep getting better as they age − neuroscience can explain how they stay sharp
Recovery and mental resilience support the development of neuroplasticity, which helps athletes like Allyson Felix stay sharp. AP Photo/Charlie Riedel
In a world where sports are dominated by youth and speed, some athletes in their late 30s and even 40s are not just keeping up – they are thriving.
Novak Djokovic is still outlasting opponents nearly half his age on tennis’s biggest stages. LeBron James continues to dictate the pace of NBA games, defending centers and orchestrating plays like a point guard. Allyson Felix won her 11th Olympic medal in track and field at age 35. And Tom Brady won a Super Bowl at 43, long after most NFL quarterbacks retire.
The sustained excellence of these athletes is not just due to talent or grit – it’s biology in action. Staying at the top of their game reflects a trainable convergence of brain, body and mindset. I’m a performance scientist and a physical therapist who has spent over two decades studying how athletes train, taper, recover and stay sharp. These insights aren’t just for high-level athletes – they hold true for anyone navigating big life changes or working to stay healthy.
Increasingly, research shows that the systems that support high performance – from motor control to stress regulation to recovery – are not fixed traits but trainable capacities. In a world of accelerating change and disruption, the ability to adapt may be the most important skill of all. So, what makes this adaptability possible – biologically, cognitively and emotionally?
The amygdala and prefrontal cortex
Neuroscience research shows that with repeated exposure to high-stakes situations, the brain begins to adapt. The prefrontal cortex – the region most responsible for planning, focus and decision-making – becomes more efficient in managing attention and making decisions, even under pressure.
During stressful situations, such as facing match point in a Grand Slam final, this area of the brain can help an athlete stay composed and make smart choices – but only if it’s well trained.
In contrast, the amygdala, our brain’s threat detector, can hijack performance by triggering panic, freezing motor responses or fueling reckless decisions. With repeated exposure to high-stakes moments, elite athletes gradually reshape this brain circuit.
They learn to tune down amygdala reactivity and keep the prefrontal cortex online, even when the pressure spikes. This refined brain circuitry enables experienced performers to maintain their emotional control.
Creating a brain-body loop
Brain-derived neurotrophic factor, or BDNF, is a molecule that supports adapting to changes quickly. Think of it as fertilizer for the brain. It enhances neuroplasticity: the brain’s ability to rewire itself through experience and repetition. This rewiring helps athletes build and reinforce the patterns of connections between brain cells to control their emotion, manage their attention and move with precision.
In high-pressure moments like these, higher BDNF availability likely allows an athlete to regulate their emotions and recalibrate their motor response, helping them return to peak performance faster than their opponent.
Rewiring your brain
In essence, athletes who repeatedly train and compete in pressure-filled environments are rewiring their brain to respond more effectively to those demands. This rewiring, from repeated exposures, helps boost BDNF levels and in turn keeps the prefrontal cortex sharp and dials down the amygdala’s tendency to overreact.
This kind of biological tuning is what scientists call cognitive reserve and allostasis – the process the body uses to make changes in response to stress or environmental demands to remain stable. It helps the brain and body be flexible, not fragile.
Importantly, this adaptation isn’t exclusive to elite athletes. Studies on adults of all ages show that regular physical activity – particularly exercises that challenge both body and mind – can raise BDNF levels, improve the brain’s ability to adapt and respond to new challenges, and reduce stress reactivity.
Programs that combine aerobic movement with coordination tasks, such as dancing, complex drills or even fast-paced walking while problem-solving, have been shown to preserve skills such as focus, planning, impulse control and emotional regulation over time.
After an intense training session or a match, you will often see athletes hopping on a bike or spending some time in the pool. These low-impact, gentle movements, known as active recovery, help tone down the nervous system gradually.
Serbian tennis player Novak Djokovic practices meditation, which strengthens the mental pathways that help with stress regulation. AP Photo/Kin Cheung
Over time, this convergence creates a trainable loop between the brain and body that is better equipped to adapt, recover and perform.
Lessons beyond sport
While the spotlight may shine on sporting arenas, you don’t need to be a pro athlete to train these same skills.
The ability to perform under pressure is a result of continuing adaptation. Whether you’re navigating a career pivot, caring for family members, or simply striving to stay mentally sharp as the world changes, the principles are the same: Expose yourself to challenges, regulate stress and recover deliberately.
While speed, agility and power may decline with age, some sport-specific skills such as anticipation, decision-making and strategic awareness actually improve. Athletes with years of experience develop faster mental models of how a play will unfold, which allows them to make better and faster choices with minimal effort. This efficiency, built up through years of reinforcing neural circuits, doesn’t immediately vanish with age. This is one reason experienced athletes often excel even when they are well past their physical prime.
Physical activity, especially dynamic and coordinated movement, boosts the brain’s capacity to adapt. So does learning new skills, practicing mindfulness and even rehearsing performance under pressure. In daily life, this might be a surgeon practicing a critical procedure in simulation, a teacher preparing for a tricky parent meeting, or a speaker practicing a high-stakes presentation to stay calm and composed when it counts. These aren’t elite rituals – they’re accessible strategies for building resilience, motor efficiency and emotional control.
Humans are built to adapt – with the right strategies, you can sustain excellence at any stage of life.
Earlier this month, a homeowner called Tidewater Wildlife Rescue with an urgent request. A common garter snake was hopelessly tangled in a piece of netting in their yard. Could someone come help?
Rescue volunteer Serenity Reiner quickly headed to the scene.
Reiner and her rescue partner, Daniel, used scissors to cut away big pieces of the net. Then, Daniel gently held the snake as Reiner snipped away netting closer to the animal’s body.
“We were very focused,” Reiner told The Dodo. “We wanted to be as fast as possible to limit [her] stress.”
The rescuers were almost finished when they noticed something amazing — the snake was giving birth in their hands.
Reiner hastily removed the remaining netting as the mama snake birthed two babies. Then, she took the snake and her little ones to a wooded area behind the house and released them back into the wild.
Surprisingly, despite their small size, baby garter snakes don’t need to stay with their mom for very long. In fact, as the rescue notes, these young snakes are completely independent from the moment they’re born and can immediately find food on their own.
According to the U.S. National Park Service, garter snakes typically give birth to 15-40 babies at a time. Reiner suspects this mama welcomed many more little ones into the world once she was safe in the forest.
The rescuer encouraged the homeowners to use animal-safe netting next time. She’s grateful that, in this case, everything turned out OK.
“I felt so much joy knowing that she was able to go back to her normal life unharmed,” Reiner said.
It is what we share with animals, but it is not as straightforward as one might think!
The range of logical thinking, even in humans, is enormous. And when we watch animals, especially mammals, it is clear that they are operating in a logical manner. By ‘operating’ I am referring to their thought processes.
So a recent article in The Conversation jumped out at me. Here it is:
ooOOoo
Humans and animals can both think logically − but testing what kind of logic they’re using is tricky
Can a monkey, a pigeon or a fish reason like a person? It’s a question scientists have been testing in increasingly creative ways – and what we’ve found so far paints a more complicated picture than you’d think.
Imagine you’re filling out a March Madness bracket. You hear that Team A beat Team B, and Team B beat Team C – so you assume Team A is probably better than Team C. That’s a kind of logical reasoning known as transitive inference. It’s so automatic that you barely notice you’re doing it.
It turns out humans are not the only ones who can make these kinds of mental leaps. In labs around the world, researchers have tested many animals, from primates to birds to insects, on tasks designed to probe transitive inference, and most pass with flying colors.
As a scientist focused on animal learning and behavior, I work with pigeons to understand how they make sense of relationships, patterns and rules. In other words, I study the minds of animals that will never fill out a March Madness bracket – but might still be able to guess the winner.
Logic test without words
The basic idea is simple: If an animal learns that A is better than B, and B is better than C, can it figure out that A is better than C – even though it’s never seen A and C together?
In the lab, researchers test this by giving animals randomly paired images, one pair at a time, and rewarding them with food for picking the correct one. For example, animals learn that a photo of hands (A) is correct when paired with a classroom (B), a classroom (B) is correct when paired with bushes (C), bushes (C) are correct when paired with a highway (D), and a highway (D) is correct when paired with a sunset (E). We don’t know whether they “understand” what’s in the picture, and it is not particularly important for the experiment that they do.
In a transitive inference task, subjects learn a series of rewarded pairs – such as A+ vs. B–, B+ vs. C– – and are later tested on novel pairings, like B vs. D, to see whether they infer an overall ranking. Olga Lazareva, CC BY-ND
One possible explanation is that the animals that learn all the tasks create a mental ranking of these images: A > B > C > D > E. We test this idea by giving them new pairs they’ve never seen before, such as classroom (B) vs. highway (D). If they consistently pick the higher-ranked item, they’ve inferred the underlying order.
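To make the structure of that idea concrete, here is a minimal Python sketch. It is not taken from any study; the letter labels and the simple rank-propagation rule are illustrative assumptions about what an implied ordering could look like, not a model of what the animals actually compute.

```python
# A toy sketch of the transitive inference test described above.
# The letter labels and the rank-propagation rule are illustrative
# assumptions, not the procedure of any particular study.

trained_pairs = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")]  # (winner, loser)

def implied_ranking(pairs):
    """Order items so that every trained winner outranks its loser."""
    items = {x for pair in pairs for x in pair}
    beats = {x: set() for x in items}
    for winner, loser in pairs:
        beats[winner].add(loser)
    changed = True
    while changed:  # propagate: if A beats B and B beats C, count A as beating C
        changed = False
        for x in items:
            for y in set(beats[x]):
                extra = beats[y] - beats[x]
                if extra:
                    beats[x] |= extra
                    changed = True
    return sorted(items, key=lambda x: len(beats[x]), reverse=True)

ranking = implied_ranking(trained_pairs)      # ['A', 'B', 'C', 'D', 'E']
novel_pair = ("B", "D")                       # never shown together in training
choice = min(novel_pair, key=ranking.index)   # pick the higher-ranked item
print(ranking, choice)                        # ['A', 'B', 'C', 'D', 'E'] B
```

The point of the novel B-versus-D trial is exactly this: the pair was never trained, so a consistent choice of B suggests something like the derived ordering above.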
What’s fascinating is how many species succeed at this task. Monkeys, rats, pigeons – even fish and wasps – have all demonstrated transitive inference in one form or another.
The twist: Not all tasks are easy
But not all types of reasoning come so easily. There’s another kind of rule called transitivity that is different from transitive inference, despite the similar name. Instead of asking which picture is better, transitivity is about equivalence.
In this task, animals are shown a set of three pictures and asked which one goes with the center image. For example, if white triangle (A1) is shown, choosing red square (B1) earns a reward, while choosing blue square (B2) does not. Later, when red square (B1) is shown, choosing white cross (C1) earns a reward while choosing white circle (C2) does not. Now comes the test: white triangle (A1) is shown with white cross (C1) and white circle (C2) as choices. If they pick white cross (C1), then they’ve demonstrated transitivity.
In a transitivity task, subjects learn matching rules across overlapping sets – such as A1 matches B1, B1 matches C1 – and are tested on new combinations, such as A1 with C1 or C2, to assess whether they infer the relationship between A1 and C1. Olga Lazareva, CC BY-ND
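The transitivity task can be sketched in the same spirit. Again, this is only an illustration using the article’s stimulus labels; the chaining rule below is an assumption about what “passing” the test would amount to, not a claim about how any animal thinks.

```python
# A toy sketch of the transitivity (equivalence) test described above.
# The stimulus labels follow the article's example; the chaining rule is an
# illustrative assumption about what "passing" the test would amount to.

learned_matches = {"A1": "B1", "B1": "C1"}  # sample -> comparison that earned a reward

def derived_choice(sample, options, matches):
    """Chain learned matches from the sample until one of the options is reached."""
    nxt = matches.get(sample)
    while nxt is not None:
        if nxt in options:       # e.g. A1 -> B1 -> C1
            return nxt
        nxt = matches.get(nxt)
    return None                  # no derived link: the options look unrelated

# Test trial: sample A1 (white triangle), options C1 (white cross) vs C2 (white circle).
print(derived_choice("A1", {"C1", "C2"}, learned_matches))  # C1
```

A species that fails the test behaves as if this function returned None: the two trained relations are never chained into the derived A1-to-C1 link.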
The change may seem small, but species that succeed in those first transitive inference tasks often stumble in this task. In fact, they tend to treat the white triangle and the white cross as completely separate things, despite their common relationship with the red square. In my recently published review of research using the two tasks, I concluded that more evidence is needed to determine whether these tests tap into the same cognitive ability.
Small differences, big consequences
Why does the difference between transitive inference and transitivity matter? At first glance, they may seem like two versions of the same ability – logical reasoning. But when animals succeed at one and struggle with the other, it raises an important question: Are these tasks measuring the same kind of thinking?
The apparent difference between the two tasks isn’t just a quirk of animal behavior. Psychology researchers apply these tasks to humans in order to draw conclusions about how people reason.
For example, say you’re trying to pick a new almond milk. You know that Brand A is creamier than Brand B, and your friend told you that Brand C is even waterier than Brand B. Based on that, because you like a thicker milk, you might assume Brand A is better than Brand C, an example of transitive inference.
But now imagine the store labels both Brand A and Brand C as “barista blends.” Even without tasting them, you might treat them as functionally equivalent, because they belong to the same category. That’s more like transitivity, where items are grouped based on shared relationships. In this case, “barista blend” signals the brands share similar quality.
Researchers often treat these types of reasoning as measuring the same ability. But if they rely on different mental processes, they might not be interchangeable. In other words, the way scientists ask their questions may shape the answer – and that has big implications for how they interpret success in animals and in people.
This difference could affect how researchers interpret decision-making not only in the lab, but also in everyday choices and in clinical settings. Tasks like these are sometimes used in research on autism, brain injury or age-related cognitive decline.
If two tasks look similar on the surface, then choosing the wrong one might lead to inaccurate conclusions about someone’s cognitive abilities. That’s why ongoing work in my lab is exploring whether the same distinction between these logical processes holds true for people.
Just like a March Madness bracket doesn’t always predict the winner, a reasoning task doesn’t always show how someone got to the right answer. That’s the puzzle researchers are still working on – figuring out whether different tasks really tap into the same kind of thinking or just look like they do. It’s what keeps scientists like me in the lab, asking questions, running experiments and trying to understand what it really means to reason – no matter who’s doing the thinking.
As many of you know, I flew during the years when I was based in England. I flew as a hobby. Very quickly I realised that looking at the ground from a few thousand feet up gave one a unique view of the landscape.
ooOOoo
50 Years of Flying for Heritage
Damian Grady
Damian Grady is the Historic England Aerial Reconnaissance Manager. He joined the Royal Commission on the Historical Monuments of England in 1990 to map archaeology from aerial photographs and from 1998 became responsible for managing the aerial reconnaissance programme.
Published 8 February 2017
On Wednesday 8 February 2017 Historic England celebrated 50 years of our flying programme. Since those early days in 1967 much has changed, but reconnaissance, the act of flying to record and monitor sites and landscapes of archaeological interest, is still at the heart of the work carried out by our research teams.
This article was originally written in 2017 to mark the 50th anniversary but we have kept it live as it continues to be read and enjoyed.
On a cold February afternoon in 1967 an Auster four-seater light aircraft took off from Fairoaks airfield on the outskirts of south west London. This was the first test flight of the Royal Commission on the Historical Monuments of England (RCHME), one of the predecessors of Historic England. On board were the pilot, the photographer Ron Parsons, and John Hampton. John was responsible for setting up the RCHME Air Photo Library in 1965 to implement the commission’s resolution (1964) to “use air photography to build up rapidly a record of field monuments throughout England.”
In the beginning this involved acquiring aerial photographs to build up a library of images of archaeological sites. By 1967 it was felt that RCHME should take its own oblique aerial photographs in support of its field survey work. Oblique photographs are taken at an “oblique” angle to the ground, as opposed to directly from above. They are usually taken with a hand held camera through the open window of a plane. The main target at this time was cropmarks; the walls and ditches of buried archaeology can affect the rate at which plants grow over them, causing differences in colour and height. These cropmarks are not always visible on the ground, so the best way to look for them is from the air.
This first flight was very much an experiment. Flying from Fairoaks to Basingstoke, Tidbury Ring and back, they photographed prehistoric sites on the chalk soils of Hampshire. Many of the sites had been ploughed recently and were seen as colour differences in the soil and germinating crops. During the 1.5 hour flight John Hampton learned a number of valuable lessons, such as the best height to fly, the best angle to use and to make sure there was plenty of film! The lessons learned from this and subsequent flights formed the foundation of 50 years of flying by the aerial reconnaissance team in RCHME, English Heritage and now Historic England.
Growth of the archive
At about the same time as this first flight the collection of aerial photographs grew with the arrival of the Crawford Collection from the Ordnance Survey. Later, in the 1970s, the Air Photo library acquired many more aerial photographs from archaeologists and private fliers keen to discover archaeological sites. One such flier was Derrick Riley who took this photograph of an Iron Age/Roman field system in Nottinghamshire.
The oblique photographs acquired and taken by RCHME were ordered by kilometre square and stored in distinctive red boxes. Then in the 1980s there was a rapid growth with the acquisition of the Department of the Environment collection of vertical aerial photos. This collection included all prints taken of England by the RAF since the start of WWII, such as the image below. This shows the airfield at Biggin Hill, near London, with evidence of the many bomb craters sustained during German air raids. Further expansion came in the 1990s with the acquisition of the Ordnance Survey archive and in 2007 with the Aerofilms collection.
Photo mosaic of RAF images of Biggin Hill airfield taken on 27 June 1941, showing a camouflaged runway and filled in bomb craters (RAF_241_72 and 73). Source: Historic England Archive (RAF collection).
Mapping from aerial photographs
In the 1970s John Hampton and his team looked at various ways of interpreting and mapping from the aerial photographs taken by RCHME and acquired from local fliers. Along with others, they experimented with a variety of mapping techniques from sketch plotting to photogrammetry. An important step in the development of this process was the project to map the archaeology around the Iron Age hillfort of Danebury. This approach was scaled up by RCHME to map the prehistoric archaeology visible as cropmarks on the Yorkshire Wolds. This project used computer aided rectification of oblique aerial photographs, a process that was being developed.
In the late 1980s, as the archive acquired more aerial photographs, RCHME developed a systematic methodology to interpret, map and record all archaeological features, not just cropmarks, visible on aerial photographs. Pilot projects in Kent, Hertfordshire and the Thames Valley were set up to develop the methodology further.
In the 1990s the range of subjects photographed increased as RCHME used aerial photographs to record the large building complexes they were surveying that were undergoing major changes at the time. These included textile mills, hospitals, prisons and Cold War military sites. For some of these sites such as the textile mill below in Leeds, these photographs are the last record we have as development pressures have since led to their demolition.
The 1990s also saw new discoveries across the country, especially in the hot summers of 1995 and 1996. Below is just one such site, a “banjo” enclosure, so called because of its shape: a circular enclosure with a long funnel neck leading into it. See other examples of new sites discovered in the 1990s and at other times in the gallery below.
In the 1990s the political changes and opening up of eastern Europe led to archaeologists visiting the survey and archive teams to learn from our experience of flying, mapping and archiving aerial photographs. This led to us joining forces with other aerial archaeologists from western Europe to set up training courses in Hungary and Poland. This in turn led to further work exchanges and training courses across Europe.
The late 1990s saw RCHME and EH working together to supply aerial photographs to help Field Monument Wardens monitor the condition of scheduled monuments. Following the merger of the two organisations in 1999 this became an important aspect of the flying programme. In the image below the World War Two anti-aircraft battery might appear to be safe since it has been removed from the cultivation that surrounds it. However, it is still at risk from being overgrown by scrub.
The new century saw important technological developments taken up by the flying and mapping teams. The reconnaissance teams began experimenting with digital cameras in 2003 and the archive developed standards for the long-term preservation of digital data. The last negative film was shot in the air in 2006. The archive now holds over 200,000 digital aerial photographs taken by the reconnaissance teams.
In 2001 English Heritage used lidar, a system of airborne laser scanning, for a review of mapping of the Stonehenge World Heritage Site. Since then we have developed our use of the data and now use it as a regular source for any mapping and interpretation projects.
The discovery of new archaeological sites is still the most exciting part of the flying programme, but since the first flight in 1967 the scale, range and scope has changed. New sensors and camera technology are allowing us to look at new ways of taking aerial photographs. New software and access to other aerial data such as lidar allows us to see, map and understand the historic landscape in ways that could only have been dreamed about in 1967.
Since our systematic analysis of new and archive aerial photographs began in the late 1990s we have discovered over 122,000 new archaeological sites like the one above.
ooOOoo
I very much hope that republishing this article is in order. An email to the Press Office of Historic England requesting permission was sent last Sunday afternoon.
This morning, 30th July, I received the following email:
To use the aerial images you have seen on our webpage ’50 Years Flying’ at https://historicengland.org.uk/whats-new/research/50-years-flying/, please make a note of the image reference numbers and then visit our Aerial Photography Explorer website at https://historicengland.org.uk/images-books/archive/collections/aerial-photos/. If you then navigate to the oblique image search screen and fill in the reference number under the ‘file contains’ tab you will be taken to that image. By then hovering over that image a share/embed option will appear that will allow you to use the images free of charge on non-commercial websites and some social media sites such as X and Facebook. Our reference is 150356.
Last Sunday morning I listened to a BBC Radio 4 programme The Dark Enlightenment. Here is a summary from the BBC website:
A radical political philosophy founded by a software engineer called Curtis Yarvin is gaining in influence, and said to be shaping Donald Trump’s second term in the White House.
It is on BBC Sounds; here is the link.
The plan was simple. It started by retiring all government employees, offering them incentives to leave and never return. To avoid anarchy and keep authority, the police and military would be retained.
Government funds would be seized and the money redirected to more worthwhile pursuits. Court orders pushing back against these measures as “unconstitutional” should be summarily ignored. The press should be massaged and censored as necessary. Finally, universities, scientific institutions, and NGOs should also be snapped off, their funding terminated.
These moves resemble many made (or attempted) in the first 100 days of the second Trump administration. But they were all laid out in 2012 by a single person: Curtis Yarvin.
In the past five years, Yarvin’s reactionary blueprints for governance have found powerful backers in both Silicon Valley and Washington circles.
His ideas have been taken up and repeated in various ways by Peter Thiel (PayPal), Elon Musk (X, Tesla), Alexander Karp (Palantir) and other founders, CEOs and thought-leaders within the broader tech industry. He was a guest at Trump’s Coronation Ball in January.
Yarvin’s current newsletter, Gray Mirror, now boasts 57,000 subscribers. “Curtis Yarvin’s Ideas Were Fringe,” cautioned a recent article, “Now They’re Coursing Through Trump’s Washington.”
JD Vance has praised Yarvin by name and echoed his ideas, calling for a ‘de-wokification programme’. Bonnie Cash/Pool/AAP
Rebooting the state
Yarvin, a 51-year old computer engineer, has been publishing his thoughts on politics for close to 20 years. His original blog, launched in 2007, introduced his potent blend of “the modern engineering mentality, and the great historical legacy of antique, classical and Victorian pre-democratic thought”. Last week, The Washington Post called it “required reading for the extremely online right”.
Democracy was dead and doomed from the beginning, Yarvin argued in his blog, in quippy, Reddit-style prose. Governance should look to other mechanisms (tech) and modes (monarchism) for inspiration.
The state needs a “hard reboot,” asserted Yarvin. “Democratic elections are entirely superfluous to the mechanism of government” he argued. “A vote for democratic or republican matters a little bit,” he admitted, but “basically if the whole electoral system disappeared, Washington would go on running in exactly the same ways”.
Curtis Yarvin. Wikipedia
For Yarvin, then, it is not just the government that must change – a superficial swap of parties and politicians – but something far more fundamental: the form of government. Democracy was beta tested and failed to deliver. The political operating system must be ripped out and replaced.
While elements (like the term “red pill”) travelled far beyond its pages, Yarvin’s ideas remained on the fringes until recently, when their growing popularity pushed him into the limelight. Last week he hit the headlines due to his debate with political theorist Danielle Allen, a democracy advocate, at Harvard, a place that has become a “symbol of resistance to Trump”.
Allen, who debated Yarvin to provide students with “help thinking about intellectual material”, wrote after the debate that he correctly diagnoses a problem, but not its causes or solutions:
He is right that our political institutions are failing. He is also right that their members have failed to see the depth of our governance problems and their own contributions to them through technocracy and political correctness. […] But Mr. Yarvin leads them astray with his vision of absolute monarchy and racial cleansing.
A technological republic
For Yarvin and others like him, democracy’s fatal flaw is the demos (or, people) itself. Trusting the agency and ability of citizens to govern through representation is naive, Yarvin believes. Alexander Karp, CEO of Palantir, a firm that provides military and intelligence agencies with big data “intelligence”, agrees.
“Why must we always defer to the wisdom of the crowd when it comes to allocating scarce capital in a market economy?” Karp asked in his recent bestseller, The Technological Republic.
For Yarvin, Karp, Thiel and the other elites that embrace these ideas, the people are idiots. A favourite quote (likely apocryphal) is from Churchill, stating the best argument against democracy is a five-minute conversation with the average voter.
If a legacy republic was one by the people and for the people, Karp argues a technological republic will “require the rebuilding of an ownership society, a founder culture that came from tech but has the potential to reshape government”.
In this vision, the state shapeshifts into something sleeker, more successful, more like a startup: the corporation. “A government is just a corporation that owns a country,” Yarvin stresses. Musk has echoed this line: “the government is simply the largest corporation”.
But if this is true, it is a pathetic one, according to its hyper-capitalist detractors: bloated with waste, saddled with debt and slowed by regulation. The state is a dinosaur which makes incremental change and must tread with caution, bending to the needs of its constituents. Founders dictate their commands and impose their will.
Dark enlightenment
“Once the universe of democratic corruption is converted into a (freely transferable) shareholding in gov-corp the owners of the state can initiate rational corporate governance, beginning with the appointment of a CEO,” explains philosopher Nick Land.
“As with any business, the interests of the state are now precisely formalized as the maximization of long-term shareholder value.” In this model, the president becomes the CEO king; the citizen becomes the customer or user.
Land, more than any other, has provided the philosophical cachet around this movement, taking Yarvin’s quippy but fuzzy prose and formalising it into the political and philosophical formation known as neoreaction or the “Dark Enlightenment”, with a sprawling 2014 essay that moves from the death of the west to racial terror, the limits of freedom and the next stage of human evolution.
Nick Land. GoodReads
Land, variously regarded as a cybernetic prophet or scientific racist, has long held anti-humanist and anti-democratic views. “Voice”, or representation – the key tenet of liberal democracy – has been tried and failed, Land argues. The only viable alternative is “exit”: flight from failed governance altogether, into a post-political and post-human future.
To simplify drastically: democracy’s naive belief in equality for all – propped up and policed by the array of humanitarian organisations, government agencies and woke culture warriors that Yarvin sneeringly dubs “The Cathedral” – has held capitalism back from its true potential.
Technological fascism
For Land, Yarvin and others, optimal rule would be both hypercapitalist and hyperconservative: a hybrid political order I’ve begun to research and conceptualise as technological fascism.
Technological fascism gazes to the future and past for inspiration. It couples, in the words of writer Jacob Siegel:
the classic anti-modern, anti-democratic worldview of 18th-century reactionaries to a post-libertarian ethos that embraced technological capitalism as the proper means for administering society.
In this vision, the best form of governance marries reaction and information, Machiavelli and machine learning, aristocracy and artificial intelligence, authoritarianism and technosolutionism.
To revive the glorious traditions of the past, its champions believe, we must leverage the bleeding-edge innovations of tomorrow.
Governing like a monarch
This culture is already infiltrating Washington. Trump is governing like a monarch, making unilateral decisions via hundreds of executive orders, bulldozing through opposition and legislation.
Musk and his DOGE minions stress they need to “delete entire agencies”, commandeering offices and allegedly stealing data under the pretext of eliminating “waste”.
A recent study of over 500 political scientists found “the vast majority think the US is moving swiftly away from liberal democracy toward some form of authoritarianism”.
In the vision laid out by Yarvin – and taken up more and more by a growing political vanguard – government is either a political inconvenience or a technical problem. Increasingly, the authoritarian imperative to impose absolute rule and the Silicon Valley mantra of “moving fast and breaking stuff” dovetail into a disturbing single directive.
The scientists who precisely measure the position of Earth are in a bit of trouble. Their measurements are essential for the satellites we use for navigation, communication and Earth observation every day.
But you might be surprised to learn that making these measurements – using the science of geodesy – depends on tracking the locations of black holes in distant galaxies.
The problem is, the scientists need to use specific frequency lanes on the radio spectrum highway to track those black holes.
Satellites and the services they provide have become essential for modern life. From precision navigation in our pockets to measuring climate change, running global supply chains and making power grids and online banking possible, our civilisation cannot function without its orbiting companions.
To use satellites, we need to know exactly where they are at any given time. Precise satellite positioning relies on the so-called “global geodesy supply chain”.
This supply chain starts by establishing a reliable reference frame as a basis for all other measurements. Because satellites are constantly moving around Earth, Earth is constantly moving around the Sun, and the Sun is constantly moving through the galaxy, this reference frame needs to be carefully calibrated via some relatively fixed external objects.
These black holes are the most distant and stable objects we know. Using a technique called very long baseline interferometry, we can use a network of radio telescopes to lock onto the black hole signals and disentangle Earth’s own rotation and wobble in space from the satellites’ movement.
Different lanes on the radio highway
We use radio telescopes because we want to detect the radio waves coming from the black holes. Radio waves pass cleanly through the atmosphere and we can receive them during day and night and in all weather conditions.
Radio waves are also used for communication on Earth – including things such as wifi and mobile phones. The use of different radio frequencies – different lanes on the radio highway – is closely regulated, and a few narrow lanes are reserved for radio astronomy.
However, in previous decades the radio highway had relatively little traffic. Scientists commonly strayed from the radio astronomy lanes to receive the black hole signals.
To reach the very high precision needed for modern technology, geodesy today relies on more than just the lanes exclusively reserved for astronomy.
Radio traffic on the rise
In recent years, human-made electromagnetic pollution has vastly increased. When wifi and mobile phone services emerged, scientists reacted by moving to higher frequencies.
However, they are running out of lanes. Six generations of mobile phone services (each occupying a new lane) are crowding the spectrum, not to mention internet connections directly sent by a fleet of thousands of satellites.
Today, the multitude of signals are often too strong for geodetic observatories to see through them to the very weak signals emitted by black holes. This puts many satellite services at risk.
What can be done?
To keep working into the future – to maintain the services on which we all depend – geodesy needs some more lanes on the radio highway. When the spectrum is divided up via international treaties at world radio conferences, geodesists need a seat at the table.
Other potential fixes might include radio quiet zones around our essential radio telescopes. Work is also underway with satellite providers to avoid pointing radio emissions directly at radio telescopes.
Any solution has to be global. For our geodetic measurements, we link radio telescopes together from all over the world, allowing us to mimic a telescope the size of Earth. The radio spectrum is primarily regulated by each nation individually, making this a huge challenge.
But perhaps the first step is increasing awareness. If we want satellite navigation to work, our supermarkets to be stocked and our online money transfers to arrive safely, we need to make sure we have a clear view of those black holes in distant galaxies – and that means clearing up the radio highway.
To all people who live outside the US, and quite a few who live in it as well!
The Federal Reserve is the central banking system of the USA. I am going to republish most of the article that appears on Wikipedia. It is yet another example of how the United States set itself up by taking the best from all around the world.
(And apologies for not posting a Picture Parade yesterday.)
ooOOoo
The Federal Reserve System (often shortened to the Federal Reserve, or simply the Fed) is the central banking system of the United States. It was created on December 23, 1913, with the enactment of the Federal Reserve Act, after a series of financial panics (particularly the panic of 1907) led to the desire for central control of the monetary system in order to alleviate financial crises.[list 1] Although an instrument of the U.S. government, the Federal Reserve System considers itself “an independent central bank because its monetary policy decisions do not have to be approved by the president or by anyone else in the executive or legislative branches of government, it does not receive funding appropriated by Congress, and the terms of the members of the board of governors span multiple presidential and congressional terms.”[11] Over the years, events such as the Great Depression in the 1930s and the Great Recession during the 2000s have led to the expansion of the roles and responsibilities of the Federal Reserve System.[6][12]
Congress established three key objectives for monetary policy in the Federal Reserve Act: maximizing employment, stabilizing prices, and moderating long-term interest rates.[13] The first two objectives are sometimes referred to as the Federal Reserve’s dual mandate.[14] Its duties have expanded over the years, and include supervising and regulating banks, maintaining the stability of the financial system, and providing financial services to depository institutions, the U.S. government, and foreign official institutions.[15] The Fed also conducts research into the economy and provides numerous publications, such as the Beige Book and the FRED database.[16]
The Federal Reserve System is composed of several layers. It is governed by the presidentially appointed board of governors or Federal Reserve Board (FRB). Twelve regional Federal Reserve Banks, located in cities throughout the nation, regulate and oversee privately owned commercial banks.[17] Nationally chartered commercial banks are required to hold stock in, and can elect some board members of, the Federal Reserve Bank of their region.
The Federal Open Market Committee (FOMC) sets monetary policy by adjusting the target for the federal funds rate, which generally influences market interest rates and, in turn, US economic activity via the monetary transmission mechanism. The FOMC consists of all seven members of the board of governors and the twelve regional Federal Reserve Bank presidents, though only five bank presidents vote at a time: the president of the New York Fed and four others who rotate through one-year voting terms. There are also various advisory councils.[list 2] It has a structure unique among central banks, and is also unusual in that the United States Department of the Treasury, an entity outside of the central bank, prints the currency used.[23]
The federal government sets the salaries of the board’s seven governors, and it receives all the system’s annual profits after dividends on member banks’ capital investments are paid, and an account surplus is maintained. In 2015, the Federal Reserve earned a net income of $100.2 billion and transferred $97.7 billion to the U.S. Treasury,[24] and 2020 earnings were approximately $88.6 billion with remittances to the U.S. Treasury of $86.9 billion.[25] The Federal Reserve has been criticized for its approach to managing inflation, perceived lack of transparency, and its role in economic downturns.[26][27][28]
Purpose
The primary declared motivation for creating the Federal Reserve System was to address banking panics.[6] Other purposes are stated in the Federal Reserve Act, such as “to furnish an elastic currency, to afford means of rediscounting commercial paper, to establish a more effective supervision of banking in the United States, and for other purposes”.[29] Before the founding of the Federal Reserve System, the United States underwent several financial crises. A particularly severe crisis in 1907 led Congress to enact the Federal Reserve Act in 1913. Today the Federal Reserve System has responsibilities in addition to stabilizing the financial system.[30]
Current functions of the Federal Reserve System include:[15][30]
To conduct the nation’s monetary policy in pursuit of maximum employment, stable prices (interpreted as an inflation rate of 2 percent per year on average[31]) and moderate long-term interest rates
To maintain the stability of the financial system and contain systemic risk in financial markets
To provide financial services to depository institutions, the U.S. government, and foreign official institutions, including playing a major role in operating the nation’s payments system
To facilitate the exchange of payments among regions
Banking institutions in the United States are required to hold reserves—amounts of currency and deposits in other banks—equal to only a fraction of the amount of the bank’s deposit liabilities owed to customers. This practice is called fractional-reserve banking. As a result, banks usually invest the majority of the funds received from depositors. On rare occasions, too many of the bank’s customers will withdraw their savings and the bank will need help from another institution to continue operating; this is called a bank run. Bank runs can lead to a multitude of social and economic problems. The Federal Reserve System was designed as an attempt to prevent or minimize the occurrence of bank runs, and possibly act as a lender of last resort when a bank run does occur. Many economists, following Nobel laureate Milton Friedman, believe that the Federal Reserve inappropriately refused to lend money to small banks during the bank runs of 1929; Friedman argued that this contributed to the Great Depression.[32]
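As a purely illustrative aside (not part of the Wikipedia article), the arithmetic of fractional reserves is simple; the 10 percent ratio and the deposit figure below are hypothetical numbers chosen for the example, not actual Federal Reserve requirements.

```python
# Illustrative fractional-reserve arithmetic only; the reserve ratio and the
# deposit figure are hypothetical, not actual Federal Reserve requirements.

deposits = 100_000_000   # customer deposit liabilities, in dollars
reserve_ratio = 0.10     # hypothetical fraction held back as reserves

reserves = deposits * reserve_ratio
available_to_invest = deposits - reserves

print(f"Reserves held:     ${reserves:,.0f}")             # $10,000,000
print(f"Available to lend: ${available_to_invest:,.0f}")  # $90,000,000
```

On this hypothetical balance sheet, a bank run is simply the moment when withdrawal requests threaten to exceed that $10 million cushion faster than the remaining $90 million can be turned back into cash.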
Check clearing system
Because some banks refused to clear checks from certain other banks during times of economic uncertainty, a check-clearing system was created in the Federal Reserve System. It is briefly described in The Federal Reserve System—Purposes and Functions as follows:[33]
By creating the Federal Reserve System, Congress intended to eliminate the severe financial crises that had periodically swept the nation, especially the sort of financial panic that occurred in 1907. During that episode, payments were disrupted throughout the country because many banks and clearinghouses refused to clear checks drawn on certain other banks, a practice that contributed to the failure of otherwise solvent banks. To address these problems, Congress gave the Federal Reserve System the authority to establish a nationwide check-clearing system. The System, then, was to provide not only an elastic currency—that is, a currency that would expand or shrink in amount as economic conditions warranted—but also an efficient and equitable check-collection system.
Lender of last resort
In the United States, the Federal Reserve serves as the lender of last resort to those institutions that cannot obtain credit elsewhere and the collapse of which would have serious implications for the economy. It took over this role from the private sector “clearing houses” which operated during the Free Banking Era; whether public or private, the availability of liquidity was intended to prevent bank runs.[34]
Fluctuations
Through their discount window and credit operations, Reserve Banks provide liquidity to banks to meet short-term needs stemming from seasonal fluctuations in deposits or unexpected withdrawals. Longer-term liquidity may also be provided in exceptional circumstances. The rate the Fed charges banks for these loans is called the discount rate (officially the primary credit rate).
By making these loans, the Fed serves as a buffer against unexpected day-to-day fluctuations in reserve demand and supply. This contributes to the effective functioning of the banking system, alleviates pressure in the reserves market and reduces the extent of unexpected movements in the interest rates.[35] For example, on September 16, 2008, the Federal Reserve Board authorized an $85 billion loan to stave off the bankruptcy of international insurance giant American International Group (AIG).[36]
Obverse of a Federal Reserve $1 note issued in 2009
In its role as the central bank of the United States, the Fed serves as a banker’s bank and as the government’s bank. As the banker’s bank, it helps to assure the safety and efficiency of the payments system. As the government’s bank or fiscal agent, the Fed processes a variety of financial transactions involving trillions of dollars. Just as an individual might keep an account at a bank, the U.S. Treasury keeps a checking account with the Federal Reserve, through which incoming federal tax deposits and outgoing government payments are handled. As part of this service relationship, the Fed sells and redeems U.S. government securities such as savings bonds and Treasury bills, notes and bonds. It also issues the nation’s coin and paper currency. The U.S. Treasury, through its Bureau of the Mint and Bureau of Engraving and Printing, actually produces the nation’s cash supply and, in effect, sells the paper currency to the Federal Reserve Banks at manufacturing cost, and the coins at face value. The Federal Reserve Banks then distribute it to other financial institutions in various ways.[37] During the Fiscal Year 2020, the Bureau of Engraving and Printing delivered 57.95 billion notes at an average cost of 7.4 cents per note.[38][39]
Federal funds are the reserve balances (also called Federal Reserve Deposits) that private banks keep at their local Federal Reserve Bank.[40] These balances are the namesake reserves of the Federal Reserve System. The purpose of keeping funds at a Federal Reserve Bank is to have a mechanism for private banks to lend funds to one another. This market for funds plays an important role in the Federal Reserve System as it is the basis for its monetary policy work. Monetary policy is put into effect partly by influencing how much interest the private banks charge each other for the lending of these funds.
Federal reserve accounts contain federal reserve credit, which can be converted into federal reserve notes. Private banks maintain their bank reserves in federal reserve accounts.
Bank regulation
The Federal Reserve regulates private banks. The system was designed out of a compromise between the competing philosophies of privatization and government regulation. In 2006 Donald L. Kohn, vice chairman of the board of governors, summarized the history of this compromise:[41]
Agrarian and progressive interests, led by William Jennings Bryan, favored a central bank under public, rather than banker, control. However, the vast majority of the nation’s bankers, concerned about government intervention in the banking business, opposed a central bank structure directed by political appointees. The legislation that Congress ultimately adopted in 1913 reflected a hard-fought battle to balance these two competing views and created the hybrid public-private, centralized-decentralized structure that we have today.
The balance between private interests and government can also be seen in the structure of the system. Private banks elect members of the board of directors at their regional Federal Reserve Bank while the members of the board of governors are selected by the president of the United States and confirmed by the Senate.
The Federal Banking Agency Audit Act, enacted in 1978 as Public Law 95-320 and 31 U.S.C. section 714, establishes that the board of governors of the Federal Reserve System and the Federal Reserve banks may be audited by the Government Accountability Office (GAO).[42]
The GAO has authority to audit check-processing, currency storage and shipments, and some regulatory and bank examination functions–though there are restrictions to what the GAO may audit. Under the Federal Banking Agency Audit Act, 31 U.S.C. section 714(b), audits of the Federal Reserve Board and Federal Reserve banks do not include (1) transactions for or with a foreign central bank or government or non-private international financing organization; (2) deliberations, decisions, or actions on monetary policy matters; (3) transactions made under the direction of the Federal Open Market Committee; or (4) a part of a discussion or communication among or between members of the board of governors and officers and employees of the Federal Reserve System related to items (1), (2), or (3). See Federal Reserve System Audits: Restrictions on GAO’s Access (GAO/T-GGD-94-44), statement of Charles A. Bowsher.[43]
The board of governors in the Federal Reserve System has a number of supervisory and regulatory responsibilities in the U.S. banking system, but not complete responsibility. A general description of the types of regulation and supervision involved in the U.S. banking system is given by the Federal Reserve:[44]
The Board also plays a major role in the supervision and regulation of the U.S. banking system. It has supervisory responsibilities for state-chartered banks[45] that are members of the Federal Reserve System, bank holding companies (companies that control banks), the foreign activities of member banks, the U.S. activities of foreign banks, and Edge Act and “agreement corporations” (limited-purpose institutions that engage in a foreign banking business). The Board and, under delegated authority, the Federal Reserve Banks, supervise approximately 900 state member banks and 5,000 bank holding companies. Other federal agencies also serve as the primary federal supervisors of commercial banks; the Office of the Comptroller of the Currency supervises national banks, and the Federal Deposit Insurance Corporation supervises state banks that are not members of the Federal Reserve System.
Some regulations issued by the Board apply to the entire banking industry, whereas others apply only to member banks, that is, state banks that have chosen to join the Federal Reserve System and national banks, which by law must be members of the System. The Board also issues regulations to carry out major federal laws governing consumer credit protection, such as the Truth in Lending, Equal Credit Opportunity, and Home Mortgage Disclosure Acts. Many of these consumer protection regulations apply to various lenders outside the banking industry as well as to banks.
The Board has regular contact with members of the President’s Council of Economic Advisers and other key economic officials. The Chair also meets from time to time with the President of the United States and has regular meetings with the Secretary of the Treasury. The Chair has formal responsibilities in the international arena as well.
Regulatory and oversight responsibilities
The board of directors of each Federal Reserve Bank District also has regulatory and supervisory responsibilities. If the board of directors of a district bank has judged that a member bank is performing or behaving poorly, it will report this to the board of governors. This policy is described in law:
Each Federal reserve bank shall keep itself informed of the general character and amount of the loans and investments of its member banks with a view to ascertaining whether undue use is being made of bank credit for the speculative carrying of or trading in securities, real estate, or commodities, or for any other purpose inconsistent with the maintenance of sound credit conditions; and, in determining whether to grant or refuse advances, rediscounts, or other credit accommodations, the Federal reserve bank shall give consideration to such information. The chairman of the Federal reserve bank shall report to the Board of Governors of the Federal Reserve System any such undue use of bank credit by any member bank, together with his recommendation. Whenever, in the judgment of the Board of Governors of the Federal Reserve System, any member bank is making such undue use of bank credit, the Board may, in its discretion, after reasonable notice and an opportunity for a hearing, suspend such bank from the use of the credit facilities of the Federal Reserve System and may terminate such suspension or may renew it from time to time.[46]
National payments system
The Federal Reserve plays a role in the U.S. payments system. The twelve Federal Reserve Banks provide banking services to depository institutions and to the federal government. For depository institutions, they maintain accounts and provide various payment services, including collecting checks, electronically transferring funds, and distributing and receiving currency and coin. For the federal government, the Reserve Banks act as fiscal agents, paying Treasury checks; processing electronic payments; and issuing, transferring, and redeeming U.S. government securities.[47]
In the Depository Institutions Deregulation and Monetary Control Act of 1980, Congress reaffirmed that the Federal Reserve should promote an efficient nationwide payments system. The act subjects all depository institutions, not just member commercial banks, to reserve requirements and grants them equal access to Reserve Bank payment services. The Federal Reserve plays a role in the nation’s retail and wholesale payments systems by providing financial services to depository institutions. Retail payments are generally for relatively small-dollar amounts and often involve a depository institution’s retail clients—individuals and smaller businesses. The Reserve Banks’ retail services include distributing currency and coin, collecting checks, electronically transferring funds through FedACH (the Federal Reserve’s automated clearing house system), and beginning in 2023, facilitating instant payments using the FedNow service. By contrast, wholesale payments are generally for large-dollar amounts and often involve a depository institution’s large corporate customers or counterparties, including other financial institutions. The Reserve Banks’ wholesale services include electronically transferring funds through the Fedwire Funds Service and transferring securities issued by the U.S. government, its agencies, and certain other entities through the Fedwire Securities Service.
ooOOoo
There is more in that article, including sections on its Structure, the Board of Governors, the twelve Federal Reserve Banks, and other topics. So if you want to read them then, please, go here.
And I am bound to say that I have recently finished reading The FINANCIAL SYSTEM LIMIT by David Kauders. The sub-title of the book is The World’s Real Debt Burden. If you are at all interested in the subject then read the book.
It is seemingly a simple question, but in practice it is not.
Listening out for danger, or warning others of a danger, is a very ancient practice, for it is better to share a potential danger than not to. It was easy to look this up:
Modern sense of “risk, peril, exposure to injury, loss, pain, etc.” (from being in the control of someone or something else) evolved first in French and was in English by late 14c. For this, Old English had pleoh; in early Middle English this sense is found in peril. For sound changes, compare dungeon, which is from the same source.
Thus a post on The Conversation that was about happiness caught my eye.
I am delighted to share it with you.
ooOOoo
Philly psychology students map out local landmarks and hidden destinations where they feel happiest
I am the director of the Happiness Lab at Drexel University, where I also teach a course on happiness. The Happiness Lab is a think tank that investigates the ingredients that contribute to people’s happiness.
Often, my students ask me something along the lines of, “Dr. Z, tell us one thing that will make us happier.”
As a first step, I advise them to spend more time outside.
Achieving lasting and sustainable happiness is more complicated. Research on the happiest countries in the world and the places where people live the longest, known as Blue Zones, shows a common thread: Residents feel they are part of something larger than themselves, such as a community or a city.
So if you’re living in a metropolis like Philadelphia, where, incidentally, the Declaration of Independence and its iconic pursuit of happiness phrase were adopted, I believe urban citizenship – that is, forming an identity with your urban surroundings – should also be on your list.
This relationship between people and their surroundings has long been seen as crucial to our psychological well-being.
More recent research in neuroscience and functional imaging has revealed a vast, intricate and complex neurological architecture underlying our psychological perception of a place. Numerous neurological pathways and functional loops transform a complex neuropsychological process into a simple realization: I am happy here!
For example, a happy place should feel safe.
The country of Croatia, a tourist haven for its beauty and culinary delights, is also one of the top 20 safest countries globally, according to the 2025 Global Peace Index.
The U.S. ranks 128th.
The availability of good food and drink can also be a significant factor in creating a happy place.
However, according to American psychologist Abraham Maslow, a pioneer in the field of positive psychology, the opportunity for social connectivity, experiencing something meaningful and having a sense of belonging are more crucial.
Furthermore, research on happy places suggests that they are beautiful. It should not come as a surprise that the happiest places in the world are also drop-dead gorgeous, such as the Indian Ocean archipelago of Mauritius, which is the happiest country in Africa, according to the 2025 World Happiness Report from the University of Oxford and others.
Happy places often provide access to nature and promote active lifestyles, which can help relieve stress. The residents of the island of Ikaria in Greece, for example, one of the original Blue Zones, demonstrate high levels of physical activity and social interaction.
I asked my undergraduate psychology students at Drexel, many of whom come from other cities, states and countries, to pick one place in Philadelphia where they feel happy.
From the 243 student responses, the Happiness Lab curated 28 Philly happy places, based on how frequently the places were endorsed and their accessibility.
Philadelphia’s founder, William Penn, would likely approve that Rittenhouse Square Park and three other public squares – Logan, Franklin and Washington – were included. These squares were vital to Penn’s vision of landscaped public parks to promote the health of the mind and body by providing “salubrious spaces similar to the private garden.” They are beautiful and approachable, serving as “places to rest, take a pause, work, or read a book,” one student told us.
My students also picked out small, unexpected spots that, they said, provide an excellent opportunity for a quiet, peaceful break and a chance to be present, whether enjoyed alone or with a friend. I checked them out and I agree.
The students also mentioned places I had never heard of even though I’ve lived in the city for over 30 years.
The “cat park” at 526 N. Natrona St. in Mantua is a quiet little park with an eclectic personality and lots of friendly cats.
Mango Mango Dessert at 1013 Cherry St. in Chinatown, which is a frequently endorsed happiness spot among the students because of its “bustling streets, lively atmosphere and delicious food,” is a perfect pit stop for mango lovers. And Maison Sweet, at 2930 Chestnut St. in University City, is a casual bakery and cafe “where you may end up staying longer than planned,” one student shared.
I find that Philly’s happy places, as seen through the eyes of college students, tend to offer a space for residents to take time out from their day to pause, reset, relax and feel more connected and in touch with the city.
Happiness principles are universal, yet our own journeys are very personal. Philadelphians across the city may have their own lists of happy places. There are really no right or wrong answers. If you don’t have a personal happy space, just start exploring and you may be surprised by what you find, including a new sense of happiness.
See the full Philly Happiness Map list here, and visit the exhibit at the W.W. Hagerty Library at Drexel University to learn more.
ooOOoo
Everything in space – from the Earth and Sun to black holes – accounts for just 15% of all matter in the universe. The rest of the cosmos seems to be made of an invisible material astronomers call dark matter.
Astronomers know dark matter exists because its gravity affects other things, such as light. But understanding what dark matter is remains an active area of research.
With the release of its first images this month, the Vera C. Rubin Observatory has begun a 10-year mission to help unravel the mystery of dark matter. The observatory will continue the legacy of its namesake, a trailblazing astronomer who advanced our understanding of the other 85% of the universe.
As a historian of astronomy, I’ve studied how Vera Rubin’s contributions have shaped astrophysics. The observatory’s name is fitting, given that its data will soon provide scientists with a way to build on her work and shed more light on dark matter.
Wide view of the universe
From its vantage point in the Chilean Andes mountains, the Rubin Observatory will document everything visible in the southern sky. Every three nights, the observatory and its 3,200 megapixel camera will make a record of the sky.
This camera, about the size of a small car, is the largest digital camera ever built. Images will capture an area of the sky roughly 45 times the size of the full Moon. With such a big camera and such a wide field of view, Rubin will produce about five petabytes of data every year. That’s roughly 5,000 years’ worth of MP3 songs.
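To get a feel for that comparison, here is a quick back-of-the-envelope check in Python. The MP3 bitrate is my assumption (256 kbps), not a figure from the article; at lower bitrates the pile of songs would last even longer.

```python
# Rough check of the "5,000 years of MP3s" comparison.
# Assumption (not from the article): MP3 audio encoded at 256 kbps.

PETABYTE = 10**15                     # bytes
yearly_data = 5 * PETABYTE            # ~5 PB of survey data per year

mp3_bitrate_bps = 256_000             # bits per second (assumed)
mp3_bytes_per_second = mp3_bitrate_bps / 8

seconds_of_audio = yearly_data / mp3_bytes_per_second
years_of_audio = seconds_of_audio / (60 * 60 * 24 * 365.25)

print(f"{years_of_audio:,.0f} years of continuous MP3 audio")
# Prints roughly 4,951 years, in line with the article's ~5,000-year figure.
```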
After weeks, months and years of observations, astronomers will have a time-lapse record revealing anything that explodes, flashes or moves – such as supernovas, variable stars or asteroids. They’ll also have the largest survey of galaxies ever made. These galactic views are key to investigating dark matter.
Galaxies are the key
Deep field images from the Hubble Space Telescope, the James Webb Space Telescope and others have visually revealed the abundance of galaxies in the universe. These images are taken with a long exposure time to collect the most light, so that even very faint objects show up.
Researchers now know that those galaxies aren’t randomly distributed. Gravity and dark matter pull and guide them into a structure that resembles a spider’s web or a tub of bubbles. The Rubin Observatory will expand upon these previous galactic surveys, increasing the precision of the data and capturing billions more galaxies.
In addition to helping structure galaxies throughout the universe, dark matter also distorts the appearance of galaxies through an effect referred to as gravitational lensing.
Light travels through space in a straight line − unless it gets close to something massive. Gravity bends light’s path, which distorts the way we see it. This gravitational lensing effect provides clues that could help astronomers locate dark matter. The stronger the gravity, the bigger the bend in light’s path.
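As a rough illustration of how the size of the bend depends on mass, here is a short Python sketch using the standard point-mass deflection formula from general relativity, alpha = 4GM/(c²b). The solar values are my assumptions, chosen only to give a familiar benchmark; they are not from the article.

```python
import math

# Deflection of light passing a point mass: alpha = 4GM / (c^2 * b).
# Values below are for light grazing the Sun, purely to illustrate
# "the stronger the gravity, the bigger the bend".

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # mass of the Sun, kg
R_sun = 6.957e8      # radius of the Sun, m (the impact parameter b)

alpha_rad = 4 * G * M_sun / (c**2 * R_sun)
alpha_arcsec = math.degrees(alpha_rad) * 3600

print(f"Starlight grazing the Sun is bent by about {alpha_arcsec:.2f} arcseconds")
# A galaxy cluster, being vastly more massive, bends light far more,
# producing the contorted, magnified images described below.
```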
The white galaxies seen here are bound in a cluster. The gravity from the galaxies and the dark matter bends the light from the more distant galaxies, creating contorted and magnified images of them. NASA, ESA, CSA and STScI
Discovering dark matter
For centuries, astronomers tracked and measured the motion of planets in the solar system. They found that all the planets followed the path predicted by Newton’s laws of motion, except for Uranus. Astronomers and mathematicians reasoned that if Newton’s laws are true, there must be some missing matter – another massive object – out there tugging on Uranus. From this hypothesis, they discovered Neptune, confirming Newton’s laws.
With the ability to see fainter objects in the 1930s, astronomers began tracking the motions of galaxies.
California Institute of Technology astronomer Fritz Zwicky coined the term dark matter in 1933, after observing galaxies in the Coma Cluster. He calculated the mass of the galaxies based on their speeds, which did not match their mass based on the number of stars he observed.
He suspected that the cluster could contain an invisible, missing matter that kept the galaxies from flying apart. But for several decades he lacked enough observational evidence to support his theory.
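Here is a minimal sketch of the kind of comparison Zwicky was making, using the simple virial relation M ≈ σ²R/G to turn galaxy speeds into a mass. The velocity dispersion, cluster radius and galaxy count below are rough illustrative values I have assumed, not Zwicky's 1933 figures.

```python
# Order-of-magnitude version of Zwicky's argument: compare the mass needed
# to hold a cluster together (estimated from galaxy speeds via M ~ sigma^2 R / G)
# with the mass visible as stars. All input numbers are illustrative guesses.

G = 6.674e-11                 # m^3 kg^-1 s^-2
M_sun = 1.989e30              # kg
Mpc = 3.086e22                # metres

sigma = 1.0e6                 # assumed velocity dispersion, ~1000 km/s
R = 3 * Mpc                   # assumed cluster radius, ~3 Mpc

dynamical_mass = sigma**2 * R / G         # mass implied by the motions
luminous_mass = 1000 * 1e11 * M_sun       # ~1000 galaxies of ~1e11 suns each

print(f"dynamical mass: {dynamical_mass / M_sun:.1e} solar masses")
print(f"luminous mass:  {luminous_mass / M_sun:.1e} solar masses")
print(f"ratio: about {dynamical_mass / luminous_mass:.0f} to 1")
# The motions demand several times more mass than the visible stars supply;
# that gap is what Zwicky attributed to unseen matter.
```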
In 1965, Vera Rubin became the first woman hired onto the scientific staff at the Carnegie Institution’s Department of Terrestrial Magnetism in Washington, D.C.
She worked with Kent Ford, who had built an extremely sensitive spectrograph and was looking to apply it to a scientific research project. Rubin and Ford used the spectrograph to measure how fast stars orbit around the center of their galaxies.
In the solar system, where most of the mass is within the Sun at the center, the closest planet, Mercury, moves faster than the farthest planet, Neptune.
“We had expected that as stars got farther and farther from the center of their galaxy, they would orbit slower and slower,” Rubin said in 1992. Instead, her measurements showed stars far from the center orbiting just as fast as those close in: the rotation curves stayed flat.
“And that really leads to only two possibilities,” Rubin explained. “Either Newton’s laws don’t hold, and physicists and astronomers are woefully afraid of that … (or) stars are responding to the gravitational field of matter which we don’t see.”
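To see why the second possibility follows from her plots, here is a short Python sketch. For a star on a circular orbit, the mass enclosed within its radius is M(<r) = v²r/G, so if the orbital speed stays flat while the visible stars thin out, the enclosed mass must keep growing. The flat speed and the radii below are illustrative values I have assumed, not Rubin's measurements.

```python
# Why a flat rotation curve points to unseen mass: for a circular orbit,
# the mass enclosed within radius r is M(<r) = v^2 * r / G. If v stays
# constant out to large r, M(<r) keeps growing even where few stars remain.
# The speed and radii are illustrative, not measured values.

G = 6.674e-11          # m^3 kg^-1 s^-2
M_sun = 1.989e30       # kg
kpc = 3.086e19         # metres

v = 2.2e5              # assumed flat orbital speed, ~220 km/s

for r_kpc in (5, 10, 20, 40):
    r = r_kpc * kpc
    enclosed = v**2 * r / G / M_sun
    print(f"r = {r_kpc:>2} kpc -> enclosed mass ~ {enclosed:.1e} solar masses")
# The enclosed mass doubles each time the radius doubles, which is the
# "matter which we don't see" in Rubin's second option.
```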
Data piled up as Rubin created plot after plot. Her colleagues didn’t doubt her observations, but the interpretation remained a debate. Many people were reluctant to accept that dark matter was necessary to account for the findings in Rubin’s data.
Rubin continued studying galaxies, measuring how fast stars moved within them. She wasn’t interested in investigating dark matter itself, but she carried on with documenting its effects on the motion of galaxies.
A U.S. quarter honors Vera Rubin’s contributions to our understanding of dark matter. United States Mint, CC BY
Vera Rubin’s legacy
Today, more people are aware of Rubin’s observations and contributions to our understanding of dark matter. In 2019, a congressional bill was introduced to rename the former Large Synoptic Survey Telescope to the Vera C. Rubin Observatory. In June 2025, the U.S. Mint released a quarter featuring Vera Rubin.
Rubin continued to accumulate data about the motions of galaxies throughout her career. Others picked up where she left off and have helped advance dark matter research over the past 50 years.
In the 1970s, physicist James Peebles and astronomers Jeremiah Ostriker and Amos Yahil created computer simulations of individual galaxies. They concluded, similarly to Zwicky, that there was not enough visible matter in galaxies to keep them from flying apart.
They suggested that whatever dark matter is − be it cold stars, black holes or some unknown particle − there could be as much as 10 times more dark matter than ordinary matter in galaxies.
Throughout its 10-year run, the Rubin Observatory should give even more researchers the opportunity to add to our understanding of dark matter.