Insights from the Science & AI Research Foundation. Exploring how human and machine intelligence together advance the frontiers of research.
History will likely remember this era as the moment humanity first encountered a non-biological intelligence of its own making. But the bridge between today’s large language models and the Artificial General Intelligence (AGI) of the future requires a fundamental shift in strategy.
The current trajectory has proven one thing conclusively: industrial-scale execution works. Feeding massive data into expanding architectures has yielded incredible results, but we are now engineering ahead of our understanding. We have effectively built brighter, yet inherently limited, incandescent bulbs before grasping the underlying material science. Current models are triumphs of empirical engineering. Their capabilities scale with power input, yet they lack the efficiency and reasoning required for true General Intelligence.
The challenge of the coming years is not to abandon scaling, but to expand its geometry. We must move beyond the single axis of parameters to build “brighter filaments” through broad scientific synthesis. This demands that we scale the connections themselves—deploying our infrastructure to bridge AI with mathematics and the full spectrum of natural disciplines. Just as the transition to modern lighting required a leap from brute-force heating to material science, reaching AGI requires us to scale our scientific horizon, breaking current bottlenecks by discovering the laws of intelligence that lie at the intersection of these fields.
We stand at a threshold where architectural scaling offers diminishing returns. To advance, the field must transcend the simple expansion of datasets and focus on establishing the foundational first principles of intelligence. We must treat the discovery of these governing dynamics not as a philosophical debate, but as an optimization problem as grand and resource-intensive as model training. This broadening is necessary because the industry operates with an incomplete scientific framework: we lack a unified mathematical theory connecting the empirical performance of these systems to a fundamental understanding of how intelligence emerges from them. As SAIR co-founder and Fields Medalist Terence Tao frames the challenge: "In physics, we have made significant progress in understanding how macroscopic laws emerge from microscopic first principles, for instance deriving the laws of fluids or thermodynamics from the interactions of individual particles. A major challenge for the twenty-first century will be to similarly understand how emergent machine learning laws, such as those relating to scaling or transfer learning, can emerge from the mechanics of training a neural network or transformer, and to locate useful mathematical models of real-world data that can lead to such understanding."
With these first principles, our roadmap evolves from relying solely on empirical scaling laws to utilizing theoretical scaling laws. While current trends act as remarkably predictive heuristics, they describe what happens rather than why. By scaling the science, we turn these scaling curves from descriptive observations into prescriptive engineering controls. This ensures that rather than optimizing against an invisible ceiling, we can continuously architect the structure of the system to accommodate indefinite growth.
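As a toy illustration of the distinction, an empirical scaling law is simply a curve fitted to observed points. The sketch below uses made-up constants rather than real measurements: it generates synthetic losses from an assumed power law L(N) = A·N^(−α), recovers the parameters by a log-log fit, and extrapolates — exactly the descriptive, "what happens" regime discussed above, with no account of why the exponent takes that value.

```python
import numpy as np

# Synthetic loss values generated from an assumed power law L(N) = A * N**(-alpha).
# A_true, alpha_true, and the parameter counts N are illustrative, not taken
# from any real model.
A_true, alpha_true = 400.0, 0.076
N = np.array([1e7, 1e8, 1e9, 1e10, 1e11])
loss = A_true * N ** (-alpha_true)

# An empirical scaling law is a curve fit: a power law is linear in log-log
# space, so a degree-1 polynomial fit recovers the exponent and prefactor.
slope, intercept = np.polyfit(np.log(N), np.log(loss), 1)
alpha_fit, A_fit = -slope, float(np.exp(intercept))

# The fitted curve describes *what* happens and can extrapolate to larger N,
# but it says nothing about *why* the exponent takes this value.
predicted_loss = A_fit * (1e12) ** (-alpha_fit)
```

A theoretical scaling law, by contrast, would derive α and A from the training dynamics themselves, turning the same curve into a prescriptive control.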
Transitioning to a fundamental science of intelligence requires a new institutional structure. Nature ignores administrative boundaries; future breakthroughs demand the unification of currently fragmented disciplines—from theoretical physics and neuroscience to mathematics and high-performance computing. We are building the infrastructure to operationalize this multi-dimensional scaling. SAIR functions as a high-bandwidth platform designed to maximize connectivity, transforming scientific inquiry from isolated silos into a unified, reactive network. In this system, knowledge does not just accumulate; it propagates. A theoretical breakthrough in one domain instantly ripples across the entire structure, recalibrating assumptions and triggering new lines of inquiry in adjacent fields. By scaling these collaborative interactions, SAIR creates an engine of discovery capable of expanding the AGI frontier in every direction simultaneously.
We pair this collaborative framework with abundant computational resources. SAIR co-founder and Dean of UCLA Physical Sciences Miguel A. García-Garibay articulates our vision: “SAIR’s mission includes the identification, categorization, and description of the fundamental principles behind AGI, the internal structures that make it excel in all areas of human-like cognition, and their deployment in pursuit of scientific and technological advancement.” By applying the scale of modern AI to the scientific process itself, we enable researchers to rapidly prototype and verify theoretical models across these disciplines. This compresses the latency between intuition and evidence, turning the search for AGI from a guessing game into a systematic engine of discovery. Supported by Nobel, Fields, and Turing Laureates, we operate with the certainty of David Hilbert: We must know, and we will know.
14.1.2026 23:25
Scaling the Science of Intelligence

If we were to leave a message for a generation living a thousand years from now, what would be worth saying?
Bertrand Russell, speaking at the height of the Cold War, offered an answer that resonates more urgently today than it did in 1959. He urged us to look solely at the facts—to ask "what is the truth that the facts bear out"—and to never be blinded by what we wish to believe.
For centuries, this pursuit was the engine of human liberation. We were once a tribe huddled around a fire, mistaking our small circle of light for the entirety of existence. Science expanded that circle, freeing us from the darkness of ignorance that once confined us.
But today, we face a new kind of darkness. We are no longer just struggling to find facts; we are building machines capable of synthesizing them. We stand at the precipice of a new era, but there is a critical disconnect in our trajectory: AI development is currently blind.
It is an engine of unparalleled horsepower, but it moves without a map. To ensure that AI truly advances the progress of humankind, we cannot leave it to technologists alone. We must reconstruct the infrastructure of discovery by uniting the scientific community to provide the one thing silicon cannot: Vision.
AI, in its current form, is a statistical miracle but a scientific novice. It optimizes for plausibility—what looks like an answer—rather than truth.
When AI enters the realm of science without deep domain expertise, this blindness becomes dangerous. As our co-founder and Fields Medalist Terence Tao has warned, the existential risk of AI is not necessarily a cinematic apocalypse; it is a subtle corruption of trust. Because AI lowers the cost of generating "reality" to near zero, it threatens to flood our scientific ecosystem with plausible fabrications.
If a language model hallucinates a poem, it is a quirk. If it hallucinates a chemical compound or a mathematical proof, it is a contagion. Without the rigorous "vision" of expert knowledge to distinguish signal from noise, we risk optimizing for the wrong objectives—chasing dead-end theories or deploying risky technologies simply because the model "sounded" convincing.
Speed is not velocity. Speed is distance over time; velocity is speed plus direction. AI provides the speed. Science must provide the direction.
However, just as we need human vision the most, the mechanisms that create it are eroding. The rot goes deeper than budget cuts; it is a crisis of continuity.
When a field loses support, we don't just lose time; we lose the lineage of expertise. We are already seeing the symptoms of this "institutional amnesia" with the Apollo program. Because we allowed that human ecosystem to wither, we lost the ability to fully contextualize the achievement.
We are now risking this same amnesia on a global scale. AI is beginning to fracture the lineage of occupation. By automating the entry-level work—the "sandbox" where novices gain experience by making necessary mistakes—we are removing the training ground for human intuition.
The human scientist is the ultimate error-correction mechanism. If we eliminate the space where humans fail, learn, and develop deep expertise, we ensure that once the current generation of leaders retires, there will be no one left who understands the fundamental principles behind the machine. We create a future where we are passengers in a vehicle that no one knows how to drive.
To fix the blindness of machines and secure the future of human expertise, we need a new architecture of guidance.
We must build a future where the world’s top minds do more than just advise; they must embed their "know-how" into the AI itself. We need to define the "ground truth" across scientific fields, designing objective functions that force machines to optimize for reality, not just likelihood.
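One way to picture "optimize for reality, not just likelihood" is a composite objective that penalizes candidates failing a ground-truth check. In this toy sketch the candidate answers, plausibility scores, and penalty weight are all invented for illustration; the point is only that the two objectives can pick different answers.

```python
# Two candidate answers: one fluent but false, one less "confident" but true.
# All scores are hypothetical.
candidates = [
    {"answer": "2 + 2 = 5", "plausibility": 0.9},
    {"answer": "2 + 2 = 4", "plausibility": 0.7},
]

def ground_truth_check(answer: str) -> bool:
    """Stand-in for expert or formal verification of a simple arithmetic claim."""
    lhs, rhs = answer.split("=")
    return eval(lhs) == int(rhs)  # eval is only a placeholder verifier for this toy

def composite_score(c: dict, penalty: float = 10.0) -> float:
    """Plausibility minus a heavy penalty for failing the ground-truth check."""
    return c["plausibility"] - (0.0 if ground_truth_check(c["answer"]) else penalty)

best_by_likelihood = max(candidates, key=lambda c: c["plausibility"])  # picks the false claim
best_by_reality = max(candidates, key=composite_score)                 # picks the true claim
```

The design choice is the penalty term: once verification enters the objective, a confident fabrication can no longer outscore a verified truth.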
Crucially, this architecture must also evolve the role of the scientist to prevent the erosion of expertise. Rather than being replaced by the machine, the scientist must become the Architect of Verification. By shifting the human focus from the execution of every granular detail to high-level validation and logical architecture, we create a new training ground for intuition. We keep the human in the loop, ensuring the lineage of knowledge remains unbroken.
SAIR stands as the bridge between the rigid integrity of Science and the fluid brilliance of AI. We are building the eyes for the giant. Because in the pursuit of discovery, speed is meaningless if we are moving in the dark.
1.1.2026 18:10
SAIR's Perspective on Science and AI

In the latest episode of the SAIR podcast, we were treated to a unique dynamic: a conversation between Riley Tao and Riley’s father, the renowned mathematician and Fields Medalist, Terence Tao.
While the world speculates on when AI might replace human researchers, Professor Tao offers a more grounded and optimistic perspective. For him, AI is a powerful engine that requires a specific chassis to be safe.
Here are the key takeaways from their conversation regarding the evolving partnership between mathematicians and AI.
When asked how he uses AI today, Tao likens current AI technology to a jet engine: extremely powerful and capable of high speeds, but dangerous if you simply strap it to your back.
"We don't strap jet engines on people and fly around in jetpacks (for commercial transportation)," Tao explains. "But we do have very safe, reliable planes that use jet engines to cross the Atlantic."
We are currently in a transition period. The raw engine (the Large Language Model) is powerful but unreliable. The goal of current research is to build the "plane" around it — the verification tools and workflows that allow us to harness that power safely.
One of the most compelling moments of the interview was Tao’s description of the reliability problem. In science and math, we crave the "clean water" of truth and rigorous results.
Tao describes the traditional scientific process as a tap that produces high-quality drinkable water, but at a very slow trickle. AI, conversely, is a "firehose of high volume, high velocity sewage water."
It produces massive amounts of data, code, and text, but it is filled with "crud" and hallucinations. The challenge and the opportunity lie in building a filter.
"In math, I think we really have a chance to make this happen because we understand verification very, very well," Tao says. Unlike other sciences that rely on clinical trials or physical experiments, mathematics has formal proof assistants (like Lean). These act as compilers that can grade the output of an AI with 100% confidence.
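To make this concrete, here is a trivial machine-checkable statement in Lean 4 (the theorem name is our own, for illustration). A proof assistant either accepts the proof or rejects it; there is no "plausible but wrong" outcome.

```lean
-- If this file compiles, the statement is proven; if it does not, the claim
-- is rejected. The checker grades the proof, not its persuasiveness.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

This all-or-nothing property is what lets a formal checker grade AI output with full confidence, unlike sciences where validation requires slow physical experiments.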
If we can successfully attach the "sewage firehose" of AI to the "filter" of formal verification, we can achieve something unprecedented: high-volume, drinkable research.
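The firehose-plus-filter pipeline can be sketched as a generate-then-verify loop. The generator and checker below are stand-ins invented for illustration — a real system would pair a language model with a proof assistant — but the shape is the same: high-volume, low-trust generation in; only verified output out.

```python
import random

def untrusted_generator() -> tuple[int, int, int]:
    """Stand-in for an AI proposing claims of the form a + b = c; roughly half are wrong."""
    a, b = random.randint(0, 99), random.randint(0, 99)
    c = a + b if random.random() < 0.5 else a + b + 1  # half the claims are "crud"
    return a, b, c

def verifier(claim: tuple[int, int, int]) -> bool:
    """Stand-in for a formal checker: accepts a claim only if it can confirm it."""
    a, b, c = claim
    return a + b == c

# The "filter": generate at volume, keep only what the verifier accepts.
verified = [claim for claim in (untrusted_generator() for _ in range(1000))
            if verifier(claim)]
```

Everything that survives the filter is drinkable by construction; the generator's error rate only affects throughput, never correctness.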
So, what does the workflow look like when the water is filtered? Tao references Ricardo’s Law of Comparative Advantage from economics. Even if an AI eventually becomes better than a human at everything (which it isn't yet), it is most efficient to assign tasks based on relative strength.
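The principle can be made concrete with a toy calculation (the throughput numbers are invented for illustration). Even if the AI is absolutely faster at both tasks, each party should take the task where its opportunity cost — what it gives up by not doing the other task — is lowest.

```python
# Hypothetical units of work per hour; the AI is absolutely faster at both tasks.
rates = {
    "AI":    {"search": 100.0, "proof_writing": 50.0},
    "human": {"search": 10.0,  "proof_writing": 40.0},
}

def opportunity_cost(agent: str, task: str, other_task: str) -> float:
    """How many units of `other_task` the agent forgoes per unit of `task`."""
    return rates[agent][other_task] / rates[agent][task]

def assign(task: str, other_task: str) -> str:
    """Assign the task to whichever agent gives up the least to do it."""
    return min(rates, key=lambda agent: opportunity_cost(agent, task, other_task))

# The AI forgoes 2.0 searches per proof; the human forgoes only 0.25,
# so the human writes proofs and the AI searches, despite the AI's absolute edge.
proof_writer = assign("proof_writing", "search")
searcher = assign("search", "proof_writing")
```

This is why "the AI is better at everything" would not imply "the AI should do everything": total output is maximized by specializing along relative strengths.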
Tao highlighted a recent collaboration with Google DeepMind regarding "Nikodym sets." The AI was able to construct clever, specific examples that optimized a certain score. Tao couldn't scan the millions of possibilities the AI did, but once the AI handed him the examples, he was able to read the code, understand the logic, and write a human-generated proof that generalized the concept for all sizes.
Finally, Riley and Terence discussed the state of AI evaluation. While benchmarks (like the International Math Olympiad) were useful early targets, Tao believes we are reaching a saturation point where models might be "teaching to the test."
The next frontier isn't a higher test score, but usability.
Drawing a comparison to Steve Jobs, Tao noted that the next great leap in AI for science won't necessarily be raw power, but the "last mile" of software development — making these tools intuitive and practical for the working scientist.
The future of mathematics isn't about AI solving everything while humans watch. It is about a symbiotic relationship where AI acts as the explorer and synthesizer, and humans act as the verifiers and creative architects.
As Tao concludes, "We need all the help we can get. I think it's a great future for math and science."
Listen to the full conversation on the SAIR Podcast.
19.12.2025 22:08
On the SAIR: Episode 2 — Turning AI's Firehose Into Usable Science with Terence Tao & Riley Tao

When the White House launched the Genesis Mission, it finally admitted what every top scientist has been saying privately for years:
The system is broken — not the science.
For decades, the U.S. has been running a 21st-century innovation race on 20th-century infrastructure. AI is exploding. Discovery is accelerating. Yet our research ecosystem remains stuck in slow committees, antiquated funding cycles, and bureaucratic structures that penalize ambition and delay progress.
Genesis is Washington’s way of saying:
“We can’t afford to fall behind.”
At SAIR, we agree.
Genesis acknowledges a simple truth:
America doesn’t lack genius. It lacks structure.
Fields Medalists. Nobel laureates. Turing winners.
They’re all hitting the same wall — a system designed to minimize risk, not maximize discovery.
Meanwhile AI is transforming the entire landscape:
• Modeling faster than labs can validate
• Generating insights faster than institutions can approve
• Opening new fields before old ones have funding
This mismatch is dangerous. It slows breakthroughs and amplifies noise.
SAIR is a scientist-led attempt to rebuild what modern discovery actually needs.
Co-founded by Prof. Terence Tao, SAIR unites a global network of the world’s top researchers — and gives them the structure the system won’t.
While Genesis sketches the aspiration, SAIR is already delivering the architecture:
No 18-month grant cycles.
No committees killing breakthrough ideas.
Just direct support to the labs and people who push the edge of mathematics, physics, chemistry, biology, and AI.
Not hype.
Not hallucination.
But real pipelines, all designed with — and for — the scientists doing the work.
Genesis talks about coordination.
SAIR is already wired into:
• Fortune 500 R&D teams
• major labs and universities
• deep-tech founders
• AI research groups
• federal and enterprise partners
Breakthroughs shouldn’t die in PDFs.
They should move.
The next Einstein won’t survive in a system optimized for incremental progress.
Genesis recognizes this.
SAIR is fixing it.
Every Nobel, Fields, and Turing winner in our network says the same thing:
This decade will decide who leads the next century of science.
AI gives us leverage unlike anything in human history — but only if the infrastructure around it works.
Genesis is the policy signal.
SAIR is the execution layer.
We welcome the Genesis Mission.
It signals that the U.S. is finally taking the crisis seriously.
But we can’t afford another decade of reports, task forces, and cautious funding models.
We need:
• scientist-led decision-making
• AI-native research ecosystems
• fast capital
• robust industry pipelines
• and global coordination at scale
This is the architecture SAIR is building.
The future will be built by the people who understand the frontier and the structures that empower them.
SAIR is here to build those structures.
And this time, we’re not waiting.
8.12.2025 18:10
The Genesis Mission Is a Wake-Up Call — and a Historic Opportunity to Rebuild Humanity’s Contract with Science and AI

In October, we recorded the very first episode of On The SAIR, a new podcast from the Science & AI Research Foundation — where we explore how artificial intelligence can responsibly accelerate discovery across every field of science.
For our debut episode, host Peter sat down with Professor Terence Tao (UCLA) and Chuck Ng (Co-Founder, World Leading Scientists Institute) to discuss what AI means for the future of research — from mathematics to biology, from education to ethics.
Here are a few highlights that stood out:
Terence reminded us that the real promise of AI isn’t about replacing human scientists — it’s about removing the repetitive and time-consuming parts of research. When AI handles the “drudge work,” people can focus on creativity, intuition, and breakthrough thinking.
No scientist can keep up with the vastness of human knowledge. Tools that help organize, summarize, and connect what’s already known will be transformative. In science, that’s half the battle.
Chuck emphasized that mathematics offers a natural starting point for AI-for-science — it’s structured, benchmarkable, and verifiable. Once we understand the patterns of responsible use, those methods can expand across disciplines.
Both speakers agreed that AI should be integrated into learning, not banned. Students can use AI tools — but they must show prompts, reasoning, and process. As Terence put it, “You can’t just give the answer — you have to show your work.”
Projects, hands-on applications, and balanced policies will shape a new generation of scientific thinkers.
The real risks are not about AI taking over humanity, as in the Terminator movies — that framing is often more marketing than substance. The real challenges lie in authenticity and trust, from misinformation such as deepfakes. Transparency, cultural norms, and sound policy will matter far more than fear.
We should never lose the ability to verify AI outputs. The rule of thumb: use AI only as far as you can trust its outputs. In mathematics we can verify outputs automatically with reliable tools; in other sciences, lab replication and simulation play that role.
AI and science have always shared a common goal — to understand the world more deeply. What’s changing is how we collaborate with intelligence itself.
We’re just getting started.
Watch now: https://youtu.be/Rm1mHfwlS2w?si=NQ-zNEl84iMlrXqo
Subscribe to On The SAIR for upcoming conversations with the thinkers shaping the next era of scientific discovery.
13.11.2025 18:10
On the SAIR: Episode 1 — AI × Science with Terence Tao & Chuck Ng

Exploring the why, the how, and the vast opportunity ahead
The Science & AI Research Foundation (SAIR) is a non-profit institution headquartered in the US that unites the world’s greatest minds — from leading scientists and universities to frontier enterprises — to advance discovery in the age of intelligent systems.
Led by pioneering figures such as Professor Terence Tao, SAIR funds frontier research, builds open scientific infrastructure, and convenes global conferences and awards programs celebrating breakthroughs at the intersection of science and AI.
At SAIR, our vision is simple yet profound: to empower global science in the age of intelligent discovery. We believe that the union of human intellect and machine intelligence has the potential to unlock new laws of nature, accelerate discovery, and expand the boundaries of human understanding.
But why does an institution like SAIR need to exist — today, now — and what gap are we filling? In this first blog post, we dive into the “why”, draw from how the research and AI ecosystem is evolving, and set out what we’ll bring to this blog series going forward.
Science has always advanced via curiosity, experiment, iteration, and collaboration. Yet for many researchers and institutions around the world, the path is increasingly constrained by slow committees, antiquated funding cycles, and bureaucratic structures.
Against this backdrop, the role of AI in science is shifting rapidly. For example, researchers at Google Research introduced an “AI co-scientist” agent aimed at collaborating with human scientists to generate hypotheses and accelerate discovery. Meanwhile, recent reviews of “AI for Science” show that although the potential is real, broader adoption remains hindered by methodological and ecosystem gaps.
So the question becomes: if AI is going to transform science, are we ready for the cultural, structural, global shift required? That’s where SAIR steps in.
SAIR exists at the intersection of three converging opportunities:
a) AI + science: We recognize that AI is not just a tool for incremental improvement — it can change how we do science: from hypothesis generation, to experiment design, to data interpretation. But this requires new infrastructure, new funding modes, and new open frameworks.
b) Global inclusion and open science: Too often the frontier of science is dominated by a narrow set of institutions or geographies. We believe an open global ecosystem — where researchers everywhere can participate, contribute, and benefit — creates exponential advantage.
c) Ethical and collaborative approach: As we apply AI in science, issues of transparency, data governance, reproducibility and intellectual freedom become critical. SAIR is built around those values: open datasets, transparent standards, authors retaining ownership, and a community ethos rather than a closed platform.
In short: SAIR isn’t just funding more science; we are enabling revolutionary science — science done with AI, globally inclusive, open, collaborative, responsible. Researchers don’t just receive support — they also contribute knowledge and resources — anchoring a virtuous cycle.
In this weekly blog series, we’ll explore these themes in depth. Our goal is not simply to post announcements. We want this blog to be a platform for ideas, debate, and showcases of frontier work — and a mirror of our ecosystem in action.
Whether you’re a physicist exploring new laws of nature, a biologist mapping complex systems, a mathematician uncovering hidden structures, or a social scientist modeling societies — the frontier is shifting. The opportunity is vast. The infrastructure, partnership and ecosystem must scale accordingly.
If you believe — as we do — that science should be open, shared, accelerated by the best tools, and global in scope, then we invite you to join us. Watch this space for upcoming calls, workshops and collaborative opportunities from SAIR.
Thank you for reading our first post. We look forward to sharing this journey with you — and building, together, the next wave of scientific discovery.
— The SAIR Team
4.11.2025 01:00
Welcome to the SAIR Blog