Mathematical Foundations for a Temporal Process Theory
A. Sabine · Version 6.0 · March 2026
You don't experience time as a smooth flow—you experience distinct "nows," each one feeling complete before giving way to the next. William James called these "perchings" and "flights" of consciousness.
CRR proposes that this pattern—build up, break, rebuild—is a universal signature of how bounded systems navigate time. It is grounded in process philosophy (Whitehead, 1929): reality is not made of things that change, but of changes that occasionally cohere into things.
Think of coherence as "accumulating evidence." As you read this sentence, your brain integrates information—words build on words, meaning accumulates. Mathematically, coherence is the integral of a system's activity over time: everything it has done since its last transition, added up.
When accumulated coherence saturates the system's capacity, transformation occurs. This is rupture—the discrete, instantaneous moment when the system must reorganise. The key insight: rupture happens when the product of accumulated evidence ($C$) and the system's characteristic variance ($\Omega$) reaches exactly 1. This is the Cramér-Rao bound from statistics, the Heisenberg uncertainty principle from physics, and the Gabor limit from signal processing—all the same equation.
After rupture, the system rebuilds using its history—but not all history equally. Moments of high coherence contribute more; moments of low coherence fade away. This is why significant experiences shape you more than forgettable ones, regardless of when they occurred. Memory is weighted by significance, not recency.
The framework has one central parameter: $\Omega = \sigma^2$ — the system's characteristic variance (equivalently, the inverse of precision: $\Omega = 1/\pi$, where $\pi$ here denotes precision in the FEP sense, not the circle constant). Think of it as a dial between "rigid" and "flexible":
From the axioms (see Axioms tab), CRR derives that the coefficient of variation (CV) of inter-event intervals is $\text{CV} = \Omega/2$.
For the two fundamental symmetry classes on $S^1$: $\mathbb{Z}_2$ gives $\text{CV} = 1/(2\pi) \approx 0.159$, and $SO(2)$ gives $\text{CV} = 1/(4\pi) \approx 0.080$.
The ratio between them is exactly 2. These predictions have been tested across 100+ systems in 30+ domains. Systems that deviate tell you something specific: CV below prediction indicates active regulation; CV above prediction indicates asymmetric bistability.
If CRR is correct, it provides: a universal language for temporal dynamics across scales, parameter-free testable predictions, a bridge between information geometry and process philosophy, and a formal temporal completion of the Free Energy Principle. If CRR is wrong, the CV predictions will fail—and the deviations will be informative about what the true temporal grammar must look like.
CRR rests on a minimal set of axioms drawn from information geometry, thermodynamics, and process philosophy. Each axiom connects to established results in physics and mathematics. Together they yield parameter-free predictions testable across every domain where systems persist through change.
Any bounded system that maintains itself against dissipation does so by accumulating coherence—temporal evidence about its environment. In the language of the Free Energy Principle, this is the progressive reduction of variational free energy: as VFE decreases, $C$ increases. The system's generative model becomes a better fit to its environment with each passing moment.
The coherence integral $C$ is formally identified with accumulated Fisher information $I(\theta)$ about the system's generative model parameters $\theta$. Fisher information measures the curvature of the log-likelihood: how sharply the data distinguish between nearby hypotheses. It is the unique Riemannian metric on statistical manifolds (Čencov's theorem), meaning any theory of inference that respects sufficient statistics must use it.
The Cramér-Rao inequality then states a fundamental limit:

$$\text{Var}(\hat{\theta}) \geq \frac{1}{I(\theta)}$$
No unbiased estimator can have variance smaller than the inverse of the accumulated Fisher information. This is not a modelling assumption—it is a theorem of mathematical statistics, proven independently by Cramér (1946) and Rao (1945). Ito & Dechant (2020) extended this to stochastic thermodynamics, showing that the Cramér-Rao bound governs the trade-off between current fluctuations and entropy production in irreversible processes far from equilibrium.
CRR's contribution: the bound is not merely approached but saturated. At the moment of rupture, $C \cdot \Omega = 1$ exactly. The system has extracted the maximum information its current configuration permits.
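The saturation claim can be illustrated with the textbook case where the Cramér-Rao bound is achieved exactly: the sample mean of Gaussian data. This is a minimal numerical sketch (the distribution, sample size, and trial count are illustrative choices, not anything prescribed by CRR):

```python
import random
import statistics

def cr_bound_check(sigma=2.0, n=50, trials=20000, seed=0):
    """Estimate Var(theta_hat) * I(theta) for the Gaussian-mean MLE.

    For n i.i.d. samples from N(theta, sigma^2), the Fisher information
    is I = n / sigma^2 and the sample mean achieves Var = sigma^2 / n,
    so the product should be ~1: the bound is saturated, not just met.
    """
    rng = random.Random(seed)
    theta = 1.0
    estimates = [
        statistics.fmean(rng.gauss(theta, sigma) for _ in range(n))
        for _ in range(trials)
    ]
    var_hat = statistics.variance(estimates)
    fisher = n / sigma**2
    return var_hat * fisher

product = cr_bound_check()
print(round(product, 2))  # close to 1.0
```

For most estimators the product exceeds 1; the Gaussian mean is the efficient case, which is the regime CRR identifies with the rupture moment.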
No system can build coherence without limit. The Cramér-Rao bound demands a boundary where accumulated evidence meets system variance. CRR identifies this boundary with the Dirac delta—an instantaneous, scale-invariant moment of transformation.
In the FEP, a Markov blanket is a spatial boundary that renders internal states conditionally independent of external states. CRR proposes that the Dirac delta $\delta(\text{now})$ serves as the temporal analogue: the boundary between past and future, between coherence and regeneration.
The delta has three properties that make it the unique candidate for a temporal boundary: unit mass (definitional), pointlike (instantaneous) support, and scale invariance.
The Dirac delta distributes its unit mass across the boundary between inside (all past states—coherence accumulated within the blanket) and outside (all future states—regeneration beyond the blanket). The present moment is where inside becomes outside; where evidence becomes action; where the accumulated past becomes the reconstructed future.
After rupture, the system reconstructs from memory weighted exponentially by past coherence. $\Omega$ governs both the threshold for transformation and the depth of memory access—it is simultaneously the system's variance (in the FEP sense), its free energy limit, and its thermodynamic boundary.
$\Omega = \sigma^2$ is the system's characteristic variance—the inverse of precision ($\pi = 1/\Omega$). In thermodynamic terms, $\Omega$ sets the free energy scale: the amount of surprise (in nats) that the system can tolerate before its generative model must reorganise. This connects CRR to Jaynes' maximum entropy principle: a system with variance $\Omega$ has maximised its entropy subject to the constraint that it maintains coherence up to the threshold $1/\Omega$.
The regeneration weighting $\exp(C/\Omega)$ ensures that moments of high coherence contribute most strongly to reconstruction. This is the Boltzmann factor of statistical mechanics, with $C$ playing the role of energy and $\Omega$ playing the role of temperature. The most "energetic" (coherent) memories dominate the reconstruction, just as the most energetic microstates dominate thermodynamic averages.
This is the Cramér-Rao bound at saturation. It is simultaneously the Heisenberg uncertainty principle ($\Delta E \cdot \Delta t \geq \hbar/2$), the Gabor limit ($\Delta f \cdot \Delta t \geq 1/4\pi$), and the thermodynamic uncertainty relation. CRR claims these are not analogies—they are the same equation, expressing the same physical fact: a bounded system that has extracted maximum information from its current configuration must transform.
The product $C \cdot \Omega = 1$ holds regardless of what the system is, what it is made of, or at what scale it operates. This universality follows from the Cramér-Rao bound being a theorem of information geometry—it depends only on the structure of statistical inference, not on any particular physics. Wherever there is a system accumulating evidence about its environment with finite variance, $C \cdot \Omega = 1$ defines the moment of necessary transformation.
| Framework | Evidence | Variance | Bound | Citation |
|---|---|---|---|---|
| Statistics | Fisher information $I(\theta)$ | $\text{Var}(\hat{\theta}) = \sigma^2$ | $\sigma^2 \cdot I(\theta) \geq 1$ | Cramér (1946); Rao (1945) |
| Quantum mechanics | Energy $E$ | Time uncertainty $\Delta t$ | $\Delta E \cdot \Delta t \geq \hbar/2$ | Heisenberg (1927) |
| Signal processing | Bandwidth $\Delta f$ | Duration $\Delta t$ | $\Delta f \cdot \Delta t \geq 1/4\pi$ | Gabor (1946) |
| Thermodynamics | Current $J$ | Entropy production $\sigma$ | $\text{Var}(J) \cdot \sigma \geq 2\langle J \rangle^2$ | Ito & Dechant (2020) |
| Information geometry | Statistical distance $ds^2$ | Fisher-Rao metric $g$ | $ds^2 = g_{ij}\,d\theta^i d\theta^j$ | Čencov (1982); Amari & Nagaoka (2000) |
| CRR | Coherence $C$ | Variance $\Omega$ | $C \cdot \Omega = 1$ | Saturation of all the above |
Three things. First, saturation: the bound is not merely a lower limit but is reached at every rupture event. Second, symmetry classification: the geometric value of $\Omega$ is determined by the system's symmetry class ($\mathbb{Z}_2 \to 1/\pi$; $SO(2) \to 1/2\pi$). Third, regeneration dynamics: after the bound is saturated, $\exp(C/\Omega)$ governs how the system reconstructs from weighted memory. Ito & Dechant's thermodynamic uncertainty relation is the inequality; CRR is the equality, plus what happens next.
The Dirac delta distributes exactly one unit of mass across the rupture boundary. By symmetry between inside (coherence) and outside (regeneration), each side receives exactly one half. This fixes the standard deviation of the rupture threshold at $\sigma(C^*) = 1/2$, independent of $\Omega$.
At rupture, the threshold coherence $C^*$ satisfies $C^* \cdot \Omega = 1$, giving $E[C^*] = 1/\Omega$. The Dirac delta, as a temporal Markov blanket, partitions unit mass between past and future. By the symmetry of the boundary (there is no intrinsic asymmetry between what is accumulated and what is reconstructed), each partition receives $1/2$. Therefore:

$$\sigma(C^*) = \tfrac{1}{2}, \qquad \text{CV} = \frac{\sigma(C^*)}{E[C^*]} = \frac{1/2}{1/\Omega} = \frac{\Omega}{2}$$
For the two fundamental symmetry classes: $\mathbb{Z}_2$ ($\Omega = 1/\pi$) gives $\text{CV} = 1/(2\pi) \approx 0.159$; $SO(2)$ ($\Omega = 1/(2\pi)$) gives $\text{CV} = 1/(4\pi) \approx 0.080$.
These predictions are parameter-free—no fitting, no calibration. They have been tested across 100+ systems in 30+ domains. See the full validation at CRR Benchmarks.
Because the Dirac delta has unit mass (this is definitional), because the rupture boundary separates exactly two domains (past and future), and because there is no symmetry-breaking mechanism to favour one side over the other. Any other partition would require an additional parameter—violating the parsimony that makes $C \cdot \Omega = 1$ a first principle rather than a model.
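The equipartition argument can be checked numerically. The sketch below assumes a Gaussian distribution for the threshold (the document fixes only its mean $1/\Omega$ and standard deviation $1/2$, not its shape) and a constant accumulation rate, so that interval length is proportional to the threshold reached:

```python
import math
import random
import statistics

def interval_cv(omega, trials=100000, seed=1):
    """Simulate rupture thresholds C* with mean 1/omega and sd 1/2,
    and return the CV of the resulting inter-rupture intervals.
    A Gaussian shape and a constant accumulation rate are assumed
    purely for illustration."""
    rng = random.Random(seed)
    thresholds = [max(rng.gauss(1.0 / omega, 0.5), 1e-9) for _ in range(trials)]
    return statistics.stdev(thresholds) / statistics.fmean(thresholds)

for omega, label in [(1 / math.pi, "Z2"), (1 / (2 * math.pi), "SO(2)")]:
    print(label, round(interval_cv(omega), 3), "predicted", round(omega / 2, 3))
```

Any distribution with the stated mean and standard deviation yields the same CV; the Gaussian is just a convenient instance.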
| Axiom | Statement | Formal Grounding |
|---|---|---|
| I. Coherence | All persisting systems accumulate evidence through time | Fisher information; VFE minimisation (Friston, 2010) |
| II. Rupture | A temporal boundary (Dirac delta) is required; it distributes unit mass between past and future | Temporal Markov blanket; distribution theory (Schwartz, 1950) |
| III. Regeneration | Systems persist through transformation, rebuilding from coherence-weighted memory | Boltzmann weighting; MaxEnt (Jaynes, 1957) |
| IV. Unity | $C \cdot \Omega = 1$ at the moment of transformation, at all scales | Cramér-Rao saturation; Heisenberg limit; Gabor limit |
From four axioms, one central result follows with no free parameters: CV $= \Omega/2$, from the equipartition of the Dirac delta's unit mass across the rupture boundary.
These axioms make CRR falsifiable: any system whose CV deviates from $\Omega/2$ either has a misidentified symmetry class, is actively regulated (CV < prediction), or has asymmetric state durations (CV > prediction). Deviations diagnose; they do not rescue.
The axioms (see Axioms tab) establish the ontological commitments. The following technical assumptions ensure mathematical well-posedness.
For each $x$ and $t$, the function $\tau \mapsto \mathcal{L}(x,\tau)$ is locally integrable on $[0,t]$.
$\mathcal{L}(x,\tau) \geq 0$ for all $x, \tau$. History accumulates; it does not spontaneously dissipate.
There exists $M > 0$ such that $\mathcal{L}(x,t) \leq M$ for all $x, t$. This ensures each coherence cycle has duration at least $1/(M\Omega)$, preventing infinitely many ruptures in finite time.
The coherence accumulation rate is a function $\mathcal{L} : \mathcal{X} \times [0,T] \to \mathbb{R}_{\geq 0}$ that assigns to each state-time pair $(x,\tau)$ a non-negative rate at which the system accumulates evidence about its environment.
Dimensions: $[\mathcal{L}] = [T^{-1}]$ (a rate, so that $\int \mathcal{L}\, d\tau$ is dimensionless).
Identification: $\mathcal{L}$ is formally identified with the rate of Fisher information accumulation—the curvature of the log-likelihood of the system's generative model, accumulated per unit time.
Let $t_j$ denote the most recent rupture time before $t$ (with $t_0 = 0$). The coherence at state $x$ and time $t$ is the accumulated evidence since the last rupture:

$$C(x,t) = \int_{t_j}^{t} \mathcal{L}(x,\tau)\,d\tau$$
Properties: Dimensionless, monotone non-decreasing within each cycle, resets to 0 at each rupture.
The variance parameter $\Omega > 0$ is a positive dimensionless constant characterising the system's boundary permeability. It is identified simultaneously with the system's variance in the FEP sense (the inverse precision $\sigma^2 = 1/\pi$), its free energy limit, and its thermodynamic boundary.
$\Omega$ determines the rupture threshold $C^* = 1/\Omega$ via Axiom IV ($C \cdot \Omega = 1$).
A rupture occurs at time $t_*$ when coherence reaches the threshold set by $\Omega$:

$$C(x, t_*) = C^* = \frac{1}{\Omega}$$
The rupture event is represented by a Dirac delta $\delta(t - t_*)$. Following rupture: $C(x, t_*^+) = 0$.
Equivalently, rupture occurs when $C(x,t) \cdot \Omega \geq 1$.
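The definitions above can be sketched as a discretized simulation: integrate $\mathcal{L}$ forward, trigger a rupture whenever $C \cdot \Omega \geq 1$, and reset coherence to zero. The rate function here is a toy stand-in, not anything the framework prescribes:

```python
import math

def simulate_ruptures(rate, omega, t_max, dt=1e-3):
    """Integrate coherence C(t) = ∫ L dτ since the last rupture and
    record rupture times where C * omega >= 1 (Axiom IV), resetting
    C to 0 afterwards."""
    c, t, ruptures = 0.0, 0.0, []
    while t < t_max:
        c += rate(t) * dt          # coherence accumulates evidence
        if c * omega >= 1.0:       # rupture condition C·Ω = 1
            ruptures.append(t)
            c = 0.0                # the next cycle starts from zero
        t += dt
    return ruptures

# Constant rate L = 1: each cycle needs C* = 1/Ω = π, so ruptures are ~π apart.
times = simulate_ruptures(lambda t: 1.0, 1 / math.pi, 8.0)
print([round(x, 2) for x in times])  # [3.14, 6.28]
```

With a bounded rate (assumption A3), this loop also terminates with finitely many ruptures, as the proposition below the regeneration section states.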
The regeneration operator reconstructs the system state by weighting historical field values $\varphi(x,\tau)$ exponentially by the coherence at each historical moment:

$$R[\varphi](x,t) = \int_0^{t} \varphi(x,\tau)\,\exp\!\big(C(x,\tau)/\Omega\big)\,\Theta(t-\tau)\,d\tau$$
where $C(x,\tau)$ is the coherence value at moment $\tau$—how far into its cycle the system was at that historical moment—and $\Theta(t-\tau)$ is the Heaviside step function enforcing causality (only the accessible past contributes).
A high-coherence moment from 1000 cycles ago contributes with greater weight than a low-coherence moment from the most recent cycle. History is weighted by significance (coherence at the time), not by recency. This is consonant with Bergson's insight that memory is "the continuous presence of history," not retrieval from storage.
All temporal processes that undergo cyclic C → $\delta$ → R dynamics trace paths on the circle $S^1$. The symmetry class of the process determines the geometric value of $\Omega$:
For a system whose coherence cycle traverses phase $\varphi$ on $S^1$ before rupture:

$$\Omega = \frac{1}{\varphi}, \qquad C^* = \varphi$$

For $\mathbb{Z}_2$ (half-cycle), $\varphi = \pi$ gives $\Omega = 1/\pi$; for $SO(2)$ (full cycle), $\varphi = 2\pi$ gives $\Omega = 1/(2\pi)$.
In both cases: $C^* \cdot \Omega = 1$.
More generally, $\text{CV} = n/(4\pi)$ for $\mathbb{Z}_n$ symmetry classes, with $SO(2)$ occupying the $n = 1$ slot (a single full cycle). The symmetry classes are partitions of the circle: $\mathbb{Z}_2$ divides $S^1$ into two arcs of $\pi$; $\mathbb{Z}_4$ into four arcs of $\pi/2$; and $SO(2)$ treats the full $2\pi$ as a single cycle.
Under (A3), each coherence cycle has duration at least $1/(M\Omega)$. Hence the number of ruptures in any finite interval $[0,T]$ is at most $\lfloor TM\Omega \rfloor + 1 < \infty$.
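The counting bound is simple arithmetic; a one-line check (the values of $T$ and $M$ below are arbitrary examples):

```python
import math

def max_ruptures(T, M, omega):
    """Upper bound on the rupture count in [0, T] under (A3): each
    coherence cycle lasts at least 1/(M*omega), so there are at most
    floor(T*M*omega) + 1 ruptures."""
    return math.floor(T * M * omega) + 1

# Example: bounded rate M = 2, Z2 geometry (omega = 1/pi), horizon T = 10.
print(max_ruptures(10, 2, 1 / math.pi))  # floor(20/pi) + 1 = 7
```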
The weight ratio between a rupture moment (when $C = C^* = 1/\Omega$) and a moment of zero coherence is:

$$\frac{w(C^*)}{w(0)} = \exp\!\left(\frac{C^*}{\Omega}\right) = \exp\!\left(\frac{1}{\Omega^2}\right)$$
For $\mathbb{Z}_2$ systems ($\Omega = 1/\pi$): contrast ratio $= e^{\pi^2} \approx 19{,}400$.
For $SO(2)$ systems ($\Omega = 1/2\pi$): contrast ratio $= e^{4\pi^2} \approx 1.4 \times 10^{17}$.
This extreme selectivity means regeneration is overwhelmingly dominated by moments near rupture—peak-coherence moments carry nearly all the weight, regardless of when they occurred.
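The two quoted contrast ratios follow directly from $\exp(1/\Omega^2)$ and can be reproduced in a few lines:

```python
import math

# Weight contrast between a rupture moment (C = 1/Ω) and zero coherence:
# exp(C*/Ω) / exp(0) = exp(1/Ω²).
for label, omega in [("Z2", 1 / math.pi), ("SO(2)", 1 / (2 * math.pi))]:
    print(label, f"{math.exp(1 / omega**2):.3g}")
```

The $\mathbb{Z}_2$ value is $e^{\pi^2}$ (on the order of $2 \times 10^4$) and the $SO(2)$ value is $e^{4\pi^2}$ (on the order of $10^{17}$), matching the figures above.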
| Quantity | Symbol | Dimensions | Identification |
|---|---|---|---|
| Accumulation rate | $\mathcal{L}$ | $[T^{-1}]$ | Fisher information rate |
| Coherence | $C$ | dimensionless | Accumulated Fisher information |
| Variance parameter | $\Omega$ | dimensionless | $\sigma^2 = 1/\varphi$ ($= 1/\pi$ for $\mathbb{Z}_2$) |
| Rupture threshold | $C^* = 1/\Omega$ | dimensionless | Cramér-Rao saturation point |
| Historical field | $\varphi$ | $[F]$ | Reconstruction resource |
| Regeneration | $R$ | $[F] \cdot [T]$ | Coherence-weighted integral of history |
When a system regenerates after rupture, it doesn't treat all of history equally. Important moments (high coherence) contribute more; forgettable moments (low coherence) fade away.
Crucially, this weighting is based on how coherent each moment was in its own context—not on how recent it was. A significant experience from years ago can shape you as much as one from yesterday, if both reached high coherence.
The unnormalised weight at historical moment $\tau$ is simply the Boltzmann factor:

$$w(\tau) = \exp\!\left(\frac{C(x,\tau)}{\Omega}\right)$$
Here $C(x,\tau)$ is the coherence at moment $\tau$—how far through its cycle the system was at that time. Since $C$ cycles between $0$ and $C^* = 1/\Omega$ within each cycle, the weight $w(\tau)$ ranges from $\exp(0) = 1$ (at the start of each cycle) to $\exp(1/\Omega^2)$ (at each rupture moment).
$\Omega$ plays the role of "temperature" in the Boltzmann weighting $\exp(C/\Omega)$: large $\Omega$ (high temperature) flattens the weighting, making history broadly accessible; small $\Omega$ (low temperature) sharpens it, so only near-rupture moments contribute.
But unlike a recency-biased model, all high-coherence moments contribute with the same weight, whether ancient or recent. Memory is democratic across time, selective across significance.
Two systems with identical current coherence but different histories will in general have different regenerations. This follows directly from the regeneration integral $R = \int \varphi \cdot \exp(C/\Omega) \cdot \Theta\, d\tau$ depending on the full history $\{C(x,\tau)\}_{\tau \in [0,t]}$, not just the current value $C(x,t)$.
History matters, not just its summary. Two people at the same point in life but with different histories will respond differently to the same challenge. Significant experiences persist in their influence regardless of how long ago they occurred—muscle memory doesn't fade just because it's old. This is non-Markovian dynamics: the future depends on the entire integrated past, not merely the present state.
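The path-dependence claim is easy to demonstrate: give two systems the same current coherence but different histories and compare their regenerations. The history values and field samples below are illustrative:

```python
import math

def regeneration(phi_values, coherences, omega, dt=1.0):
    """Discretized R = Σ phi(τ) · exp(C(τ)/Ω) · dt over the accessible past."""
    return sum(p * math.exp(c / omega) * dt
               for p, c in zip(phi_values, coherences))

omega = 1 / math.pi
phi = [1.0, 1.0, 1.0]
# Both histories end at the same current coherence (0.5)...
history_a = [0.1, 3.0, 0.5]   # ...but A passed through a high-coherence peak,
history_b = [0.1, 0.2, 0.5]   # ...while B never did.

r_a = regeneration(phi, history_a, omega)
r_b = regeneration(phi, history_b, omega)
print(r_a > r_b)  # True: same present, different pasts, different regenerations
```

A Markovian model conditioned only on the current coherence would assign both systems the same future; the regeneration integral does not.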
| Property | CRR (Coherence-Weighted) | Recency-Weighted |
|---|---|---|
| What determines weight? | Coherence at each moment: $\exp(C(\tau)/\Omega)$ | Time since event: $e^{-\lambda(t-\tau)}$ |
| Ancient high-coherence moments | Fully preserved (weight $= \exp(1/\Omega^2)$) | Exponentially forgotten |
| Recent low-coherence moments | Low weight (near 1) | High weight (recent) |
| Philosophical alignment | Bergson: memory as continuous presence | Standard decay models |
| Empirical match | Muscle memory, trauma, skill retention | Short-term forgetting curves |
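The contrast in the table can be made concrete by scoring the same two moments under both schemes. The decay rate $\lambda$ and the timestamps are illustrative choices for the recency-weighted comparison, not parameters of CRR:

```python
import math

omega = 1 / math.pi   # Z2 geometry
lam = 0.1             # illustrative decay rate for the recency model
t_now = 1000.0

def coherence_weight(c):
    """CRR weighting: depends only on coherence at the moment."""
    return math.exp(c / omega)

def recency_weight(tau):
    """Standard decay model: depends only on elapsed time."""
    return math.exp(-lam * (t_now - tau))

# An ancient high-coherence moment vs a recent low-coherence one:
ancient_peak = (10.0, math.pi)   # (time, coherence) — at the rupture threshold
recent_dull = (999.0, 0.1)

print(coherence_weight(ancient_peak[1]) > coherence_weight(recent_dull[1]))  # True
print(recency_weight(ancient_peak[0]) < recency_weight(recent_dull[0]))      # True
```

The two schemes rank the same pair of moments in opposite orders, which is exactly the empirical distinction the table draws.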
$\Omega$ appears in both the rupture condition ($C \cdot \Omega = 1$ triggers transformation) and the regeneration weighting ($\exp(C/\Omega)$ determines memory access). This unity connects two fundamental questions:
Large $\Omega$ (flexible, permeable boundary): Low rupture threshold ($C^* = 1/\Omega$ is small). Frequent micro-ruptures—the system reorganises readily. Moderate contrast in memory weighting—history is broadly accessible.
Small $\Omega$ (rigid, precise boundary): High rupture threshold ($C^* = 1/\Omega$ is large). Rare but significant ruptures—the system accumulates extensively before transforming. Extreme contrast in memory weighting—only peak-coherence moments survive.
The brain constantly balances two opposing forces: excitation (E) and inhibition (I). Tucker, Luu, and Friston (2025) show that consciousness emerges when these forces are perfectly balanced—at criticality.
CRR proposes that E and I map onto $\Omega$. When $C \cdot \Omega = 1$—accumulated evidence saturates the system's capacity—the inside matches the outside. This is the write window, where early LTP can occur and experiences become memories.
CRR: Coherence accumulates to $C^* = \pi \approx 3.14$ before rupture. Moderate cycle duration. Memory sharply peaked on high-coherence moments.
CV prediction: $1/(2\pi) \approx 0.159$. Systems: heartbeat, neural bistability, flame flicker.
| System | Dorsal (Papez) | Ventral (Yakovlev) |
|---|---|---|
| Control Mode | Excitatory feedforward | Inhibitory feedback |
| Sleep Consolidation | REM | NREM |
| CRR Mapping | Higher $\Omega$ (flexible) | Lower $\Omega$ (precise) |
At rupture: E and I are balanced, a standing wave resonance forms (~100–200 ms), LTP can occur, the pattern is inscribed. The rupture boundary $\delta(\text{now})$ is the moment of maximal information transfer between past (coherence) and future (regeneration). This is the neural mechanism for James's "perchings."
The FEP (Friston, 2010; 2019) proposes that living systems survive by minimising "free energy"—the mismatch between what they expect and experience. CRR provides the temporal dynamics that the FEP presupposes but leaves unspecified: when do beliefs update? How does accumulated history shape reconstitution?
The FEP's primary temporal apparatus is generalised coordinates of motion (Friston, 2008)—a vector of higher-order time derivatives that encodes local trajectory information. This is elegant for continuous dynamics within a regime, but it remains fundamentally local in time: each state depends only on its current generalised coordinates, preserving the Markov property. Biehl, Pollock & Kanai (2021) identified technical difficulties with this formulation.
The FEP's path integral formulation (Friston, 2019) scores the plausibility of entire trajectories, but still does not specify when a system must abandon one regime for another, nor how the transition draws on accumulated history. The FEP tells you that a system at nonequilibrium steady state will look as if it is performing inference. It does not tell you the timing of the inference, or the moment at which the current model is exhausted.
| FEP Provides | CRR Adds |
|---|---|
| Markov blanket: a spatial boundary (internal $\perp$ external $\mid$ blanket) | Dirac delta: a temporal boundary (future $\perp$ past $\mid$ present). The rupture moment $\delta(\text{now})$ serves the same conditional-independence role in time that the blanket serves in space. |
| Dynamics within a regime (VFE minimisation, predictive coding, active inference) | Transitions between regimes: $C \cdot \Omega = 1$ specifies exactly when inference is exhausted and the system must reorganise. This is the Cramér-Rao bound that underlies the FEP's own information geometry, now applied as a stopping condition. |
| Markovian dynamics: each state depends on the current state (or generalised coordinates) | Non-Markovian accumulation: $C(x,t) = \int\mathcal{L}(x,\tau)\,d\tau$ integrates the full history. The present depends not on the previous state but on the entire accumulated past. Regeneration via $\exp(C/\Omega)$ weights this history exponentially. |
The FEP's precision parameter (inverse variance, $\pi = 1/\Omega$) maps directly to CRR's $\Omega$. Where the FEP uses precision to weight prediction errors, CRR uses its reciprocal $\Omega$ to set the rupture threshold and memory depth. The frameworks share the same information geometry; CRR adds the temporal completion.
In Friston et al. (2025) "Active Inference and Artificial Reasoning," an "aha moment" occurs when evidence accumulates until confidence exceeds a threshold, triggering Bayesian Model Reduction (BMR). CRR's Rupture is the same phenomenon, given a precise temporal criterion: $C \cdot \Omega = 1$.
| CRR Concept | FEP Concept | Mapping |
|---|---|---|
| Coherence $C(x,t)$ | Accumulated evidence since last update | $C \;\leftrightarrow\; \log p(D_{\text{new}} \mid m)$ |
| Variance $\Omega = \sigma^2 = 1/\pi$ | Inverse precision | $\Omega \;\leftrightarrow\; 1/\pi$ |
| Rupture threshold $C^* = 1/\Omega$ | Model selection threshold | $C^* \;\leftrightarrow\; \pi$ (precision) |
| Rupture $\delta(t-t_*)$ | BMR / Occam's Razor | $\{C \cdot \Omega \geq 1\} \;\leftrightarrow\; \{\max_m p(m \mid D) > \theta\}$ |
| Regeneration $R[\varphi]$ | Posterior after model selection | Coherence-weighted history $\;\leftrightarrow\;$ Bayesian posterior |
Rupture is the "aha moment." Both frameworks describe the discrete transition from uncertainty to commitment when accumulated evidence warrants model selection. CRR adds precision: the transition occurs at $C \cdot \Omega = 1$, with parameter-free predictions about its timing variability ($\text{CV} = \Omega/2$).
CRR does not compete with the FEP's account of what beliefs update to (free energy minimisation), nor with the detailed neural process theories (predictive coding, active inference) that implement it. CRR addresses the temporal structure of these processes: when transitions occur, how history shapes reconstitution, and why the timing variability takes the specific values it does. The FEP provides the engine; CRR provides the clock.
CRR's central empirical claim is that the coefficient of variation (CV) of inter-event intervals is determined by symmetry class alone, with no free parameters:

$$\text{CV} = \frac{\Omega}{2} = \frac{1}{2\varphi}$$
where $\varphi$ is the phase (in radians) traversed during one coherence accumulation cycle.
| Symmetry | Phase to Rupture | $\Omega$ Value | $C^*$ Value | CV Prediction |
|---|---|---|---|---|
| $\mathbb{Z}_2$ (bistable/flip) | $\pi$ (half-cycle) | $1/\pi \approx 0.318$ | $\pi \approx 3.14$ | $1/(2\pi) \approx 0.159$ |
| $SO(2)$ (rotational) | $2\pi$ (full-cycle) | $1/(2\pi) \approx 0.159$ | $2\pi \approx 6.28$ | $1/(4\pi) \approx 0.080$ |
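The table's CV column is a direct function of the phase column, which a short sketch makes explicit:

```python
import math

def cv_prediction(phase):
    """CV = Ω/2 with Ω = 1/φ, i.e. CV = 1/(2φ) for a cycle of phase φ radians."""
    omega = 1.0 / phase
    return omega / 2

print(round(cv_prediction(math.pi), 3))      # Z2 half-cycle: 0.159
print(round(cv_prediction(2 * math.pi), 3))  # SO(2) full cycle: 0.08
```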
These predictions have been tested across 100+ systems in 30+ domains, including:
| Domain | Example Systems | Class | Status |
|---|---|---|---|
| Neural oscillations | EEG alpha, theta, gamma; sleep spindles | $SO(2)$ | Validated (N=109, two independent datasets) |
| Cardiac rhythms | Heart rate variability, R-R intervals | $\mathbb{Z}_2$ | Confirmed |
| Flame dynamics | Candle flicker, plasma oscillations | $\mathbb{Z}_2$ | Confirmed |
| Bacterial division | E. coli inter-division intervals | $\mathbb{Z}_2$ | Confirmed |
| Stellar pulsation | Cepheid variables, RR Lyrae | $SO(2)$ | Confirmed |
| Calcium signalling | Intracellular Ca²⁺ oscillations | $SO(2)$ | Confirmed |
| Reaction times | Human simple RT, choice RT | $\mathbb{Z}_2$ | Confirmed |
| Population ecology | Predator-prey cycles, bloom intervals | $SO(2)$ | Confirmed |
| Laser dynamics | Mode-locked laser pulse trains | $SO(2)$ | Confirmed |
| Gastric waves | Slow-wave rhythm | $SO(2)$ | Confirmed |
| Saltatory growth | Infant growth spurts (Lampl & Johnson) | $\mathbb{Z}_2$ | Confirmed (11/11 individual predictions) |
| Geophysics | Geyser eruptions, seismic cycles | $\mathbb{Z}_2$/$SO(2)$ | Confirmed |
The full validation table with 100+ systems, observed CVs, predictions, and references is available at CRR Benchmarks.
Tested across PhysioNet EEGBCI and MPI-LEMON datasets (N = 109 total):
Systems fall into three empirical classes based on their relationship to the CRR predictions:
| Class | Description | CV Relative to $\Omega/2$ | Match Rate |
|---|---|---|---|
| Class A | Autonomous stochastic (matches CRR) | $\text{CV} \approx \Omega/2$ | 89% |
| Class B | Deterministic/regulated (precision oscillator) | $\text{CV} < \Omega/2$ (suppressed) | 85% |
| Class C | Noise-dominated/volitional | $\text{CV} > \Omega/2$ (inflated) | 85% |
Overall classification accuracy: 86%, approximately 10.6$\sigma$ significance, with zero directional reversals. Systems that deviate from the prediction tell you something specific about their regulatory architecture.
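The three-class scheme amounts to a simple decision rule around the predicted CV. This is a toy sketch; the tolerance band below is an illustrative assumption, not the criterion used in the CRR benchmarks:

```python
import math

def classify_cv(observed_cv, omega, tol=0.2):
    """Toy three-class diagnostic: Class A if the observed CV is within
    ±tol (relative) of the prediction Ω/2, Class B if suppressed below
    that band, Class C if inflated above it. The tolerance is illustrative."""
    predicted = omega / 2
    if abs(observed_cv - predicted) <= tol * predicted:
        return "A"  # autonomous stochastic — matches CRR
    return "B" if observed_cv < predicted else "C"

omega_z2 = 1 / math.pi
print(classify_cv(0.16, omega_z2))   # A: close to 1/(2π) ≈ 0.159
print(classify_cv(0.05, omega_z2))   # B: suppressed (active regulation)
print(classify_cv(0.40, omega_z2))   # C: inflated (asymmetric bistability)
```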
The framework makes specific, falsifiable commitments:
CRR follows a pre-registration discipline: predictions are formally registered before touching data. Deviations are diagnosed rather than hidden. The 132-system CV predictions table, three-class framework, and all EEG results were pre-registered. Honest null testing is a core commitment—e.g. the lemniscate hypothesis in atomic CV analysis was falsified and reported as such, not rescued.