
Formal Verification × AI Safety

In the most unpredictable systems,
stable attractors exist.

We find them. We prove them.
Chaos Butterfly builds the mathematical verification layer
for AI systems that cannot afford to be wrong.


Order hidden
within chaos

Most think of the butterfly as the storm it causes. We see the attractor it traces — a stable structure inside a chaotic system.

The Lorenz attractor is shaped like a butterfly's wings. Within an unpredictable nonlinear system, it describes the fixed structure toward which trajectories converge — the invariant beneath the turbulence.
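For reference, the system Lorenz studied is three coupled ordinary differential equations; at the classical parameter values, trajectories trace out the butterfly-shaped attractor:

    dx/dt = σ(y − x)
    dy/dt = x(ρ − z) − y
    dz/dt = xy − βz

with σ = 10, ρ = 28, β = 8/3.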

This is what we do. Not eliminate chaos. Find and prove the stable structures within it.

And we believe every person who works here is the butterfly — the one whose small, precise action reshapes the world.

What we build

01

Formal Verification

We translate AI system behaviors into mathematical propositions and prove them using Lean 4 — the same language used to verify results in algebraic topology and homotopy theory.
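As a toy illustration of what it means to state a behavior as a proposition in Lean 4 (the names below are invented for this example, not a real system specification): a clamped output can be proved to never exceed its limit, for every possible input rather than a sampled few.

    -- Illustrative only: the bound holds for all inputs,
    -- established by proof rather than by testing.
    theorem clamped_output_safe (x limit : Nat) : min x limit ≤ limit :=
      Nat.min_le_right x limit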

02

Safety Certification

For high-stakes AI deployments — autonomous vehicles, smart city infrastructure, medical systems — we provide proof certificates that regulators and governments can trust.

03

Proof Infrastructure

We are building the verification layer that sits between an AI system and its deployment authorization. Not testing. Not auditing. Mathematical proof.

When AI systems fail,
the cost is measured in lives

The gap between "tested" and "proven"

EU AI Act

Requires verifiable safety for high-risk AI — no formal proof standard yet exists

0

Companies offering proof-based AI certification at scale

Testing tells you a system worked under observed conditions. Proof tells you a system cannot fail under specified conditions. These are not the same thing.

Governments deploying AI in critical infrastructure cannot accept "it passed our tests." They need a mathematical guarantee. We provide that guarantee.

theorem safety_invariant (sys : AISystem) :
    ∀ state ∈ sys.reachable, sys.safe state := by
  intro state h_reachable
  apply Lyapunov.stable_attractor
  exact sys.verified_proof

Built by someone who
speaks both languages

Most people who talk about AI safety are either engineers without deep mathematical foundations, or mathematicians without proximity to deployed AI systems.

Hongwei Wang (王鸿玮) is neither. His research bridges algebraic topology and formal verification in Lean 4 — one of the most demanding technical combinations in mathematics today.

In early 2026, he independently organized the Lean 4 Formalization Seminar at XJTLU — teaching undergraduates to formally prove results in abstract algebra and category theory using machine-verifiable proof.

The insight behind Chaos Butterfly: the tools to formally verify mathematical theorems already exist. What does not yet exist is the infrastructure to apply them to AI systems at the scale that regulators and governments require.

Academic Homepage ↗

Let's find the
attractor together

We are in early conversations with government partners and AI companies seeking formal verification capabilities. If you are building systems where failure is not an option, we want to hear from you.

hello@chaosbutterfly.ai