I develop rigorous mathematical frameworks for understanding, predicting, and preventing AI system failures, and then I build the systems those frameworks describe. This work bridges differential geometry, category theory, and machine learning to provide explanatory, rather than merely empirical, accounts of how transformer architectures reason and how they fail.
Every framework I build includes compositional error budgets, auditable failure modes, and explicit validity assumptions. I ship production systems validated against real-world data, from 2.3 TB/day telemetry pipelines at NASA to adversarial AI platforms at IBM X-Force Red.
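To make "compositional error budget" concrete, here is a minimal sketch of the idea in Python. Everything in it (the `Stage` schema, the `compose_budget` helper, the stage names and numeric bounds) is hypothetical and illustrative, not code from any of the systems described above; it assumes the simplest composition rule, worst-case additive accumulation.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Stage:
    """One pipeline stage with an auditable error bound (hypothetical schema)."""
    name: str
    error_bound: float          # worst-case error this stage contributes
    assumptions: tuple          # explicit validity assumptions for the bound


def compose_budget(stages: list) -> float:
    """Compose per-stage bounds into an end-to-end budget.

    Assumes errors accumulate at worst additively (triangle inequality);
    amplifying compositions would need stage-specific blow-up factors.
    """
    return sum(stage.error_bound for stage in stages)


# Illustrative pipeline: names and numbers are made up for the example.
pipeline = [
    Stage("embedding", 1e-3, ("inputs within training distribution",)),
    Stage("projection", 5e-4, ("curvature bounded on the state space",)),
    Stage("decoder", 2e-3, ("output vocabulary fully covered",)),
]
print(f"end-to-end error budget: {compose_budget(pipeline):.1e}")
```

The point of the structure is that each bound travels with the assumptions under which it holds, so a violated assumption identifies exactly which stage's budget is no longer trustworthy.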
The unifying thread is the Davis Manifold: a Riemannian state space with bounded geodesic-to-Euclidean distortion that serves as a reusable blueprint for safety-critical ML. From semantic coherence in transformers to constraint satisfaction on GPUs, from viral surveillance to cancer detection, geometry is the common language.
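One standard way to make "bounded geodesic-to-Euclidean distortion" precise, offered here as an assumed reading rather than a definition taken from this page, is a bi-Lipschitz condition relating the geodesic metric on the state space to the ambient Euclidean metric:

```latex
% Assumed formalization: bounded distortion as a bi-Lipschitz bound.
% There exists a constant L >= 1 such that for all states x, y on the
% manifold M, geodesic distance d_g and Euclidean distance agree up to L:
\[
  \lVert x - y \rVert_2
  \;\le\;
  d_g(x, y)
  \;\le\;
  L \, \lVert x - y \rVert_2 .
\]
% The lower bound is automatic for an embedded submanifold (a chord is
% never longer than the geodesic arc); the upper bound is the substantive
% distortion assumption, and L is the constant an error budget would carry.
```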
Mathematics (self-taught): differential geometry, spectral theory, category theory, gauge theory
Languages: Python, Rust, C++, SQL, Bash
Numerical & ML: PyTorch (rebuilt from scratch as DavisTensors), NumPy, SciPy, JAX
Infrastructure: Kubernetes, gRPC, AWS, Azure, CUDA
25 U.S. provisional patent applications filed, spanning geometric computation, security, AI, and fundamental physics.