Vidai: Teaching Machines Arithmetic Without Teaching Arithmetic
A neural system that learns to parse mathematical notation rather than compute it, achieving 90%+ accuracy by separating structure understanding from calculation.
When you train a neural network, you're running a dynamical system that carves out a representation space. ToDACoMM measures the topology of what gets carved, revealing a striking divide between encoders and decoders, and now extends to MLPs and large-scale transformer analysis.
A comprehensive look back at my year in AI - scaling up at work, building the Praval agentic AI framework, Vajra Search, Tlon mathematics, ToDACoMM, exploring mechanistic interpretability, honest research failures, and reflections on curiosity vs. accomplishment.
Applying perturbation analysis and Lyapunov exponents to neural network training. Dense networks converge; transformers diverge. Architecture determines stability.
After months of development, Vajra BM25 achieves ~1.2-1.3x lower latency than BM25S while maintaining competitive accuracy. I share what I learned building it, along with benchmark results across BEIR and Wikipedia datasets.
Building a mathematical framework where processes are primitive and objects emerge as stable patterns - with 20 axioms, rigorous proofs, and simulations of dynamical systems that demonstrate the core insight: stability is special, not generic.
Building a privacy-focused research assistant with multi-agent architecture: architectural decisions, local-first design, and lessons learned from production deployment.
Why traditional BI breaks on factory floors, and how multi-agent architecture enables conversational analytics: architectural decisions, event-driven coordination, and production lessons.
Over the last several months, I have been building Praval, a Pythonic agentic AI framework for multi-agent system development, inspired by coral ecosystems.
Peripatetic wandering is not a phrase I would normally associate with work, and yet, as of the end of 2025, I do. The truth is that the work of the past isn't the work of today.
Many interesting narratives populate the philosophical discourse around artificial intelligence, not the least of which is its potential to replace humans in different endeavors.
I love it when side projects go off the rails in a good way. KayGeeGo started as an experiment in knowledge graph construction, and ended up as a full-featured, LLM-driven graph builder and explorer.
The current frontier of AI is dominated by powerful deep learning systems, but revisiting foundational mathematics reveals insights we may have overlooked in our rush to build.
2024 wasn't just another year in AI—it was a year that redefined the rules. From small models to agentic workflows, here's what shaped the landscape.
Introducing my space for exploring ideas across AI, aviation, geopolitics, philosophy, and culture.
Over the last weekend I built a service called 'Sangraha' (Sanskrit for 'collection'), which supports high-throughput, relatively low-latency data operations.