Developer midudev has released CanIRun.ai, a free browser-based tool that analyzes computer hardware to determine which AI models can run locally. The tool went viral on Hacker News on March 13, 2026, gathering 899 points and 235 comments, and addresses a common frustration for developers: evaluating whether local AI deployment is feasible before downloading multi-gigabyte models.
How the Hardware Detection Works
CanIRun.ai performs all analysis client-side for privacy, sending no data to external servers. The tool creates a hidden WebGL canvas and queries the WEBGL_debug_renderer_info extension to read the unmasked GPU name and vendor. It maintains a database of approximately 40 GPUs from NVIDIA, AMD, and Intel, plus 12 Apple Silicon chips, with each entry recording VRAM capacity and memory bandwidth, the two factors that most determine local AI model performance.
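A minimal sketch of this style of detection is shown below. This is not CanIRun.ai's actual source: the database entries, function names, and matching heuristic are illustrative assumptions, but the WebGL extension query itself is the standard browser API the article describes.

```typescript
// Illustrative sketch of client-side GPU detection via WebGL.
// The GPU table and matching logic are assumptions, not the tool's code.

interface GpuSpec {
  vramGB: number;       // on-board (or unified) memory capacity
  bandwidthGBs: number; // memory bandwidth in GB/s
}

// Tiny stand-in for the tool's ~40-entry GPU database.
const GPU_DB: Record<string, GpuSpec> = {
  "NVIDIA GeForce RTX 4090": { vramGB: 24, bandwidthGBs: 1008 },
  "Apple M2 Max": { vramGB: 96, bandwidthGBs: 400 },
};

// Match a raw WebGL renderer string (often wrapped in "ANGLE (...)")
// against known database keys by substring.
function matchGpu(renderer: string): GpuSpec | null {
  for (const name of Object.keys(GPU_DB)) {
    if (renderer.includes(name)) return GPU_DB[name];
  }
  return null;
}

// Browser-only: read the unmasked renderer string from a hidden canvas.
function detectRenderer(): string | null {
  if (typeof document === "undefined") return null; // not in a browser
  const gl = document.createElement("canvas").getContext("webgl");
  if (!gl) return null;
  const ext = gl.getExtension("WEBGL_debug_renderer_info");
  return ext ? String(gl.getParameter(ext.UNMASKED_RENDERER_WEBGL)) : null;
}
```

Substring matching is the pragmatic choice here because the renderer string varies by driver and OS (for example, Windows reports through ANGLE with Direct3D suffixes), so exact equality against a database key would rarely hit.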
Compatibility Scoring System
The platform scores models on a 0-100 scale by evaluating three factors: inference speed (tokens per second), memory usage as a percentage of available VRAM, and model size. The VRAM estimate follows the formula VRAM (GB) = Parameters × Bits per weight ÷ 8 ÷ 1024³ + Overhead, i.e. the raw weight size in bytes converted to gigabytes, plus a fixed allowance for runtime memory such as the KV cache. For example, a 70 billion parameter model quantized at Q4_K_M requires approximately 35 GB of VRAM plus overhead.
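The formula translates directly into code. The sketch below uses a flat 4 bits per weight and a 2 GB overhead as rough assumptions (Q4_K_M's effective bits-per-weight and CanIRun.ai's actual overhead constant may differ):

```typescript
// VRAM estimate from the article's formula:
//   VRAM (GB) = parameters × bits per weight ÷ 8 ÷ 1024³ + overhead
// bitsPerWeight and overheadGB values used below are assumptions.

function estimateVramGB(
  params: number,
  bitsPerWeight: number,
  overheadGB: number = 0,
): number {
  const weightBytes = (params * bitsPerWeight) / 8; // total weight storage
  return weightBytes / 1024 ** 3 + overheadGB;      // bytes -> GB + overhead
}

// 70B parameters at ~4 bits per weight: about 32.6 GB of weights alone,
// landing near the article's ~35 GB figure once overhead is added.
const weightsOnly = estimateVramGB(70e9, 4);
const withOverhead = estimateVramGB(70e9, 4, 2);
```

Note that dividing by 1024³ technically yields gibibytes; like most model cards, the article labels the result GB, and the sketch keeps that convention.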
Extensive Model Database Coverage
CanIRun.ai covers major open-source AI architectures including Llama, Qwen, Mistral, Phi, Gemma, and DeepSeek variants, spanning from compact 1 billion parameter models to large 120+ billion parameter options. The platform tracks recent releases including Alibaba's Qwen 3.5 (9 billion parameters, February 2026) and various mixture-of-experts architectures. Top recommendations for local deployment in 2026 include Llama 3, Phi-3 Mini, DeepSeek Coder, Qwen 2, and Mistral NeMo.
Technical Implementation and User Experience
Built with the Astro framework, the tool implements view-transition animations and responsive component architecture. It includes schema.org markup defining it as a "DeveloperApplication" and offers flexible grid and list views with filtering capabilities. All hardware detection and compatibility analysis happens in the browser, ensuring user privacy while providing instant results.
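The schema.org annotation the article mentions typically takes a JSON-LD form along these lines. Only the "DeveloperApplication" type is stated in the article; every other field below is a plausible assumption for illustration:

```typescript
// Illustrative JSON-LD object for schema.org markup declaring a
// DeveloperApplication. Fields other than "@type" are assumptions.
const schemaMarkup = {
  "@context": "https://schema.org",
  "@type": "DeveloperApplication",
  name: "CanIRun.ai",
  operatingSystem: "Any (browser-based)",
  offers: { "@type": "Offer", price: "0" },
};

// In an Astro page this would be serialized into a
// <script type="application/ld+json"> tag in the document head.
const jsonLd = JSON.stringify(schemaMarkup);
```

Search engines read this payload to classify the page as a free developer application, which is likely why the tool includes it.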
Addressing a Real Developer Need
The tool's viral success reflects a genuine gap in the local AI ecosystem. Developers previously needed to manually research hardware requirements, often discovering incompatibility only after downloading models that could be dozens of gigabytes in size. CanIRun.ai provides immediate clarity on feasibility, helping developers make informed decisions about local AI deployment without wasting time or bandwidth.
Creator Miguel Ángel Durán emphasized the tool's privacy-first approach, noting that it analyzes hardware directly in the browser without requiring installation, registration, or data transmission to external servers.
Key Takeaways
- CanIRun.ai received 899 points and 235 comments on Hacker News within hours of launch on March 13, 2026, indicating strong developer interest
- The tool analyzes approximately 40 GPU models and 12 Apple Silicon chips entirely client-side, sending no data to external servers
- Compatibility scoring uses the formula VRAM (GB) = Parameters × Bits per weight ÷ 8 ÷ 1024³ + Overhead, with a 70B Q4_K_M model requiring roughly 35 GB
- The database covers models from 1B to 120+ billion parameters across architectures including Llama, Qwen, Mistral, Phi, Gemma, and DeepSeek
- Built with Astro framework, the tool provides instant hardware compatibility results without requiring downloads or installation