The Inevitable Transformation
Software is changing faster than it has in 25 years. Machine learning isn't just becoming important; it's becoming essential. Every application, every website, every digital interaction will soon be expected to offer intelligent features as standard.
For millions of PHP developers who power the web, this creates an existential challenge: stay relevant in an AI-first world or risk obsolescence.
**The Stakes:** This isn't about technology preferences; it's about livelihoods: mortgages, families, and careers built on PHP expertise.
The Current Reality
-------------------
- **PHP dominates the web:** roughly 78% of websites with a known server-side language run PHP
- ML is becoming essential: Every modern application needs intelligent features
- **The gap is growing:** PHP has no first-class machine learning support
- **The choices are terrible:** burden your stack with microservices, API calls, or other inefficient workarounds (e.g., FFI), or switch stacks entirely
### Performance Highlights
Benchmarked on 1000×1000 float32 matrices (≥1 GFLOPS workloads):
- **CPU Acceleration:** 8× speedup with AVX2
- **GPU Acceleration:** ?? speedup with CUDA
- **Hybrid Parallelism:** 100% CPU/GPU utilization
### Complementary, Not Competitive
This is about finding PHP a place in the AI-first software engineering world that is coming: AI needs to be a first-class citizen not only in the _minds of developers_, but also in the _languages they use_:
* **Python:** Training, research, model development
* **PHP:** Web inference, production serving, real-time processing
* **ONNX:** The bridge between training and inference
_This isn't about replacing Python for training!_
Technical Excellence
--------------------
This isn't a proof of concept; it's production-ready infrastructure built to exacting standards.
### Core Architecture
**PHP API Layer** - Clean, decoupled API: ORT\\Tensor, ORT\\Math (functional namespace), ORT\\Model, ORT\\Runtime

**Math Library**
- Frontend: dispatch, scheduling, and scalar fallbacks (always available)
- Backend: optimizations (CUDA, WASM, NEON, RISCV64, AVX512, AVX2, SSE4.1, SSE2)

**ONNX Integration**
- ORT\\Model: loading, metadata, and management
- ORT\\Runtime: inference

**ORT\\Tensor** - Immutable API; always available
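Putting these layers together, a minimal inference flow might look like the sketch below. Only the class names ORT\Model, ORT\Runtime, and ORT\Tensor come from this document; the constructor and method signatures (e.g., `infer()`) are illustrative assumptions, not the extension's confirmed API.

```php
<?php
// Hedged sketch of the layering described above.
// Method and constructor signatures are illustrative assumptions.

// ORT\Model: loading, metadata, and management
$model = new ORT\Model('model.onnx');

// ORT\Tensor: immutable input data
$input = new ORT\Tensor([[1.0, 2.0, 3.0]]);

// ORT\Runtime: inference
$runtime = new ORT\Runtime($model);
$output  = $runtime->infer($input);
```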
### Key Technical Innovations
* **[Immutable Tensors:](#)** Zero-copy, lock-free data sharing, efficient memory usage, and predictable performance
* **[Dual Storage Class Tensors:](#)** ORT\\Tensor\\Persistent tensors survive request cycles (and may be shared among threads, e.g., under FrankenPHP); ORT\\Tensor\\Transient tensors are request-local
* **[Memory Management:](#)** Aligned allocator and optimized memcpy for maximum performance potential
* **[Thread Pool:](#)** Dedicated Slot scheduling implementation with alignment awareness
* **[SIMD Optimization:](#)** Runtime detection with thread pinning ensures stability and predictability at scale
* **[GPU Optimization:](#)** Optional GPU acceleration ensures saturation of all available parallel hardware
* **[Type System:](#)** Schemas extracted from NumPy runtime; no guesswork, perfect compatibility
* **[Type Promotion:](#)** Automatic conversion between types for seamless integration and maximum predictability
* **[Zero Overhead Optimizations:](#)** The backend optimizes dispatch silently, opaque to the frontend
* **[Call Site Scaling:](#)** ORT\\Math\\scale provides a high degree of control over scaling at the call site
* **[Modular Design:](#)** Math and ONNX systems are independent - use either or both as needed
* **[Flexible Tensor Generation:](#)** ORT\\Tensor\\Generator provides flexible generation (lazy loading, random, etc)
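To make the immutability model concrete, here is a hedged sketch of how ORT\Tensor and the ORT\Math functional namespace might be used; the specific function name `ORT\Math\add` and its signature are assumptions for illustration.

```php
<?php
// Illustrative sketch; the ORT\Math function name is an assumption.
$a = new ORT\Tensor([1.0, 2.0, 3.0]);
$b = new ORT\Tensor([4.0, 5.0, 6.0]);

// Tensors are immutable: operations return new tensors rather than
// mutating inputs, which enables zero-copy, lock-free sharing.
$c = ORT\Math\add($a, $b);

// $a and $b are unchanged; $c is a new tensor.
```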
### Performance Benchmarks

All benchmarks are reproducible and available in the [bench](https://github.com/krakjoe/ort/tree/develop/dist/bench) directory.
### Getting Started
```shell
wget https://.../v1.22.0/onnxruntime-linux-x64-1.22.0.tgz -O onnxruntime-linux-x64-1.22.0.tgz
sudo tar -C /usr/local --strip-components=1 -xvzf onnxruntime-linux-x64-1.22.0.tgz
sudo ldconfig
sudo apt-get install pkg-config
phpize
./configure --enable-ort
make
sudo make install
```

Then enable the extension in `php.ini`:

```ini
extension=ort.so
```
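Once installed, the dual storage classes suggest a caching pattern for long-running runtimes. Only the class names ORT\Tensor\Persistent and ORT\Tensor\Transient come from this document; the constructor signatures and `ORT\Math\matmul` are illustrative assumptions.

```php
<?php
// Hedged sketch: signatures and ORT\Math\matmul are assumptions.

// Persistent tensors survive request cycles and may be shared
// among threads (e.g., under FrankenPHP), ideal for model weights.
function weights(): ORT\Tensor\Persistent {
    static $w = null;
    return $w ??= new ORT\Tensor\Persistent([[0.1, 0.2], [0.3, 0.4]]);
}

// Transient tensors are request-local, reclaimed at request end.
$input = new ORT\Tensor\Transient([[1.0, 2.0]]);

$result = ORT\Math\matmul($input, weights()); // 1x2 · 2x2 => 1x2
```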
### Real-World Impact
* **Developer Retention:** Millions of PHP developers stay relevant
* **Web Evolution:** Every PHP website becomes a potential ML application
* **Accessibility:** ML inference becomes as common as database queries
* **Innovation:** New applications emerge when barriers are removed
### Join the Evolution
This isn't about revolution; it's about evolution. PHP has always adapted to stay relevant: from simple scripts to enterprise applications, and now to intelligent applications.
**The Vision:** Every corner of the web becomes capable of intelligent behavior. PHP developers don't need to choose between staying with PHP and building the future; they can do both.
Machine learning inference isn't a luxury anymore; it's becoming essential infrastructure. This is PHP's answer to the AI revolution.