Benchmark Results

Single-thread tests on Apple Silicon (M-series) show Ordo's performance far exceeding typical industry figures.

Execution Latency Comparison (lower is better)

Traditional Engine (typical): ~1,000 µs
Ordo Rule Engine: 1.63 µs

A ~600x speedup: decision latency becomes imperceptible in real-time business flows.
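Figures like these depend heavily on measurement methodology. A minimal single-thread timing harness in Rust (illustrative only; `eval_rule` is a hypothetical stand-in for an engine call, not Ordo's API) might look like:

```rust
use std::time::Instant;

// Stand-in for a compiled rule: is the order total above a threshold
// for a qualifying customer tier? (Hypothetical rule, not Ordo's API.)
fn eval_rule(total: f64, tier: u8) -> bool {
    total > 100.0 && tier >= 2
}

fn main() {
    const ITERS: u32 = 1_000_000;
    let start = Instant::now();
    let mut hits = 0u32;
    for i in 0..ITERS {
        // Vary the inputs so the optimizer cannot hoist the call.
        if eval_rule(f64::from(i % 200), (i % 4) as u8) {
            hits += 1;
        }
    }
    let per_call_ns = start.elapsed().as_nanos() / u128::from(ITERS);
    println!("~{per_call_ns} ns per evaluation ({hits} matches)");
}
```

A production benchmark would use a harness such as criterion with `black_box` to defeat compiler optimizations more reliably.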


Schema-Aware JIT — Available Now

Compiles numeric rules to native machine code via Cranelift. 20–30x faster than the bytecode VM in compute-heavy scenarios.

Evaluation Latency: ~26 ns
Peak Throughput: 76M ops/s
Speedup vs Bytecode VM: up to 30x
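The JIT tier's win comes from eliminating per-node interpretive dispatch. A toy illustration of that idea in plain Rust, where "compiling" a rule into nested closures stands in for Cranelift-generated machine code (the `Expr`, `interpret`, and `compile` names are hypothetical, not Ordo's API):

```rust
// A toy expression AST for a numeric rule.
enum Expr {
    Field(usize), // index into the input row
    Const(f64),
    Add(Box<Expr>, Box<Expr>),
    Gt(Box<Expr>, Box<Expr>),
}

// Tree-walking interpreter: dispatches on the node type at every step.
fn interpret(e: &Expr, row: &[f64]) -> f64 {
    match e {
        Expr::Field(i) => row[*i],
        Expr::Const(c) => *c,
        Expr::Add(a, b) => interpret(a, row) + interpret(b, row),
        Expr::Gt(a, b) => (interpret(a, row) > interpret(b, row)) as u8 as f64,
    }
}

// "Compile" once into nested closures; evaluation no longer walks the tree.
fn compile(e: &Expr) -> Box<dyn Fn(&[f64]) -> f64> {
    match e {
        Expr::Field(i) => { let i = *i; Box::new(move |row| row[i]) }
        Expr::Const(c) => { let c = *c; Box::new(move |_| c) }
        Expr::Add(a, b) => {
            let (a, b) = (compile(a), compile(b));
            Box::new(move |row| a(row) + b(row))
        }
        Expr::Gt(a, b) => {
            let (a, b) = (compile(a), compile(b));
            Box::new(move |row| (a(row) > b(row)) as u8 as f64)
        }
    }
}

fn main() {
    // Rule: (row[0] + row[1]) > 100
    let rule = Expr::Gt(
        Box::new(Expr::Add(Box::new(Expr::Field(0)), Box::new(Expr::Field(1)))),
        Box::new(Expr::Const(100.0)),
    );
    let compiled = compile(&rule);
    assert_eq!(interpret(&rule, &[60.0, 50.0]), compiled(&[60.0, 50.0]));
    println!("both paths agree: {}", compiled(&[60.0, 50.0]));
}
```

A real JIT goes one step further than the closure version: Cranelift emits actual machine code, removing even the indirect calls between nodes.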

Multi-tier Execution Architecture

From development debugging to extreme performance needs, Ordo covers all scenarios.

Level 1: AST Tree-walk (~1.5 µs)
Direct interpretation; perfect for development and one-off rules.

Level 2: Bytecode VM (~830 ns)
High-efficiency bytecode; the production default, balancing flexibility and speed.

Level 3: Vectorized Execution (batch-optimized)
Columnar batch processing; massive throughput for large-scale data validation.

Level 4: Schema-Aware JIT (~26 ns)
Compiles to native machine code, compressing latency into nanoseconds.
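Level 3's columnar approach amortizes rule dispatch across a whole batch: the engine makes one pass per predicate over a contiguous column instead of re-interpreting the rule per row. A hand-rolled sketch of the idea (hypothetical; `filter_batch` is not Ordo's API):

```rust
// Evaluate `amount > threshold && region == target` over columnar input.
// Tight loops over contiguous columns stay branch-predictable and
// auto-vectorizable, instead of re-dispatching the rule for every row.
fn filter_batch(amounts: &[f64], regions: &[u32], threshold: f64, target: u32) -> Vec<bool> {
    amounts
        .iter()
        .zip(regions)
        .map(|(&a, &r)| a > threshold && r == target)
        .collect()
}

fn main() {
    let amounts = [120.0, 80.0, 300.0, 150.0];
    let regions = [1u32, 1, 2, 1];
    let mask = filter_batch(&amounts, &regions, 100.0, 1);
    assert_eq!(mask, vec![true, false, false, true]);
    println!("{mask:?}");
}
```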

Expression Eval Time: 79–211 ns
HTTP API Single-thread Throughput: 54,000+ QPS
HTTP API P99 Latency: 3.9 ms
Hot-path Heap Allocations: 0
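"0 Alloc" refers to reusing pre-sized buffers on the request path rather than heap-allocating per evaluation. A minimal sketch of that pattern (illustrative, not Ordo's internals):

```rust
// Scratch space allocated once at startup and reused for every request.
struct EvalScratch {
    stack: Vec<f64>,
}

impl EvalScratch {
    fn with_capacity(cap: usize) -> Self {
        Self { stack: Vec::with_capacity(cap) }
    }

    // Hot path: `clear` keeps the existing capacity, so pushing values
    // within that capacity performs no heap allocation.
    fn eval_sum_gt(&mut self, inputs: &[f64], threshold: f64) -> bool {
        self.stack.clear();
        self.stack.extend_from_slice(inputs);
        let sum: f64 = self.stack.iter().sum();
        sum > threshold
    }
}

fn main() {
    let mut scratch = EvalScratch::with_capacity(64);
    let cap_before = scratch.stack.capacity();
    for _ in 0..10_000 {
        assert!(scratch.eval_sum_gt(&[40.0, 70.0], 100.0));
    }
    // Capacity unchanged: the loop never reallocated.
    assert_eq!(scratch.stack.capacity(), cap_before);
}
```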

Ready to boost your business decisions?

Join us and experience how Ordo pairs high performance with visual rule editing to transform development efficiency. Ordo is open source on GitHub.