Terraphim
v1.16.0

Near-Native Speed, Everywhere

Terraphim AI is written in Rust and compiles to WebAssembly. This is not a marketing choice -- it is a performance requirement. Knowledge graph inference runs in 5 to 10 nanoseconds. Pipeline processing completes in hundreds of milliseconds. Search queries resolve in sub-millisecond time.

Why Rust

Rust provides memory safety without garbage collection, zero-cost abstractions, and thread safety guaranteed at compile time. For a system that builds and traverses knowledge graphs with thousands of nodes and edges, these guarantees translate directly into reliable performance without unexpected pauses or memory leaks.
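The ownership model makes this concrete. Below is a minimal sketch (not Terraphim's actual data model) of an index-based knowledge graph: nodes and edges live in flat `Vec`s, so traversal touches contiguous memory, the borrow checker rules out aliasing bugs at compile time, and no garbage collector can pause a query mid-flight.

```rust
// Illustrative only: a compact graph where edges are indices into a
// node vector, avoiding pointer chasing and heap fragmentation.
struct Node {
    label: &'static str,
    // Indices into `Graph::nodes` for outgoing edges.
    edges: Vec<usize>,
}

struct Graph {
    nodes: Vec<Node>,
}

impl Graph {
    fn new() -> Self {
        Graph { nodes: Vec::new() }
    }

    fn add_node(&mut self, label: &'static str) -> usize {
        self.nodes.push(Node { label, edges: Vec::new() });
        self.nodes.len() - 1
    }

    fn add_edge(&mut self, from: usize, to: usize) {
        self.nodes[from].edges.push(to);
    }

    // Collect labels reachable in one hop. Shared borrows here are
    // checked at compile time and cost nothing at runtime.
    fn neighbors(&self, id: usize) -> Vec<&'static str> {
        self.nodes[id]
            .edges
            .iter()
            .map(|&e| self.nodes[e].label)
            .collect()
    }
}

fn main() {
    let mut g = Graph::new();
    let rust = g.add_node("rust");
    let wasm = g.add_node("wasm");
    g.add_edge(rust, wasm);
    println!("{:?}", g.neighbors(rust)); // prints ["wasm"]
}
```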

Why WebAssembly

WebAssembly allows the same Rust codebase to run:

  • In your browser -- no installation required, instant access
  • On your desktop -- native performance as a standalone application
  • On your server -- same code, same results, different deployment target
  • On mobile -- lightweight enough for devices with limited resources

One codebase, compiled to multiple targets, with no performance compromise.
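In Rust this multi-target story is typically expressed with conditional compilation: the core logic is shared, and only thin platform glue differs per target. The sketch below is illustrative (the function names are not Terraphim's API); the same crate compiles for `wasm32` in the browser or natively on a desktop or server.

```rust
// Core logic, identical on every target: count occurrences of a
// query term in a document.
fn score(query: &str, document: &str) -> usize {
    document.matches(query).count()
}

// Platform glue selected at compile time; only one branch is built
// into any given binary.
#[cfg(target_arch = "wasm32")]
fn platform() -> &'static str {
    "browser (wasm32)"
}

#[cfg(not(target_arch = "wasm32"))]
fn platform() -> &'static str {
    "native (desktop/server)"
}

fn main() {
    println!("running on {}", platform());
    println!("score = {}", score("graph", "knowledge graph graph"));
}
```

Because the scoring logic never touches platform APIs, the browser, desktop, and server builds produce identical results by construction.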

Performance Numbers

The journey from The Pattern to Terraphim AI tells the performance story:

  Metric                        | Traditional ML Pipeline | The Pattern     | Terraphim AI
  ------------------------------|-------------------------|-----------------|-------------------------
  Data processing for training  | 6 days                  | 6 hours         | Hundreds of milliseconds
  Inference latency             | Seconds                 | Under 2 ms      | 5-10 nanoseconds
  RAM footprint                 | Gigabytes               | Hundreds of MB  | 15-20 MB
  GPU required                  | Yes                     | No              | No

Benchmarking

Terraphim includes a comprehensive benchmarking framework covering:

  • Knowledge graph construction time
  • Search indexing throughput
  • Query evaluation latency
  • Aho-Corasick pattern matching speed
  • End-to-end performance across all components

All benchmarks run on standard hardware -- no GPU, no specialised accelerator.
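For a sense of what measuring query latency looks like, here is a hedged micro-benchmark sketch using only the standard library (the real suite is more rigorous, and the index and function names here are illustrative, not Terraphim's): it times repeated term lookups against a small in-memory index.

```rust
use std::collections::HashMap;
use std::time::Instant;

// Build a toy term index mapping each term to its position.
fn build_index(terms: &[&str]) -> HashMap<String, usize> {
    terms
        .iter()
        .enumerate()
        .map(|(i, t)| (t.to_string(), i))
        .collect()
}

fn main() {
    let index = build_index(&["rust", "wasm", "graph", "search"]);
    let iters: u32 = 1_000_000;

    let start = Instant::now();
    let mut hits = 0usize;
    for _ in 0..iters {
        // The lookup under test.
        if index.contains_key("graph") {
            hits += 1;
        }
    }
    let elapsed = start.elapsed();

    println!(
        "{} lookups in {:?} (~{} ns/lookup)",
        hits,
        elapsed,
        elapsed.as_nanos() / iters as u128
    );
}
```

A production benchmark would additionally pin warm-up iterations, prevent the optimizer from eliding the loop, and report statistical spread rather than a single average.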

Low Footprint

A typical Terraphim knowledge graph occupies approximately 15-20 MB of RAM. There are no float vectors, no dense embeddings, no GPU memory allocation. This means Terraphim runs comfortably on a laptop, a Raspberry Pi, or a modest cloud instance.
