Stop letting the ANE sit idle. Compile any model, target the ANE directly, and get up to 10x efficiency.
The Apple Neural Engine ships in every modern Apple device, yet most developers never use it. Why?
momo-kiji brings the ANE into the open: compile any model and run it on the hardware it was meant for.
Bypass Core ML. Compile directly to the ANE.
Specialized hardware acceleration on every modern Apple device.
Target both platforms with a single toolchain.
Accepts ONNX, PyTorch, and TensorFlow models as input.
Automatic INT8 and FP16 quantization.
Simple, intuitive Python interface.
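To make the "automatic INT8 quantization" bullet concrete: INT8 quantization maps FP32 weights onto 8-bit integers via a scale factor, trading a small, bounded rounding error for much cheaper arithmetic. Below is a minimal NumPy sketch of symmetric per-tensor quantization; it illustrates the general technique only, not momo-kiji's actual implementation.

```python
import numpy as np

def quantize_int8(w):
    # Symmetric per-tensor quantization: the scale maps the
    # largest absolute weight onto the INT8 range [-127, 127].
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate FP32 weights from the INT8 codes.
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(dequantize(q, scale) - w).max()
# Round-to-nearest bounds the error by half a quantization step.
assert err <= scale / 2 + 1e-6
```

FP16 works the same way in spirit but keeps a floating-point format, so it usually needs no calibration; INT8 gives the bigger win on ANE-class hardware at the cost of this rounding step.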
```shell
# Install
pip install momo-kiji

# Compile a model
momo-kiji compile model.onnx \
    --target ane \
    --output model_ane.mlmodel
```

```python
# Use in your app
import momo_kiji as mk

model = mk.load("model_ane.mlmodel")
output = model.predict(input_data)
```

momo-kiji is built on peer-reviewed research in neural engine optimization: its compiler architecture and optimization strategies are informed by current academic work.
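The snippet above passes an `input_data` tensor to `model.predict`. What that tensor looks like depends on the model you compiled; as one illustration, here is how an image-classifier input might be prepared with NumPy. The 224x224 size, NCHW layout, and [-1, 1] normalization are assumptions for this sketch, not momo-kiji requirements.

```python
import numpy as np

def preprocess(rgb_u8):
    # Hypothetical preprocessing for a 224x224 image classifier.
    x = rgb_u8.astype(np.float32) / 255.0         # scale to [0, 1]
    x = (x - 0.5) / 0.5                           # normalize to [-1, 1]
    return x.transpose(2, 0, 1)[np.newaxis, ...]  # HWC -> NCHW, add batch

frame = np.zeros((224, 224, 3), dtype=np.uint8)   # stand-in camera frame
input_data = preprocess(frame)
assert input_data.shape == (1, 3, 224, 224)
```

Whatever the model expects, the key point is the same: match the shape, dtype, and normalization the model was trained with before calling `predict`.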
Read the latest research revealing the ANE's architecture, its optimization opportunities, and the compiler design principles that power momo-kiji.
Start with the documentation or jump into the GitHub repository.