We're a team that set out to build local-first consumer AI apps, but after six months and thousands of users, we realized the hardware and software aren't there yet. Running near-realtime workloads on consumer CPUs and GPUs is often too slow and drains battery on most devices.
While some solutions exist for running local AI models on AI accelerators, most are closed source or only partially open, which we found frustrating. Rather than wait for others to solve this problem, we decided to tackle it ourselves and share our models and SDKs with everyone.
Join our Discord or check out our models on Hugging Face: