April 14, 2026

Introducing aricode v0.5.0

The biggest release yet for the behavior-first coding agent. Autonomous dreaming, a full Node SDK, smarter test analysis, and a hooks system that lets you wire aricode into anything.

Today we are shipping aricode v0.5.0 -- the most ambitious release since the project began. This is not an incremental update. It is a rethinking of what a local-first coding agent can do when it has real memory, real autonomy, and real integration points.

If you have not heard of aricode before: it is a behavior-first coding agent designed for local models. You run it on your machine with Ollama or any OpenAI-compatible endpoint. It reads your codebase, builds a persistent knowledge graph, and works alongside you in a terminal REPL. No cloud account required. No data leaves your network.

v0.5.0 takes that foundation and builds six major features on top of it.

Autonomous Dreaming

This is the headline feature. Start a dream session before you leave for the night, and aricode will explore your entire codebase autonomously -- mapping architecture, tracing patterns, spotting issues, and imagining where the code could go next.

Dreaming happens in three phases while you are away. When you come back in the morning, you have a journal, a futures tree, extracted conventions, and an updated architecture schema waiting for you. It is like having a senior engineer do a deep code review while you sleep.

The Node SDK

aricode is no longer just a CLI. With v0.5.0, you can embed it directly in any Node.js application using the new SDK.

import { createAricode } from 'aricode/sdk';

const ari = await createAricode({
  model: 'qwen2.5-coder:32b',
  cwd:   '/path/to/project',
});

for await (const event of ari.run('Refactor the auth module')) {
  console.log(event.type, event.data);
}

The SDK gives you everything the CLI has, in programmable form.

This opens the door to building custom dev tools, CI integrations, and IDE plugins on top of aricode. The SDK reference has full documentation.
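To show how the event stream composes with your own tooling, here is a minimal sketch of a helper that filters events by type. It assumes only the `{ type, data }` event shape visible in the snippet above; the helper itself (`collectEvents`) is hypothetical and works with any async iterable of such events:

```javascript
// Collect the data payloads of all events matching a given type from an
// async iterable event stream, such as the one returned by ari.run().
async function collectEvents(stream, type) {
  const matched = [];
  for await (const event of stream) {
    if (event.type === type) matched.push(event.data);
  }
  return matched;
}
```

For example, `await collectEvents(ari.run('Fix tests'), 'edit')` could gather just the edit payloads from a run. Note that the `'edit'` event name is an assumption here, not a documented part of the SDK.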

Behavioral Compilation

When tests fail, most tools dump raw output and hope you can figure it out. aricode takes a different approach: it compiles test failures into behavioral specifications.

Each failing assertion becomes a test witness -- a structured description of what a function is supposed to do. Instead of "expected '+14155550100' but got '(415)555-0100'", you get a specification: "formatPhone should transform a parenthesized US number into E.164 format."

aricode then uses the dependency graph to perform root cause analysis. It identifies which function, if fixed, would resolve the most failures at once. Across fix iterations, it tracks which witnesses are resolved, which regressed, and whether the overall trajectory is improving.
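One way to picture the root-cause step is as a counting problem. This is a deliberately minimal sketch, not aricode's actual algorithm: assume the dependency graph tells you which functions lie on each failing witness's path, so the most promising starting point is the function implicated in the most failures.

```javascript
// Toy root-cause heuristic: each failing witness lists the functions on
// its dependency path ("implicated"); return the function that appears
// in the most witnesses, i.e. the fix likely to resolve the most failures.
function rootCause(witnesses) {
  const counts = new Map();
  for (const w of witnesses) {
    for (const fn of w.implicated) {
      counts.set(fn, (counts.get(fn) || 0) + 1);
    }
  }
  let best = null;
  let bestCount = 0;
  for (const [fn, n] of counts) {
    if (n > bestCount) {
      best = fn;
      bestCount = n;
    }
  }
  return { fn: best, resolves: bestCount };
}
```

A real implementation would weight this by graph structure rather than raw counts, but the intuition is the same: one shared dependency often explains many red tests.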

The result is that fixing test failures goes from "read wall of red text and guess" to "here is exactly what is wrong, here is the root cause, and here is the best place to start fixing it."

Knowledge Graph Improvements

The persistent knowledge graph -- the core of aricode's memory -- gets several upgrades in v0.5.0.

The knowledge graph is what makes aricode fundamentally different from stateless AI tools. It remembers what it has learned, and every session builds on the last.

Hooks System

v0.5.0 introduces a hooks system that lets you wire your own automation into aricode's lifecycle. Hooks are shell commands triggered at specific points in that lifecycle.

Configuration is a single JSON file. No plugins to install, no API to learn. If it runs in a shell, it works as a hook.
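As a rough sketch of what such a config could look like (the hook names, commands, and file layout below are illustrative assumptions, not documented schema):

```json
{
  "hooks": {
    "post-edit": "npm test --silent",
    "session-end": "git diff --stat > session-summary.txt"
  }
}
```

Each key names a lifecycle point and each value is an ordinary shell command, which is what makes the "if it runs in a shell, it works as a hook" promise possible.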

Edit Intelligence

Every code change aricode makes now triggers a post-edit analysis pipeline. Before a change is considered done, aricode automatically checks it for problems it may have introduced.

This means aricode catches its own mistakes before they become your problem. It is self-correcting in a way that matters.

Why Local-First Matters

Every major AI coding tool today requires sending your code to someone else's servers. For many teams -- especially those working on proprietary software, in regulated industries, or simply with strong opinions about data ownership -- that is a dealbreaker.

aricode runs entirely on your machine. Your code never leaves your network. You can use Ollama with open-weight models, connect to a self-hosted inference server, or use any OpenAI-compatible cloud provider if you choose to. The point is that the choice is yours.

Local-first does not mean limited. With models like Qwen 2.5 Coder 32B running on consumer hardware, local inference is now genuinely capable. aricode is built to get the most out of these models -- through persistent context, structured tool use, and behavioral understanding of your code.

Get Started

Install aricode with a single command:

$ curl -fsSL https://install.aricode.dev | sh

Then launch it in any project directory and run /init to build the knowledge graph. Run /dream to start your first autonomous exploration.

For the full setup guide, head to the documentation. To explore every feature in depth, see the features page. And follow the project on GitHub -- the full source will be released at v1.

Ready to try it?

aricode is free, open source, and runs entirely on your machine. Install it in 30 seconds and give your local model superpowers.

Read the docs →