
Documentation

Learn how to use Remoroo to run controlled engineering experiments on your codebase.

Overview

Remoroo enables you to define goals, generate safe patches, and measure outcomes automatically.

Start by defining a metric you want to optimize—build time, test coverage, bundle size, or any measurable outcome. Remoroo will propose changes, run tests, and iterate until the goal is met.

Core Concepts

Experiments

An experiment is a controlled attempt to improve a specific metric. Each experiment includes a goal, safety criteria, and success metrics.

Patches

Remoroo generates patches that modify your codebase. Every patch is reviewed and tested, and is accepted or rejected based on results.

Safety Gates

Before any patch is applied, it must pass safety gates including test suites, linting, and custom validation rules.
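The gate-then-apply flow can be sketched as a pipeline of predicates over a candidate patch, where the patch is applied only if every gate passes. This is an illustrative sketch only; the gate names, ordering, and patch representation here are assumptions, not Remoroo's actual internals:

```python
# Illustrative safety-gate pipeline. Each gate is a predicate over a
# candidate patch; a patch may be applied only if all gates pass.
# Gate names and the dict-based patch model are hypothetical.

def run_tests(patch):
    # Placeholder: a real gate would execute the project's test suite.
    return patch.get("tests_pass", False)

def run_linter(patch):
    # Placeholder: a real gate would invoke the project's linter.
    return patch.get("lint_clean", False)

def custom_rules(patch):
    # Example custom rule: forbid edits to generated files.
    return not patch.get("touches_generated_files", True)

SAFETY_GATES = [run_tests, run_linter, custom_rules]

def passes_safety_gates(patch):
    """Return True only if the patch clears every gate."""
    return all(gate(patch) for gate in SAFETY_GATES)

good = {"tests_pass": True, "lint_clean": True, "touches_generated_files": False}
bad = {"tests_pass": True, "lint_clean": False, "touches_generated_files": False}
```

A failing gate short-circuits the check, so the cheapest gates can be ordered first.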

Workflow

  1. Define your goal and success metric
  2. Remoroo analyzes your codebase and generates patch proposals
  3. Patches are tested against your test suite
  4. Results are measured and compared to the goal
  5. Successful patches are accepted; failed ones are rejected and new proposals are generated
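The steps above amount to a single propose-test-measure-decide loop. The sketch below mirrors that loop with toy stand-ins; the patch generator, the always-passing test gate, and the build-time model are all assumptions for illustration, not Remoroo's API:

```python
import random

# Toy stand-ins for the workflow loop: a "patch" is modeled purely as the
# build-time reduction it would achieve, and the test gate always passes.
random.seed(0)

def propose_patch():
    # Hypothetical generator: proposes a fractional build-time change.
    return {"build_time_delta": random.uniform(-0.05, 0.30)}

def passes_tests(patch):
    return True  # assume the test-suite gate passes in this sketch

def run_experiment(baseline, goal_reduction, max_iterations=20):
    """Iterate until a patch meets the goal or the iteration cap is hit."""
    best = baseline
    for i in range(1, max_iterations + 1):
        patch = propose_patch()
        if not passes_tests(patch):
            continue  # rejected: failed a safety gate
        candidate = baseline * (1 - patch["build_time_delta"])
        if candidate <= baseline * (1 - goal_reduction):
            # Goal met: accept the patch and stop.
            return {"accepted": True, "iterations": i, "build_time": candidate}
        best = min(best, candidate)
    # Stop criteria reached without meeting the goal.
    return {"accepted": False, "iterations": max_iterations, "build_time": best}

result = run_experiment(baseline=120.0, goal_reduction=0.20)
```

The iteration cap plays the role of a stop criterion: the loop halts either on success or after a bounded number of attempts.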

Quickstart

Get started with Remoroo in minutes. First, install the CLI:

npm install -g remoroo-cli

Initialize a new experiment configuration:

remoroo init
# Creates .remoroo/config.json

Define your experiment goal:

{
  "goal": "reduce build time by 20%",
  "metric": "buildTime"
}
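A tool consuming this config might load it and extract the target from the goal string. The parse_goal helper and the percentage parsing below are illustrative assumptions; Remoroo's own goal parsing is not documented here:

```python
import json
import re

# The quickstart config, inlined for a self-contained example.
CONFIG = '{"goal": "reduce build time by 20%", "metric": "buildTime"}'

def parse_goal(goal):
    """Extract the target fraction from a goal string like the quickstart's.
    Hypothetical helper, not part of Remoroo."""
    match = re.search(r"(\d+(?:\.\d+)?)\s*%", goal)
    if match is None:
        raise ValueError(f"no percentage found in goal: {goal!r}")
    return float(match.group(1)) / 100

config = json.loads(CONFIG)
target = parse_goal(config["goal"])  # 0.20 for "reduce build time by 20%"
```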

Run your first experiment:

remoroo run

Glossary

Experiment
A controlled attempt to improve a specific metric through automated code changes, testing, and measurement.
Patch
A proposed code change generated by Remoroo to achieve an experiment goal.
Safety Gate
A validation checkpoint that patches must pass before being applied, including tests, linting, and custom rules.
Metric
A measurable outcome that defines experiment success, such as build time, bundle size, or test coverage.
Goal
The target value or improvement threshold for an experiment metric.
Iteration
A single cycle of patch generation, testing, and measurement within an experiment.
Decision
The accept or reject outcome of a patch based on test results and metric improvements.
Context Packing
The process of analyzing large codebases to understand dependencies and generate safe, context-aware patches.
Audit Trail
A complete record of all patches, test results, and decisions for an experiment.
Stop Criteria
Conditions that automatically halt an experiment, such as maximum iterations or safety gate failures.

FAQ