Revibing is the use of modern agentic coding to *systematically* refactor an existing codebase into a new shape, vibe and architecture, with as much of the work as possible automated by AI-driven workflows.
Where Vibe Coding is you and an assistant jamming on new code, revibing is you, an army of agents and a pile of tests slowly bending a legacy system into something cleaner, smaller or more aligned with a new commons, without rewriting everything from scratch.
Revibing is not a naïve “LLM, please rewrite my app” move. It is an orchestration pattern. Static analysis, codemods, AST transforms, golden-master tests and AI code agents are wired together into repeatable workflows that can be run, diffed, rolled back and re-run until the new vibe sticks.
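To make that loop concrete, here is a minimal sketch of one revibe step, assuming a repo under git, a pytest suite as the safety net, and a hypothetical `run_codemod.py` script standing in for whatever transform is being applied:

```python
# Minimal revibe loop: apply one automated transform, verify it against the
# test harness, keep the change if the suite is green, roll it back if red.
# The codemod script and test command are hypothetical placeholders.
import subprocess

CODEMOD = ["python", "run_codemod.py", "src/"]   # hypothetical transform script
TESTS = ["pytest", "-q"]                         # existing verification suite

def run(cmd: list[str]) -> bool:
    return subprocess.run(cmd).returncode == 0

def revibe_step() -> bool:
    if not run(CODEMOD):
        return False
    # Show the diff so a human can review what the transform actually did.
    subprocess.run(["git", "diff", "--stat"])
    if run(TESTS):
        subprocess.run(["git", "commit", "-am", "revibe: automated transform"])
        return True
    # Red suite: roll the whole wave back and try a smaller or different transform.
    subprocess.run(["git", "checkout", "--", "."])
    return False

if __name__ == "__main__":
    print("kept" if revibe_step() else "rolled back")
```

The point is not this particular script but the shape: every transform runs inside a harness that can either prove it safe or throw it away.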
# Modern Techniques Behind Revibing

Modern revibing usually stands on three legs: **understanding**, **transformation** and **verification**.

Understanding is handled by a mix of static analysis and code-aware AI. Tools build symbol graphs, call graphs and dependency maps, then agents summarise them into human-readable architecture docs, endpoint catalogs and “how does X actually work” narratives. Embedding indexes of the repo allow an assistant to find all usages of a concept semantically, not just by text search.

Transformation is done with AST-level codemods and AI-generated patches. Deterministic engines such as codemod runners and OpenRewrite provide precise, structured edits across thousands of files, while LLMs propose or refine transformations where patterns are irregular or context-dependent. Some modern “codemod 2.0” frameworks explicitly combine static detection with LLM-driven edits so that the right technology is used for each part of the job.

Verification closes the loop. Golden-master tests, approval tests and existing integration suites are run after each wave of changes. Differential testing and record-and-replay capture pre-refactor behaviour and compare it to post-refactor behaviour. Agents interpret failing tests and logs, propose fixes or rollbacks, and in some setups can automatically open and iterate on pull requests until the suite goes green again.
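As a concrete illustration of the transformation leg, here is a minimal AST-level codemod sketch using Python's standard `ast` module; the `old_api.fetch` → `new_api.fetch_json` mapping is a made-up example, not a rule from any real migration:

```python
# A deterministic AST codemod: rewrite every call to old_api.fetch(...)
# into new_api.fetch_json(...). Both names are hypothetical examples.
import ast

class RenameFetchCalls(ast.NodeTransformer):
    def visit_Call(self, node: ast.Call) -> ast.Call:
        self.generic_visit(node)
        func = node.func
        if (isinstance(func, ast.Attribute)
                and isinstance(func.value, ast.Name)
                and func.value.id == "old_api"
                and func.attr == "fetch"):
            node.func = ast.Attribute(
                value=ast.Name(id="new_api", ctx=ast.Load()),
                attr="fetch_json",
                ctx=ast.Load(),
            )
        return node

source = "data = old_api.fetch(url, timeout=5)\n"
tree = RenameFetchCalls().visit(ast.parse(source))
ast.fix_missing_locations(tree)
print(ast.unparse(tree))  # data = new_api.fetch_json(url, timeout=5)
```

In a real revibe the same transformer would be applied file by file across the repo, with an LLM only handling the call sites the deterministic rule cannot match cleanly.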
# Examples Of Revibing As A Strategy

Large organisations have been doing proto-revibing for years with non-AI codemods. Facebook’s codemod and jscodeshift patterns show how AST tools can automate mass refactors, like API migrations or JSX syntax upgrades, across huge JavaScript codebases with human approval at the end. Companies like Google have used tools such as Clang-based refactorers to update C++ APIs across millions of lines of code, turning tasks that would have taken months of manual editing into scripted, repeatable transformations with safety guarantees from the compiler.

More recently, commercial platforms have started mixing these deterministic techniques with LLMs. Auto-refactoring platforms built on OpenRewrite can run refactors across thousands of repos at once, and “codemod 2.0” frameworks integrate rule-based detection with LLM-powered code generation to handle tricky cases where patterns are fuzzy.

AI agents such as self-hostable coding assistants and continuous-AI tools now let you trigger refactor workflows from your IDE, CLI or CI. You can point an agent at a repo, say “upgrade this framework, extract these modules, preserve all tests”, and watch it run codemods, propose patches and iterate on failures while you stay in the loop as reviewer.

These are not yet magic “rewrite my monolith into microservices” buttons, but they are real, production-tested examples of revibing: using automated refactoring and AI-assisted patches to lift big, boring transformations out of human heads and into repeatable workflows.
# What Is Easy To Revibe

Revibing is easiest when the change is **local, mechanical and well observed**.

Renaming and reshaping APIs is a natural fit. If you can express “old call pattern → new call pattern” as an AST rule, codemods can migrate thousands of call sites. LLMs then handle edge cases, missing imports and docstring updates where patterns are slightly irregular.

Structural refactors inside one service, such as extracting modules, flattening deep inheritance, or moving from callbacks to async/await, are also tractable. Static analysis finds the call graph, codemods perform the transformations, and tests confirm that behaviour is unchanged.

UI library migrations, like moving from one component library to another, are often revibable if there is a clear mapping from old props and components to new ones. Automated transforms can do the bulk replacement, while agents help with styling and layout tweaks that still need a human eye.

Cross-version language upgrades are classic revibing territory. Python 2 to 3, JavaScript syntax modernisation, or framework major version upgrades can be codified as repeatable transformations with tests as safety nets. AI agents can then mop up the weird corners that generic migration tools miss.

In all these cases, a vibe-coding workflow pairs well with revibing. You ask an agent to sketch the transformation rules, run them on a slice of the code, inspect the diffs together, refine, then roll out to the full codebase once it feels right.
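As a rough sketch of that “run on a slice, inspect the diffs” step, the snippet below applies a stand-in transform to a handful of files in memory and prints a unified diff for review before anything is written back; the `src/payments` path and the text-level rename are purely illustrative:

```python
# Preview a codemod on a small slice of the repo: transform in memory,
# show a unified diff, and only write files back once a human approves.
import difflib
from pathlib import Path

def transform(source: str) -> str:
    # Stand-in for the real AST codemod; a trivial text-level rename for illustration.
    return source.replace("old_api.fetch(", "new_api.fetch_json(")

def preview(paths: list[Path]) -> None:
    for path in paths:
        before = path.read_text()
        after = transform(before)
        diff = difflib.unified_diff(
            before.splitlines(keepends=True),
            after.splitlines(keepends=True),
            fromfile=f"a/{path}",
            tofile=f"b/{path}",
        )
        print("".join(diff) or f"{path}: no changes")

# Run on a small slice first; "src/payments" is a hypothetical directory name.
preview(sorted(Path("src/payments").glob("*.py"))[:5])
```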
# What Is Hard To Revibe

Revibing becomes hard when behaviour is **implicit, emergent or socio-technical** rather than local and mechanical.

Subtle business rules that live in a tangle of conditionals, feature flags and database quirks are difficult for tools to understand purely from static analysis. An AI can propose a cleaned-up version, but without very strong tests and domain knowledge there is real regression risk.

Performance-critical sections, tight loops, caching layers and concurrency constructs are hard to touch automatically without changing latency or throughput in surprising ways. Static tools can warn, but you need realistic benchmarks and load tests to know whether a refactor has quietly broken your SLOs.

Security, privacy and compliance concerns resist blind transformation. Automated refactors might accidentally weaken invariants around input validation, sanitisation or access control. AI-driven tools can help identify vulnerabilities, but wholesale structural changes still demand careful human review.

Cross-cutting concerns like logging, observability and feature flag behaviour can be encoded in patterns, but their *meaning* is tied to operational practice. Removing or relocating them at scale may break dashboards, alerts or incident workflows in ways that no unit test sees.

Finally, social features and UX “vibe” are hard to revibe purely mechanically. You can rebuild the same endpoints and screens, but you cannot automatically recreate a community, a moderation culture or a set of expectations. Those have to be grown again, with the code merely making them easier or harder.
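Back on the performance point: since “did not blow the latency budget” is itself checkable, one hedged guard for a performance-sensitive refactor is a before/after micro-benchmark like the sketch below; the two hot-path functions and the 10% budget are illustrative assumptions, and real SLO verification still belongs in realistic load tests:

```python
# A crude latency guard: compare a refactored hot path against the original
# and fail if the median slows down by more than an agreed budget.
# old_hot_path / new_hot_path and the 10% budget are illustrative assumptions.
import statistics
import timeit

def old_hot_path() -> int:
    return sum(i * i for i in range(10_000))

def new_hot_path() -> int:
    return sum(map(lambda i: i * i, range(10_000)))

def median_seconds(fn, runs: int = 30) -> float:
    return statistics.median(timeit.repeat(fn, number=50, repeat=runs))

old_t, new_t = median_seconds(old_hot_path), median_seconds(new_hot_path)
budget = 1.10  # allow at most a 10% regression
print(f"old={old_t:.4f}s new={new_t:.4f}s")
assert new_t <= old_t * budget, "refactor exceeded the latency budget"
```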
# Revibing As A Practice

Revibing, in the agentic sense, is a disciplined loop. You stand up the old system as a golden master. You instrument it with tests, record-and-replay and observability. You use static tools and AI agents to map its structure and behaviour. You design small, automated transformations and run them under the gaze of that test harness. You keep iterating until the codebase expresses the new vibe you want, in a shape that is easier to maintain, govern and share.

The promise of revibing is that you do not have to choose between living forever in a decaying legacy system or jumping into a risky full rewrite. With modern codemods, ActivityPub-aware agents and vibe-first workflows, you can steer an existing ship, a little at a time, towards a different future.
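As a minimal sketch of the golden-master step, the harness below records the legacy system's responses for a fixed set of inputs once, then replays them after every wave of changes; `legacy_handler` and the request list are hypothetical stand-ins for the real system:

```python
# Minimal golden-master harness: record the legacy behaviour once, then
# compare every refactored run against that recording.
# legacy_handler and REQUESTS are hypothetical stand-ins for the real system.
import json
from pathlib import Path

GOLDEN = Path("golden_master.json")
REQUESTS = [{"op": "total", "items": [2, 3]}, {"op": "total", "items": []}]

def legacy_handler(request: dict) -> dict:
    # Stand-in for the behaviour being preserved.
    return {"total": sum(request["items"])}

def record(handler) -> None:
    GOLDEN.write_text(json.dumps([handler(r) for r in REQUESTS], indent=2))

def replay(handler) -> None:
    expected = json.loads(GOLDEN.read_text())
    actual = [handler(r) for r in REQUESTS]
    assert actual == expected, f"behaviour drifted: {actual} != {expected}"

if __name__ == "__main__":
    if not GOLDEN.exists():
        record(legacy_handler)   # run once against the old system
    replay(legacy_handler)       # point this at the refactored handler after each wave
    print("golden master matches")
```

Swap the refactored handler into `replay` after each wave of transformations; as long as the assertion holds, the behaviour the recording captured has been preserved.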
# See