# Unified Linting
I maintain my own collection of ESLint rules and flat configs. That means I have seen what lint migrations look like from the inside.
The move from ESLint 8 to ESLint 9 was painful in places, but I ended up happy with the result. Flat config made the model feel more correct — easier to reason about and easier to maintain.
Even so, the bigger problem did not go away.
Rules still need maintenance. Dependencies still drift. Some plugins move quickly, some move slowly, and some more or less stop moving. I had to drop support for eslint-plugin-jsx-a11y in my own setup, not because accessibility stopped mattering, but because the maintenance story around it was not strong enough. React rules are among the last to migrate officially. I do not think that is malicious — teams have limited time. But as a user and maintainer, it still bothers me.
The rule still makes sense. The intent is still valid. What breaks is the tool-specific packaging around it.
That pattern runs deeper than individual plugins. Today, rules are tightly coupled to the tool that runs them. Your ESLint config does not help Biome. Your Biome config does not help Oxlint. If you move stacks, or even upgrade a major version, you often end up re-expressing the same intent in a new format.
That is wasteful. As developers, we already know what we want to enforce. We have years of practice, conventions, and hard-earned lessons. Most of those ideas are easier to describe in natural language than in a plugin API. Writing, maintaining, and migrating custom lint rules is usually the expensive part — not deciding which rules matter.
AI makes this gap more obvious.
Large language models are very good at understanding natural language instructions. They are much worse at guessing intent from a pile of tool-specific config, plugin names, and obscure option objects. If the real source of truth is hidden inside ESLint plugins, then every migration to Biome, Oxlint, or the next fast linter becomes harder than it should be.
## The shift
Modern linters are doing the right thing in one area: they are getting faster. They care about performance, parallelism, and low overhead. That matters.
But there is still a wall between how humans describe rules and how tools consume them.
Right now, the industry often solves that by building compatibility layers. A new linter adds support for ESLint plugins, or tries to emulate ESLint config behavior. That helps adoption, but it does not really solve the problem. It just extends the life of the old format.
What I want is a linter designed for the reality we are already in: developers write rules in human language, AI helps turn those rules into executable checks, and the linter handles caching, execution, and reporting in a predictable way.
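To make that concrete, here is a minimal sketch of such a pipeline, assuming nothing beyond this post: every name (`RuleDoc`, `compileRule`, `getCheck`) is hypothetical, and the "AI translation" step is stubbed out with a toy regex check standing in for a model-generated one.

```typescript
import { createHash } from "crypto";

// Hypothetical shape of a rule authored in natural language.
interface RuleDoc {
  id: string;
  markdown: string; // the human-written rule text
}

// A compiled check: source code in, diagnostic messages out.
type Check = (source: string) => string[];

// Stand-in for the AI translation step. A real system would call a
// model here; this toy version flags <img> tags without an alt
// attribute, matching the example rule used throughout this post.
function compileRule(doc: RuleDoc): Check {
  return (source) => {
    const diagnostics: string[] = [];
    for (const match of source.matchAll(/<img\b[^>]*>/g)) {
      if (!/\balt=/.test(match[0])) {
        diagnostics.push(`${doc.id}: <img> is missing alt text`);
      }
    }
    return diagnostics;
  };
}

// Cache compiled checks by rule content, so the expensive translation
// step only reruns when the markdown actually changes.
const cache = new Map<string, Check>();

function getCheck(doc: RuleDoc): Check {
  const key = createHash("sha256").update(doc.markdown).digest("hex");
  let check = cache.get(key);
  if (!check) {
    check = compileRule(doc);
    cache.set(key, check);
  }
  return check;
}

const rule: RuleDoc = {
  id: "alt-text",
  markdown: "# Images must have alternative text",
};
const check = getCheck(rule);
console.log(check(`<img src="/team.jpg" />`));
```

The point of the sketch is the split of responsibilities: the rule stays plain text, the translation step is replaceable, and the linter's own job reduces to caching, execution, and reporting.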
## What the source of truth should look like
I think the source of truth should be plain text files, probably Markdown with a bit of metadata.
This format is not even unusual. If you have read ESLint rule documentation before, you already know the shape of it. ESLint rules are documented in Markdown. They explain what the rule does, why it exists, and usually show correct and incorrect examples. In a way, the documentation is already very close to the real source of truth. I am just arguing that we should treat that format more seriously.
Something like this:

```markdown
---
description: Images must have alternative text
severity: error
category: accessibility
source: eslint-plugin-jsx-a11y/alt-text
appliesTo:
  - jsx
  - tsx
---

# Images must have alternative text

## Why

Screen readers rely on alt text to describe images.
Without it, important content can become inaccessible.

## Rule

Every `<img>`, `<area>`, `<input type="image">`, and `<object>` element
must provide meaningful alternative text.

## Examples

### Correct

<img src="/team.jpg" alt="Our engineering team at the meetup" />

### Incorrect

<img src="/team.jpg" />
```
This format is boring in the best way. Humans can read it. AI can read it. Git can diff it. You can review it in a pull request without reverse-engineering a plugin.
The exact format does not matter. What matters is that the rule explains three things clearly:
- What the rule checks
- Why the rule exists
- What good and bad code look like
That is enough context for both people and machines.
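Consuming a file like that is cheap. As a sketch under stated assumptions: the field names follow the example above and nothing here is a real tool's format, and the frontmatter handling is deliberately toy (flat `key: value` pairs only; a real implementation would use a YAML parser).

```typescript
interface ParsedRule {
  meta: Record<string, string>;
  sections: Record<string, string>;
}

// Split the frontmatter from the body, then index the body by heading.
function parseRule(text: string): ParsedRule {
  const meta: Record<string, string> = {};
  const sections: Record<string, string> = {};

  const match = text.match(/^---\n([\s\S]*?)\n---\n?([\s\S]*)$/);
  const [frontmatter, body] = match ? [match[1], match[2]] : ["", text];

  // Toy YAML handling: flat `key: value` pairs are all the example needs.
  for (const line of frontmatter.split("\n")) {
    const kv = line.match(/^(\w+):\s*(.+)$/);
    if (kv) meta[kv[1]] = kv[2];
  }

  // Collect the text under each markdown heading.
  let current = "";
  for (const line of body.split("\n")) {
    const heading = line.match(/^#{1,3}\s+(.*)$/);
    if (heading) {
      current = heading[1];
      sections[current] = "";
    } else if (current) {
      sections[current] += line + "\n";
    }
  }
  return { meta, sections };
}

const parsed = parseRule(`---
description: Images must have alternative text
severity: error
---
# Images must have alternative text

## Why

Screen readers rely on alt text.
`);
console.log(parsed.meta.severity); // "error"
```

Everything downstream of this parse, including the AI translation, gets exactly the three things listed above: what the rule checks, why it exists, and what good and bad code look like.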
## Wrapping up
For years, lint tooling has mostly asked developers to learn the linter’s internal model. Plugin API, rule schema, AST utilities, config shape, lifecycle hooks, version compatibility, and so on. That made sense when the tool had to own the full workflow.
But after maintaining ESLint config and rules through these migrations, I do not think the current tradeoff is good enough anymore. ESLint 9 moved things in the right direction. I am now watching the ESLint 10 migration story take shape, and it still reminds me of the same underlying problem: we keep rebuilding the wrapper around the rule instead of protecting the rule itself.
When a rule has a real audience, real value, and clear intent, it should not become fragile just because the plugin lifecycle is hard, or because a package falls behind, or because one ecosystem moves faster than another.
AI systems are now genuinely good at understanding intent written in plain English. The right move is to push complexity down into the translation and execution layers, not keep it in the authoring experience.
I want a linter where the hard part is deciding what good code looks like, not remembering how to express that decision for one specific tool.