Not all MISRA rules are equal. Some can be checked perfectly by any tool; others are fundamentally impossible to verify with certainty. Knowing the difference saves you from arguing with your tool, tells you when your compliance is at risk, and spares you from making a fool of yourself.
After two articles (part 1, part 2) on MISRA guidelines, you now know the categories (Mandatory, Required, Advisory), the compliance process, and how to reduce your warning count by 80%. But there is one more insight that changed how I think about guideline violations entirely.
It started when I noticed something on a customer project: Two different Static Analysis tools were analyzing the same C codebase, and one flagged 180 warnings, the other 340. Same code, same guidelines, but strikingly different results. My first instinct was to blame the cheaper tool (a classic reflex, I know). But the real answer turns out to be one of the more useful things I have learned about working with these tools.
Not all rules can be checked perfectly
Some MISRA rules have a yes-or-no answer that any tool can compute directly from the source code. Others depend on information that simply is not available at analysis time. This distinction has a name: decidability. If that sounds like computer science theory — it is. But bear with me, because it is also genuinely useful.
- A rule is decidable if a tool can always give a correct answer. No guessing, no approximation. Think of a passport check at border control. Is this passport expired? Check the date! Any officer, any country, same answer. Binary, unambiguous, no judgment required.
- A rule is undecidable if no tool can guarantee a correct answer for all programs. Think of a border security risk assessment. Is this traveler a threat? You can look for warning signs, check databases, apply screening criteria. But a definitive answer? No system can give you that, because the truth only reveals itself at runtime. You can always get False Positives or False Negatives.
For example, take these two MISRA rules:
- Rule 5.1 (external identifiers must be unique) is decidable. The tool reads the variable and function names, checks for clashes, done. Any tool, same answer.
- Rule 1.3 (no undefined behavior) is undecidable. Undefined behavior depends on data values, execution paths, and hardware behavior, all of which only exist at runtime. No tool can verify this rule with certainty; the best it can do is approximate.
How do we know whether a rule is (un)decidable? My rule of thumb: what you can see from the syntax is usually decidable; what requires value and flow analysis is usually undecidable.
If you are arguing that a violation of a decidable rule is a False Positive, you are probably wrong.
Where MISRA rules fall on the spectrum
Now here is where it gets practical. For the MISRA coding guidelines, we can have a look at the guideline document to see which rules are decidable. The outcome is surprising.
MISRA C:2023 has 200 rules, and as discussed before, they fall into three categories:
| Category | Total | Decidable | % |
|---|---|---|---|
| Mandatory | 23 | 7 | ~30% |
| Required | 137 | 106 | ~77% |
| Advisory | 40 | 36 | ~90% |
Two things jump out immediately:
- Advisory rules are almost entirely decidable. Most of them can be checked with certainty. That means when your tool fires an advisory warning, it is almost always a real violation, not a False Positive. Decidability tells you the violation exists.
- Mandatory rules are surprisingly difficult. Only 7 of 23 mandatory rules are decidable. The rest, including Rule 1.3 (“no undefined behavior”), depend on runtime properties that no static tool can fully predict.
This gives us an interesting insight: For the rules that matter most, the undecidability is highest. That is an argument for choosing your tool based on precision, not just price. A tool that approximates Rule 1.3 more carefully will find more real bugs and produce fewer false alarms on the rules you cannot afford to get wrong.
Advisory warnings are your most reliable signal. 90% of them are decidable.
What this means for Deviation Handling
Here is the practical consequence I wish someone had told me earlier: decidability tells you which responses to a warning are even possible. The table makes it concrete:
| | Mandatory | Required | Advisory |
|---|---|---|---|
| Decidable | Fix the code. No exceptions. “False Positive” is off the table. | Fix the code, or write a proper deviation record. “False Positive” is not a valid reason. | Deviate, fix, or skip, but don’t call it a False Positive. |
| Undecidable | Investigate: if confirmed FP, document and suppress. If real: fix the code. | Investigate: if confirmed FP, document and suppress. No deviation record needed. If real: fix or deviate. | Investigate: if FP, dismiss. If real: your call. |
The hardest case is mandatory + undecidable. You cannot deviate from a mandatory rule, but the tool might still be wrong. If you can’t fix your code, then your only path is to demonstrate it is a False Positive and document that argument carefully.
The most common case is required + decidable. 106 of 200 rules fall here. You cannot blame your tool for creating extra work. Your choice is to either fix the code or write a deviation record. And as discussed in part 1, fixing is almost always cheaper long-term.
For all undecidable rules, Formal Methods can help to identify False Positives. If they can prove the absence of an error, you can skip the code change and the deviation record. That alone can save weeks of deviation paperwork.
The most common case is a violation of a required + decidable rule. You cannot blame your tool, and you cannot ignore it either. Fix or deviate with a proper reason.
Pro tip: If you frequently find yourself justifying a violation of an advisory rule with the comment “False Positive”, you are telling your colleagues that you have no clue what you are doing, for two reasons: First, it is probably not a False Positive, since most advisory rules are decidable. Second, you are not forced to address each advisory violation anyway, see part 2.
Multiple tools, supply chains, and open source
Back to that customer project with two tools and two very different warning counts.
In the real world, multiple Static Analysis tools often operate on the same codebase. This is common in:
- Automotive supply chains: an OEM may mandate one tool, a Tier 1 supplier uses another, a Tier 2 component vendor uses a third.
- Open source projects: the Zephyr RTOS, for example, has many contributors, and they all have their favorite analysis tools.
- Safety-critical certification: some standards recommend independent tools as a cross-check.
When tools disagree, undecidability explains most of it (the rest comes down to differing interpretations of the guideline, or simply tool bugs). Think back to passport control: every border system in every country will flag the same expired passport. The vendor does not matter, since the question is decidable. Decidable MISRA rules work the same way. If tool A and tool B both check Rule 5.1, they will find the same violations. If they don’t, one of them has a bug or is configured differently.
Undecidable rules produce inconsistent results because each tool is making different approximations with different precision. One tool might track data flows more aggressively and flag more possible undefined behavior. Another might be more conservative. Neither is wrong, exactly. They are making different tradeoffs in the face of theoretical impossibility.
This gives you a practical strategy for multi-tool workflows:
When using multiple Static Analysis tools on your code, start with resolving the violations of decidable rules. This appeases all tools at once.
In a supply chain context, this is also useful for conversations between parties. If your customer questions a warning from your tool, check whether the rule is decidable. If it is, the discussion is over. If the rule is undecidable and two tools flag it differently, you have a legitimate technical conversation: which approximation fits your shared context? That is a much better starting point than two teams arguing past each other for weeks.
The takeaway
I spent years arguing with tools before I understood this distinction; it would have saved me a lot of embarrassing conversations. One question changes everything: “Is this rule decidable?” It tells you whether to argue, investigate, or just fix the code.
That question settles things fast, with your team, your supplier, your customer, and your auditor. And it helps prevent coding guidelines from becoming a numbing checkbox exercise.
If you work across multiple tools or in a supply chain, decidable rules are your shared baseline. And if you manage developers, decidable violations sitting unresolved are a documented quality risk. Now you have the vocabulary to say exactly why.
What’s next?
Three articles on MISRA is probably enough for now. Next time, we zoom out and look at something bigger.
