The big weakness of Static Analysis is that it can throw needless warnings, even if our software is “perfect”. In this article, we learn why Static Analysis has to do that, why it can be wiser to change your code instead of managing warnings, and some tricks to keep your sanity in the process.
I often talk to users of Static Analysis who have a surprisingly high confidence in saying “the tool is wrong, there is no bug”. This is known as a False Positive. What happens next is not only ironic, but also risky and costly: these users often suppress the warnings to silence the tool.
They might be right that time, but eventually and inevitably, they will be wrong and suppress warnings of real bugs, which then backfires. Moreover, repeated suppression of warnings creates a lot of extra work, but more on that later.
There is a better way, with a strong recommendation from the famous software engineer Bruce Lee. But before we go there, let’s start with a good old analogy, to make sure we really understand what False Positives are.
An Analogy – Your domestic fire detector
If you are reading this, chances are quite high that you have one of these devices stuck to your ceiling:

A smoke or fire detector emits an unpleasantly loud alarm whenever it thinks there is a fire or toxic smoke. This little thing is there for your safety, so that you don’t miss a fire and get injured or worse. A bit like Static Analysis.
What is a False Positive, really?
A False Positive is when Static Analysis reports something as a warning, but it isn’t actually a real problem. This is also called a “false alarm”, or colloquially “noise”. Example with the fire detector: you are preparing your morning sandwich in a toaster, but it gets a little too toasty and the fire alarm goes off. There is no actual fire, and no real problem. Apart from an empty stomach, perhaps.

The other way around, there is also the concept of a False Negative. This is when there is a real bug (smoke from a fire) but it isn’t reported. Some people (like me) believe that is even worse.
In total, there are 4 possible scenarios:
| | There really is a problem | There isn’t any problem |
|---|---|---|
| Static Analysis reports a warning | True Positive (this article!) | False Positive (this article!) |
| Static Analysis reports nothing | False Negative | True Negative |
Now, the fun begins. It is actually not so easy to agree on what counts as a False result.
In our fire alarm analogy, some people might say “the detector is bad, I am just having toast”, others might say “it does exactly what it is supposed to do”.
And it’s similar with Static Analysis. Let’s say it warns about a null pointer dereference that is impossible in practice due to prior checks. Then this conversation could happen:
- Typical Developer: There is no null pointer, because this is being checked before. This is a False Positive.
- Typical Static Analysis guy: The code that does the check was not provided to the Static Analysis tool. Without it, it is plausible that there could be a null pointer. This is a True Positive.
Both have a point here. But the latter interpretation is more useful, because it tells us what we could do to get rid of the warning (add more code, also called context). Thus, let’s agree on the following definition:
A False Positive is a warning for a bug that is not actually possible in the program, given the analysis context.
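To make that scenario concrete, here is a minimal, hypothetical C sketch (all names are made up, not taken from any real project): the NULL check lives in a caller that the tool never sees, so the dereference looks suspicious to it.

```c
/* sensor.h (hypothetical) */
typedef struct { int value; } sensor_t;
sensor_t *sensor_get(void);
void      sensor_update(sensor_t *s);

/* main.c -- NOT handed to the analyzer */
void poll(void)
{
    sensor_t *s = sensor_get();
    if (s != NULL) {             /* the check the developer has in mind */
        sensor_update(s);
    }
}

/* sensor.c -- the only file handed to the analyzer */
void sensor_update(sensor_t *s)
{
    s->value = 42;               /* tool warns: "s" may be NULL */
}
```

Whether you call that warning True or False, the definition above tells you how to make it go away: give the tool the missing context.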
Telling True from False Positives is tricky and risky
Now that we agree on what a False Positive is, I have bad news: in practice, we rarely know whether a warning is False, because we don’t know for sure if there really is a bug.
Again, comparison with the fire detector: When it goes off, we can’t be 100% sure that there is no fire. Maybe it was just our toasty toast, or maybe there is a real fire and we just don’t see it yet. At least, it is worth reviewing the situation before you grab the broom and suppress the alarm.
Unfortunately, such a review has to be based on human judgment, since Static Analysis (and a fire alarm) itself cannot help you identify the False Positives (otherwise, why would it have shown them in the first place…?). A human review could consider:
- Missing context: Does the event depend on external inputs? Do we know more than just this code? (E.g., I can see that the toast is smoking)
- Tool limitations: Some tools can’t handle dynamic constructs, reflection, or complex conditions (e.g., toast smells like smoke, the cheap sensor cannot distinguish. Get a better one?)
- History: If you’ve seen similar False Positives before, you may recognize the pattern (e.g., every morning your toast causes a fire alarm. Do you have neighbors, by the way?)
- External analysis: Cross-verifying with dynamic analysis, code reviews, or other tools can help confirm (e.g., get a second fire detector, or ask your partner to look for a potential fire together)
However, humans are very successful at making mistakes, as we have discussed before. Therefore, wrong decisions will happen and eventually bugs (fires) will be missed. That includes me: after 15 years with Static Code Analysis, I still occasionally find that the tool was cleverer than me.
Mistaking a True Positive for a False one means that we miss a real bug and take a real risk.
Why don’t we just build better Static Analysis tools?
The answer is simple: Building a Static Analysis tool with zero False Positives is impossible.
I’ll spare us the theory, but it is important to know that there is a mathematical proof (Rice’s theorem) stating that non-trivial properties of programs are generally undecidable. In other words, it is impossible to predict each and every control and data flow in all programs precisely. Therefore, some verification questions must remain unanswered.
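To illustrate with a small, hypothetical C sketch (not from any real project): whether the division below can ever divide by zero depends on whether the loop can leave sum at 0 for some input, and that kind of question cannot always be answered precisely for arbitrary code and callers. The tool must either warn (risking a False Positive) or stay silent (risking a False Negative).

```c
/* A minimal sketch, assuming 'data' and 'n' come from outside the analyzed
   code. Can 'sum' be 0 when we divide? In general, the tool cannot decide
   this for every possible input and every possible caller. */
int normalize(const int *data, int n)
{
    int sum = 0;
    for (int i = 0; i < n; i++) {
        sum += data[i];          /* positive and negative values may cancel out */
    }
    return 1000 / sum;           /* possible division by zero -- warn or not? */
}
```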
(If you can prove the opposite, please don’t hesitate to publish a paper and block your calendar for receiving the Nobel Prize of Computer Science. I personally believe this applies to fire detectors, too, and I call it Martin’s Toasty Axiom.)
Now, Static Analysis has to do something with this unavoidable uncertainty:
- Option 1 (No False Negatives): If you want a Static Analysis that misses no bug, every uncertain case must become a warning. That means we will get False Positives, i.e., we get noise.
- Option 2 (No False Positives): If you want a Static Analysis that has no noise, every uncertain case is hidden from the user. That means we will get False Negatives, i.e., we miss bugs.
In reality, most tools are somewhere in between. The trick is to balance between the two extremes to get high precision. And it has to be that way – just imagine how useless a tool would be that never reports any bug on any code. It would have no False Positives, but that’s not what we wanted either…
If anyone tries to sell you a Static Analysis tool with zero False Positives, it means that tool misses bugs. On the other hand, if anyone tries to sell you a tool that does not miss any bug, it means that tool must have False Positives.
Everything else is – according to math – just evil marketing.
For the rest of this article, I will no longer distinguish between True and False Positives – because, as explained, we typically can’t be sure which is which. Let’s generously call it “Tool Warnings”.
Be Water, My Friend
Finally we get to the senior developer Bruce Lee. Instead of living with the uncertainty and risk of misclassified warnings, and putting up resistance against the tool (reviewing warnings, suppressing them, perhaps repeatedly…), why not adapt your code until the tool is silent?

This may sound a bit strange, but it can be a more sustainable strategy, and I have seen very successful teams working that way. They don’t expect the tool to be perfect, and they don’t fight it either. They go the path of least resistance, which often means changing coding style until the tool is silent:
- That way, you don’t miss bugs by accidentally suppressing True Positives.
- That way, you don’t have to think much with your flawed human processor.
- That way, you can automate the verification checks more easily.
- That way, you don’t have to periodically justify your suppressions in code reviews (especially when you develop for MISRA compliance).
It can be more effective, and safer.
In terms of fire detectors: Maybe it’s not a brilliant idea to make black toast every morning? Or, if you insist, use a cooker hood to keep your neighbors happy, perhaps?
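As a hypothetical C sketch of “being water”, reusing the made-up sensor_t example from earlier: instead of suppressing the NULL warning, move the guard next to the dereference.

```c
/* Before: the tool cannot see the caller's NULL check and warns. */
void sensor_update(sensor_t *s)
{
    s->value = 42;               /* warning: "s" may be NULL */
}

/* After: the guard sits right next to the dereference. The warning is gone,
   and the function stays robust even if a future caller forgets the check. */
void sensor_update(sensor_t *s)
{
    if (s == NULL) {
        return;                  /* or report an error, depending on your design */
    }
    s->value = 42;
}
```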
A blueprint to handle tool warnings
In practice, here is how to take the path of least resistance while minimizing the number of bugs that slip through. For each tool warning, consider the following steps:
- Change your code: Regardless of whether the warning seems true or false, can you change the code to make it go away, and would it be a quick change? If yes, don’t think too much and do it right now. It also improves maintainability. And just in the odd case that the tool is right, you have prevented a bug. If you don’t want to or can’t change the code, continue with the next step.
- Understand thoroughly: Is the warning pointing at a real problem? Tip: Many tools can take assumptions as inputs, which can be used to poke around and investigate what the tool really knows. I love putting assertions in the code to check what the tool knows (see the sketch after this list).
- If it looks like a real bug, stop here. Either fix the code or tell the owner to fix it. Use a bug tracker, but don’t suppress the warning…
- If it looks like a False Positive, proceed with the next step. Bonus points for reporting this to the tool maker, especially if you find a pattern that occurs often.
- Tune the tool: Before you suppress, try this. Unlike the fire detector, you can configure most static analysis tools! Can you improve the tool settings or add analysis context that may make the warnings go away? More often than not, this resolves multiple warnings at once. If not, continue with the next step.
- Suppress the warning: Document the following points (if your code is just a hobby project, you may be a bit more relaxed), and then suppress the warning:
- Why do I think it is a False Positive?
- What is the worst impact if that warning is true? Can I live with it?
- What is the future effort of maintaining this deviation? Will my judgement also be correct when the code or context changes slightly?
- Did other methods like Dynamic Analysis yield different results?
- Let somebody review your decision.
- (Sanity check: If you ended up writing a justification for a mandatory MISRA guideline, go back to step 1. Something went wrong.)
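Here is a small, hypothetical C illustration of the assertion trick from the “Understand thoroughly” step: by writing your assumption as an assert, you can check whether the tool actually knows it (details vary by tool; some treat the assertion as a fact to assume, others as a claim to verify).

```c
#include <assert.h>

/* Hypothetical sketch: the tool warned about a possible division by zero.
   The assertion encodes what we believe about 'len'. If the tool also flags
   the assertion as possibly failing, it does not know that len is positive
   here -- a hint that context is missing rather than that the code is fine. */
int average(const int *values, int len)
{
    assert(len > 0);

    int sum = 0;
    for (int i = 0; i < len; i++) {
        sum += values[i];
    }
    return sum / len;            /* the original warning pointed here */
}
```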
And, needless to say, just like with the fire detector, you shouldn’t simply turn the tool off globally. In some cases, it might be worth putting it in another room, i.e., excluding some sources from the analysis.
But wait, I have too many warnings to follow that!
Okay, so for some reason you have a long list of warnings. Don’t panic, that’s the nature of things. In this case, it is indeed not a good strategy to go over all warnings, because some unfavorable psychological effects will set in:
- Developer fatigue: People start ignoring all warnings, even when they change the code.
- Tool devaluation: Teams stop trusting or using the tool.
There are, really, only three possible methods to handle that:
- Prioritize the warnings that you see. Not all warnings are equal:
- Warnings related to program integrity (e.g., out-of-bounds array access, division by zero, …) should be reviewed first
- Ask the tool maker if the warnings have different confidence levels – sometimes they do
- Apply baseline filtering: Analyze your software once, and save the results. This is your baseline. Compare any new analysis result to this, and focus on the new warnings only. Over the next months, make time to regularly work on the baseline itself.
- Check your tool configuration and provide context (see above, Static Analysis is not as rigid as a fire alarm).
These are proven recipes to avoid being overwhelmed by legacy code, and still get benefits from Static Analysis. Alternatively, try this (not recommended by me):

Takeaway: The perfect fire detector does not exist
Let’s say you adapt your code more to the tool: will it be magic? No. You might fix too much. You will be annoyed occasionally. But at least you won’t see the same warnings again next week, you will have truly eliminated risks, and you will not have to re-evaluate suppressions when your code changes. That’s a silver lining, and perhaps enough reason to change your style.
Most importantly, Static Analysis is your friend. And just like your friend, you should not expect it to be 100% perfect.
What’s next?
We will discuss Defensive Coding.