Save Time With A Smart Contract Audit Checklist
Why Audits Slip
If you’ve ever scheduled a smart-contract audit, you know the sinking feeling: the code is “ready,” the auditors are booked, and yet the process drags on. What you thought would take three weeks stretches into six or eight. The cost balloons, your launch slips, and investor updates get awkward.
Why does this happen?
The answer isn’t usually that auditors are slow or that the code is riddled with bugs. It’s that teams walk into audits unprepared. They hand auditors a repo with half-written tests, no clear documentation, and no articulation of what “secure” even means for their protocol. Auditors then spend valuable hours reverse-engineering intent instead of evaluating risk.
Audit firms like ConsenSys Diligence, Trail of Bits, and Halborn have all emphasized that audit prep can save weeks of back-and-forth. Hacken estimates that missing documentation alone can add 25–30% to audit duration.
The reality: a smart-contract audit is not the place to figure out your threat model, invariants, or upgrade plan. Those need to be defined before the audit begins, in what we'll call the pre-flight phase.
In this guide, we’ll cover six deliverables that consistently cut audit timelines by 1–2 weeks, while dramatically improving security posture:
Threat model document
Invariants list
Property-based tests
Privileged operations registry
Upgrade & rollback rehearsal
Documentation package
Each section includes not just what to do, but why it matters, how to do it, and a simple example. By the end, you’ll have a checklist your team can use before ever emailing an auditor.
1. Threat Model Document
Most teams skip this step because they assume the auditor will “find the threats.” But that’s like asking a doctor to diagnose you without describing your symptoms.
A threat model is a structured way to think about who your adversaries are, what they want, and how your system might fail. It doesn’t need to be a 40-page formal report, but it should at least cover:
Actors: Who interacts with your contracts? (users, admins, bots, oracles)
Assets: What’s valuable? (tokens, liquidity pools, governance power)
Attack vectors: What could go wrong? (reentrancy, flash loan manipulation, price oracle tampering, admin key misuse)
Mitigations: How have you attempted to address these risks?
Example (Simplified)
Actor: Liquidity provider
Asset: Pooled USDC
Threat: Reentrancy attack on `withdraw()`
Mitigation: `nonReentrant` modifier on function; transfer happens last
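To make the mitigation row concrete, here is a minimal sketch of a `withdraw()` guarded by OpenZeppelin's `nonReentrant` modifier and ordered checks-effects-interactions, with the external transfer last. The `Pool` contract and its variable names are hypothetical, and the `ReentrancyGuard` import path assumes OpenZeppelin v5 (v4 ships it under `security/`).

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Minimal sketch of the mitigation above. The Pool contract, its state
// variables, and the USDC wiring are hypothetical; only the pattern matters.
// Import path assumes OpenZeppelin v5; v4 uses contracts/security/ReentrancyGuard.sol.
import {ReentrancyGuard} from "@openzeppelin/contracts/utils/ReentrancyGuard.sol";
import {IERC20} from "@openzeppelin/contracts/token/ERC20/IERC20.sol";

contract Pool is ReentrancyGuard {
    IERC20 public immutable usdc;
    mapping(address => uint256) public deposits;

    constructor(IERC20 _usdc) {
        usdc = _usdc;
    }

    function withdraw(uint256 amount) external nonReentrant {
        // Checks: the caller must have enough deposited.
        require(deposits[msg.sender] >= amount, "insufficient deposit");
        // Effects: update internal accounting before any external call.
        deposits[msg.sender] -= amount;
        // Interactions: the token transfer happens last.
        require(usdc.transfer(msg.sender, amount), "transfer failed");
    }
}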
By writing the threat model down, you align auditors with your assumptions. If they see a gap (maybe you missed sandwich attacks), they’ll flag it faster.
Pro tip: Include your business model in the threat model. If your protocol earns fees, auditors should know how those are calculated and what risks might break them.
Trail of Bits highlights threat modeling as “the most effective way to focus audits on the risks that matter most”.
2. Invariants List
If a threat model describes what could go wrong, invariants describe what must always be true.
An invariant is a property of your contract that should hold under all conditions. Think of it as a mathematical truth about your system. For example:
The sum of all balances must equal total supply.
A user’s collateral ratio must always remain ≥ 1.1x.
Governance proposals cannot execute before a minimum delay.
Why it matters: auditors spend a lot of time trying to deduce what your invariants are. If you provide them, they can spend time testing violations instead of guessing intent.
Example (Solidity)
// Invariant: totalSupply equals sum of all balances
assert(totalSupply == balanceOf[user1] + balanceOf[user2] + ...);
Example (DeFi)
In MakerDAO’s contracts, a critical invariant is that the system must remain overcollateralized. If collateral < debt, the system is insolvent. Encoded as an invariant, this becomes testable.
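As a sketch of what “encoded as an invariant” can look like, here is a solvency check written against a hypothetical `IVault` interface; the interface and its function names are assumptions for illustration, not MakerDAO’s actual API.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Hypothetical interface; real systems expose collateral and debt differently.
interface IVault {
    function totalCollateralValue() external view returns (uint256);
    function totalDebt() external view returns (uint256);
}

contract SolvencyCheck {
    IVault public immutable vault;

    constructor(IVault _vault) {
        vault = _vault;
    }

    // Invariant: the system must remain overcollateralized at all times.
    function checkSolvency() external view returns (bool) {
        return vault.totalCollateralValue() >= vault.totalDebt();
    }
}

A fuzzer or invariant-testing framework can call `checkSolvency()` after every generated transaction sequence and flag any state in which it returns false.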
Formal methods researchers have shown that explicitly stating invariants reduces both audit time and post-deployment exploits.
3. Property-Based Tests
Most dev teams stop at unit tests: “does deposit() work with 100 tokens?” That’s good, but it only covers the scenarios you imagine.
Property-based testing goes further. Instead of fixed inputs, it generates random ones within constraints, then checks whether invariants still hold.
Why it matters: Auditors love to see property-based tests because they reveal edge cases you wouldn’t think of, like extreme integer values, rapid sequences of calls, or adversarial patterns.
Example (Echidna-Style Pseudocode)
property totalSupplyMatchesBalances():
    random user = fuzzAddress()
    random amount = fuzzUInt()
    assume(amount <= totalSupply)
    token.transfer(user, amount)
    assert(totalSupply == sumOf(allBalances))
Tools like Echidna (Trail of Bits) and Foundry’s invariant testing make this accessible. In fact, Echidna has uncovered real bugs even in well-audited projects.
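For illustration, here is a self-contained Echidna harness for the sum-of-balances invariant from section 2. The `SimpleToken` contract is a deliberately minimal stand-in defined inline so the example compiles on its own; in a real harness you would import your production token instead.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

// Deliberately minimal token so the harness is self-contained (hypothetical).
contract SimpleToken {
    uint256 public totalSupply;
    mapping(address => uint256) public balanceOf;

    constructor(uint256 supply) {
        totalSupply = supply;
        balanceOf[msg.sender] = supply;
    }

    function transfer(address to, uint256 amount) external returns (bool) {
        require(balanceOf[msg.sender] >= amount, "insufficient balance");
        balanceOf[msg.sender] -= amount;
        balanceOf[to] += amount;
        return true;
    }
}

contract TokenInvariantHarness {
    SimpleToken internal token;
    address[3] internal users;

    constructor() {
        token = new SimpleToken(1_000_000e18);
        users[0] = address(0x10000);
        users[1] = address(0x20000);
        users[2] = address(0x30000);
    }

    // Echidna fuzzes the arguments of public state-changing functions like this one.
    function fuzzTransfer(uint8 userIndex, uint256 amount) external {
        address to = users[userIndex % users.length];
        amount = amount % (token.balanceOf(address(this)) + 1);
        token.transfer(to, amount);
    }

    // Echidna checks properties prefixed with echidna_ after every call sequence.
    function echidna_totalSupplyMatchesBalances() external view returns (bool) {
        uint256 sum = token.balanceOf(address(this));
        for (uint256 i = 0; i < users.length; i++) {
            sum += token.balanceOf(users[i]);
        }
        return sum == token.totalSupply();
    }
}

Point Echidna at this file with its `--contract` flag and it will search for call sequences that make the property return false.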
Pro tip: Run these tests before code freeze. Auditors will often run their own fuzzers, but if you hand them results upfront, you’ve already de-risked weeks of exploratory testing.
4. Privileged Operations Registry
Every system has privileged functions: pausing transfers, minting tokens, adjusting fees. The problem is that these are often buried, undocumented, or inconsistently gated.
An operations registry lists every admin-level function, who can call it, under what conditions, and why it exists.
Example Table
| Function | Role | Conditions | Justification |
| --- | --- | --- | --- |
| `pause()` | Admin multisig | Only callable when risk flag set | Freeze protocol during exploit |
| `mint()` | Governance | Passed by 51% quorum | Mint tokens for rewards |
| `setFee()` | Admin multisig | 3-of-5 signers | Risk management lever |
Why it matters: Without this, auditors waste time discovering “hidden levers” and debating whether they’re intentional. With it, they can evaluate whether access control matches design.
A 2023 Chainalysis report noted that misuse of privileged roles accounted for over $1B in losses across DeFi protocols.
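To show how a registry like the table above maps onto code, here is a minimal sketch using OpenZeppelin's AccessControl and Pausable. The role names, the fee cap, and the constructor wiring are hypothetical, and the `Pausable` import path assumes OpenZeppelin v5 (v4 ships it under `security/`).

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {AccessControl} from "@openzeppelin/contracts/access/AccessControl.sol";
// Import path assumes OpenZeppelin v5; v4 uses contracts/security/Pausable.sol.
import {Pausable} from "@openzeppelin/contracts/utils/Pausable.sol";

contract Protocol is AccessControl, Pausable {
    bytes32 public constant PAUSER_ROLE = keccak256("PAUSER_ROLE");
    bytes32 public constant MINTER_ROLE = keccak256("MINTER_ROLE");
    bytes32 public constant RISK_ADMIN_ROLE = keccak256("RISK_ADMIN_ROLE");

    uint256 public feeBps;

    constructor(address adminMultisig, address governance) {
        _grantRole(DEFAULT_ADMIN_ROLE, adminMultisig);
        _grantRole(PAUSER_ROLE, adminMultisig);
        _grantRole(RISK_ADMIN_ROLE, adminMultisig);
        _grantRole(MINTER_ROLE, governance);
    }

    // Registry entry: pause(), admin multisig, freeze protocol during exploit.
    function pause() external onlyRole(PAUSER_ROLE) {
        _pause();
    }

    function unpause() external onlyRole(PAUSER_ROLE) {
        _unpause();
    }

    // Registry entry: setFee(), admin multisig, risk management lever.
    function setFee(uint256 newFeeBps) external onlyRole(RISK_ADMIN_ROLE) {
        require(newFeeBps <= 1_000, "fee too high"); // hypothetical 10% cap
        feeBps = newFeeBps;
    }
}

Every privileged function in the contract should trace back to a row in the registry; anything gated by a role but missing from the table is a red flag for you and the auditor alike.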
5. Upgrade & Rollback Rehearsal
Smart contracts are famously “immutable”, but in practice, most protocols use proxy patterns to enable upgrades. The question isn’t whether you can upgrade, but whether you can do it safely.
The fix: rehearse both upgrades and rollbacks in staging.
Example:
# Deploy upgrade
npx hardhat run scripts/upgrade.js --network staging
# Rollback to previous impl
npx hardhat run scripts/rollback.js --network staging
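The rehearsal is also where you catch storage-layout mistakes, a classic way upgrades go wrong. Below is a hedged sketch of a UUPS-style pair of implementations in which the new version only appends state variables; the contract and variable names are hypothetical, and the `__Ownable_init(address)` signature shown matches OpenZeppelin's upgradeable contracts v5.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {UUPSUpgradeable} from "@openzeppelin/contracts-upgradeable/proxy/utils/UUPSUpgradeable.sol";
import {OwnableUpgradeable} from "@openzeppelin/contracts-upgradeable/access/OwnableUpgradeable.sol";

contract VaultV1 is UUPSUpgradeable, OwnableUpgradeable {
    uint256 public totalDeposits; // existing state variable

    function initialize(address owner_) public initializer {
        __Ownable_init(owner_); // v5 signature; v4's __Ownable_init() takes no argument
        __UUPSUpgradeable_init();
    }

    // Only the owner (e.g. a multisig) may authorize an upgrade.
    function _authorizeUpgrade(address) internal override onlyOwner {}
}

contract VaultV2 is UUPSUpgradeable, OwnableUpgradeable {
    uint256 public totalDeposits; // unchanged: same order, same type
    uint256 public feeBps;        // new variables are only ever appended

    function _authorizeUpgrade(address) internal override onlyOwner {}
}

Tooling such as the OpenZeppelin Upgrades plugins can validate layout compatibility between the two implementations during the staging run, and the rollback rehearsal should confirm that pointing the proxy back at `VaultV1` leaves `totalDeposits` intact.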