class: title

# Code Review Best Practices
## Scott Rixner and Alan Cox
### Spring 2026

---
layout: true

---

## What Is Code Review?

* A **systematic process** where engineers other than the author examine code before it merges
* More than bug hunting — a technical conversation about **design, logic, and maintainability**
* The primary mechanism by which organizations ensure **code health improves over time**
* Used universally: Google, Meta, Mozilla, Linux — every serious software project

> "No code reaches production without being reviewed by another engineer."
> — Standard practice at Google, Amazon, Meta, and all major open-source projects

---

## Why Code Review Matters

* **Defect prevention:** Catches bugs, edge cases, and design flaws before they ship
* **Knowledge sharing:** Spreads understanding of the codebase across the team
* **Mentorship:** Teaches developers patterns and best practices
* **Collective ownership:** The team, not any individual, is responsible for the code
* **Consistency:** Keeps the codebase coherent in style, architecture, and approach

Studies show peer review catches roughly **twice** as many defects as individual testing alone.

---

# Part 1
## The Author's Perspective
### Writing code that reviewers will love

---

## Before You Send: Prepare Your Change

* **Review your own diff first** — take a break after writing, then read it as a reviewer would
* **Keep it narrowly scoped** — one logical change per patch; don't bundle unrelated fixes
* **Separate functional and non-functional changes** — don't mix a refactor with a behavioral change
* **Automate the easy stuff** — CI (Continuous Integration) checks, linters, and tests must all pass before requesting review

> "If every reviewer comment is about formatting, your automated tools aren't doing their job."

---

## How Large Should a Change Be?
* **Ideal:** Small enough for a reviewer to understand in one sitting
* **Google's rule of thumb:** When changes exceed ~400 lines of production code, look to split
* **Why small changes win:**
  * Easier to review thoroughly
  * Easier to roll back if something breaks
  * Complexity grows **exponentially** with lines touched, not linearly
* **Sequence for mixed changes:**
  1. Add tests first
  2. Refactor while keeping tests constant
  3. Then change behavior

---

## Writing a Great Patch Description

A patch description is a **public record of the change** — future engineers will read it.

* **What:** Summarize the major changes at a high level
* **Why:** Explain the motivation — what problem does this solve?
* **First line:** Short imperative summary (`Fix race condition in auth cache`)
* **Body:** Context, trade-offs considered, links to issues

```
Fix race condition in auth token cache

The token cache used a read lock during the TTL check but a
write lock during invalidation. Switch to a single write lock
for the check-and-invalidate operation.

Fixes: #4421
```

---

## Handling Review Feedback

* **Respond graciously** — treat feedback as discussion about the code, not your worth
* **Answer questions with code, not comments** — if the reviewer is confused, refactor; don't just explain
* **Communicate explicitly** — for every comment: fix it, explain why you won't, or ask a follow-up
* **Minimize lag** — quick iteration cycles are better for everyone; don't let reviews go cold
* **When in doubt, defer to the reviewer** — they may see things you've become blind to

> A subtle flaw spotted by a reviewer is actually a **good sign** — it means the obvious issues are gone.

---

## The Author's Golden Rule

**Value your reviewer's time.**

Reviewers arrive each day with a finite supply of focus. Every minute they spend on formatting, context-hunting, or guessing your intent is a minute not spent on logic and design.
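One concrete way to respect that time is to run the same checks CI will run before ever requesting review; for example (the lint and test commands shown are illustrative, substitute your project's own tools):

```
# Run before requesting review; tool choices are illustrative
ruff check . && pytest -q && git diff --check
```

`git diff --check` flags stray whitespace errors, so formatting nits never reach a human reviewer.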
* Write a clear, meaningful description
* Keep changes small and focused
* Do your own review before sending
* Respond promptly and thoughtfully

---
class: middle

## Discussion: Author Practices

Which of these practices did you follow when you created a patch within the context of a group project?

---

# Part 2
## The Reviewer's Perspective
### How to review code effectively and constructively

---

## What to Look For: The Reviewer's Checklist

* **Design** — Is this well-designed? Does it belong in the codebase or a library?
* **Functionality** — Does it do what the author intended? Any edge cases or race conditions?
* **Complexity** — Could this be simpler? Could another developer understand it quickly?
* **Tests** — Are there correct, well-designed automated tests?
* **Naming** — Are variables, methods, and classes named clearly?
* **Comments** — Do comments explain *why*, not *what*?
* **Style** — Does the code follow the team's style guide?
* **Documentation** — Is relevant documentation updated?

---

## The Standard of Code Review

The goal is **continuous improvement**, not perfection.

> "There is no such thing as 'perfect' code — there is only *better* code."
> — Google Engineering Practices Guide

* Approve a patch once it **definitely improves** overall code health, even if imperfect
* Don't block a good change because it could theoretically be even better
* The reviewer has **shared ownership** of the code they approve

---

## How to Write Good Review Comments

.twocolumn[
.col[
### Bad

*"Why did **you** use threads here when there's obviously no benefit from concurrency?"*

* Personal ("you")
* Asserts without explaining
* No alternative offered
]
.col[
### Good

*"The concurrency model adds complexity without any visible performance benefit. Single-threading would be simpler and safer here."*

* About the code, not the person
* Explains the reasoning
* Suggests an alternative
]
]

---

## Response Time

Speed matters — not just quality.
* **Maximum:** One business day to respond to a review request
* **Don't interrupt** focused coding for reviews — wait for a natural break point
* **Quick acknowledgment** beats silence, even if the full review takes longer

> "Most complaints about the code review process are actually resolved by making the process **faster**."
> — Google Engineering Practices Guide

Slow reviews decrease team velocity, discourage refactoring, and create pressure to approve substandard code.

---

## Navigating a Large Patch

1. **Take a broad view first** — read the description and understand the goal before diving in
2. **Find the main part** — the file with the most logical changes; review it first
3. **Send major design comments immediately** — don't finish everything before flagging a fundamental problem
4. **Look through the rest in sequence** — reading tests first helps establish intent
5. **Review every line** — don't skim human- or AI-written logic and assume it's fine

> If *you* can't understand the code, other developers won't either — ask for clarification.

---

## Handling Pushback

* **Don't back down from correctness under social pressure** — "I prefer it this way" is not a technical argument
* **Do change your mind** when the author provides new information or a better argument
* **If still stuck:** Escalate to a tech lead, module owner, or engineering manager

| Signal | Response |
| :--- | :--- |
| Author disagrees, no new information | Politely hold your position |
| Author provides a new technical argument | Reconsider genuinely |
| Irresolvable stalemate | Escalate — don't let it sit |

---
class: middle

## Discussion: Reviewer Practices

Which of these practices did you follow when you reviewed a patch within the context of a group project?

---

# Part 3
## Proprietary versus Open-Source Practices
### Google versus Mozilla

---

## Google Engineering Practices Guide

**North-star principle:** Approve once the change *definitely improves* code health — even if imperfect.
* **Tool-agnostic** — written for any review system; internally uses *Critique*
* **Tactically deep** on mechanics: navigating large CLs (changelists), when to split, how to escalate
* **"Small CLs"** is the longest page — treated as the highest-leverage practice in the guide
* **Team velocity framing** — slow reviews hurt the whole team, not just the author
* Covers the **reviewer and author** perspectives in equal depth

---

## Mozilla Code Review Guidelines

**Core philosophy:** Approving a review makes you *the second person responsible* for the code.

* **Formal gatekeeper model** — every patch needs sign-off from the **module owner** or a designated peer
* **Broader review scope** — explicitly checks security, localization, accessibility, performance, and licensing
* Unique **"Necessary vs. Sufficient"** two-axis evaluation
* Strong emphasis on **inclusive community culture** — tone, assumptions, welcoming contributors
* Written for **Phabricator** (archived 2021); addresses staff engineers *and* volunteer contributors

---

## Mozilla: Review Dimensions and the Necessary/Sufficient Test

* **Goal** — Does this fix the *actual* underlying problem?
* **Security** — Input sanitization, fuzz-testing, static analysis
* **Localization / Accessibility / Performance** — Explicit checklist items, not afterthoughts
* **License** — Does new code follow Mozilla's licensing rules?

**Two-axis evaluation:**

* **Necessary:** Are *all* changes required? Unrelated changes → separate patch.
* **Sufficient:** Does it fix the issue *completely*? What is **not** in the patch?

---

# Part 4
## Review Workflows
### How the process shapes the review

---

## Pull Request (PR) Workflow

The default for most teams and open-source projects (GitHub, GitLab).
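In day-to-day terms, the round trip is a handful of commands; for instance, using GitHub's `gh` CLI (the PR number and review comment here are illustrative):

```
gh pr create --fill                    # open a PR from the current branch
gh pr review 1234 --request-changes \
    --body "Single-threading would be simpler here."
gh pr merge 1234 --squash              # land it once approved
```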
* Developer pushes a **branch**, opens a PR; reviewers notified
* Review in the **web UI** with inline diff comments
* PR states: open → changes requested → approved → merged
* CI runs automatically on every push
* Low friction for **external contributors** — fork and submit

**Limitations:**

* Large or stacked changes are **awkward** to manage
* All discussion and history is **siloed on the hosting platform**

---

## Dedicated Code Review Tools

**Phabricator** (Facebook, 2011 — archived 2021)

* Integrated code review (*Differential*), task tracking, and repo browsing
* Example: https://reviews.freebsd.org/D55525
* Used by Mozilla, Wikimedia, Dropbox, and FreeBSD

**Strengths:**

* Review history lived **inside the tool**; stacked diffs were first-class
* Unified task, review, and repo browsing in one interface

**Modern replacements:** Gerrit (Android, ChromiumOS), Review Board (various enterprises)

---

## Email-Based Workflow

The Linux kernel uses **mailing lists and plain-text email** — no web UI at all.
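A typical kernel submission happens entirely from the command line, sketched here with standard git commands (the mailing-list address is just an example):

```
git commit -s             # -s appends your Signed-off-by: tag
git format-patch -1       # turn the commit into an emailable patch
git send-email --to=netdev@vger.kernel.org 0001-*.patch
```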
* Patches posted to the mailing list; developers reply with inline comments
* No hosting platform, no graphical diff — everything is `git send-email`

**Metadata tracked via patch email tags:**

| Tag | Meaning |
| :--- | :--- |
| `Signed-off-by:` | Author certifies right to submit the patch |
| `Acked-by:` | Another developer agrees with the approach |
| `Tested-by:` | Confirmed working by the tagger |
| `Reviewed-by:` | Reviewed for correctness |

---

## Email-Based Review Philosophy

**Review is a property of the patch, not the platform.**

* Review history and approval trail are **embedded in commit messages** and mailing list archives
* Truly portable and decentralized — no dependency on any hosting service
* Patches are almost **never accepted on first submission** — iteration is expected and normal
* Ignoring reviewer comments is a reliable way to get ignored in return

This contrasts with PR-based workflows, where review metadata is **siloed on the hosting platform** and disappears if you migrate services.

---
class: middle

## Discussion: Workflow Trade-offs

What are the trade-offs between these different workflows?

---

# Part 5
## Key Takeaways
### What every good code review process has in common

---

## Universal Principles

Across Google, Mozilla, Linux, and every effective engineering team:

* **Small, focused changes** are universally preferred — easier to review and easier to revert
* **Response time matters** — most code review complaints resolve when reviews get faster
* **Comment on code, not people** — this appears in virtually every guide
* **The reviewer shares responsibility** for approved code — accountability on both sides
* **Automated checks are not optional** — CI must pass before human review begins
* **Iteration is normal** — patches rarely land on the first submission

---

## The One Thing Each Guide Gets Right

* **Google:** The *velocity* framing. Code review speed is a team concern.
Every hour a PR sits waiting costs the whole team, not just the author.
* **Mozilla:** The *breadth* of review. Explicitly checking security, accessibility, localization, and licensing elevates review from a code quality gate to a product quality gate.
* **Linux Kernel:** The *portability* of review history. Embedding review evidence in commits and mailing list archives means the record survives any platform change.

---

## Setting Up a Strong Code Review Culture

1. **Define your standard** — write down what "approved" means
2. **Automate the easy stuff** — formatting and basic correctness belong in CI
3. **Set a response SLA** — one business day is the industry standard
4. **Keep PRs small** — split large changes before requesting review
5. **Require meaningful descriptions** — "fix bug" is not acceptable
6. **Celebrate good reviews** — reviewing is as valuable as writing code
7. **Iterate on your process** — investigate when reviews are slow or contentious

---
class: middle

## Discussion: AI and Code Review

How do you think AI will impact the code review process?