Rust Code Review: Best Practices, Tools, and Checklist

Rust makes you feel smart. Code compiles, borrow checker’s happy, no red squiggles in sight. Then someone merges a change that technically compiles—but introduces a silent bug, an unnecessary clone(), or worse: a data race in unsafe code.

If you think Rust’s compiler replaces reviews, you’re already setting your team up for tech debt.

This isn’t a checklist post. This is a field guide for reviewing Rust code like someone who’s been burned before.

The compiler is a safety net, not a code reviewer

Let’s be clear: the Rust compiler does a phenomenal job catching things that would require whole teams of static analysis engineers in other languages. But it only catches what it’s designed to catch. And that’s mostly syntactic and memory safety violations, not business logic, API design, or maintainability problems.

A reviewer’s job in Rust is to catch the things the compiler can’t see:

  • Is this use of Arc<Mutex<T>> hiding unnecessary shared state?
  • Are we cloning data just to satisfy the borrow checker?
  • Did someone write unsafe code without documenting the contract it assumes?

One of the worst bugs I saw last year was caused by an unsafe block inside a hot code path that assumed an FFI buffer was aligned—except on Windows, it wasn’t. The compiler didn’t care. Reviewers missed it. Took two weeks to debug the resulting memory corruption.
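
That experience is why I now ask for a cheap guard whenever an unsafe block leans on an alignment assumption. Here's a minimal sketch of the kind of check I want to see; the function and its buffer contract are hypothetical stand-ins for the real FFI surface:

/// Writes `value` into an FFI-provided buffer.
///
/// # Safety
/// `ptr` must point to at least `offset + 1` writable `u64` slots.
unsafe fn write_slot(ptr: *mut u64, offset: usize, value: u64) {
    // Guard the alignment assumption instead of hoping it holds.
    // Some platforms (our bug was Windows-only) hand back FFI
    // buffers with weaker alignment than you expect. debug_assert!
    // keeps the hot path free in release builds while still
    // catching the violation in tests.
    debug_assert!(
        (ptr as usize) % std::mem::align_of::<u64>() == 0,
        "FFI buffer is not aligned for u64"
    );
    unsafe { *ptr.add(offset) = value };
}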

Things I always look for in a Rust review

Every reviewer develops a mental model. Mine centers on five areas: ownership semantics, unsafe code, abstraction clarity, performance cost, and idiomatic usage.

Here’s how I think about them:

1. Ownership and lifetimes are first-class citizens

I never assume ownership semantics are correct just because the code compiles. If a reviewer just sees clone() and moves on, they’re missing the problem. clone() is the duct tape of Rust. Sometimes it’s necessary. Most of the time it’s a sign someone gave up on borrow checking.

I ask:

  • Why is this data being cloned?
  • Can this borrow be rewritten?
  • Could we use Cow here? (see the sketch after this list)
  • Does this lifetime annotation still make sense after the last refactor?
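
When the answer to the Cow question is yes, the fix is often just a signature change. A minimal sketch, with the normalization rule invented for illustration:

use std::borrow::Cow;

// Borrows when the input is already fine, allocates only when it isn't.
fn normalize(input: &str) -> Cow<'_, str> {
    if input.contains(' ') {
        Cow::Owned(input.replace(' ', "_"))
    } else {
        Cow::Borrowed(input)
    }
}

The common path pays nothing; only the path that actually needs a modified string allocates.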

2. Unsafe code needs a written contract

If there’s one rule I enforce consistently: unsafe code should not exist without an inline comment explaining its safety contract.

Not what the code does. Not what it intends to do. But why it’s safe, what invariants are assumed, and under what conditions it would break.

I’ve seen code like this way too often:

unsafe {
    *ptr.add(offset) = value;
}

No comment. No check. The implicit contract is “I hope offset is valid.” That’s unacceptable in a review.

Whenever I see unsafe, I pause and ask: is this the only way? If yes, document why this is safe. Period.
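
Here is the same write with the contract spelled out the way I expect to see it. The invariants named in the comment are illustrative (alloc_buffer is a hypothetical helper), but the shape is the point:

// SAFETY: `ptr` came from `alloc_buffer(len)`, which guarantees
// `len` writable, u64-aligned slots, and the caller has already
// checked `offset < len`. If either invariant changes, this write
// is undefined behavior.
unsafe {
    *ptr.add(offset) = value;
}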

3. Traits and abstractions should earn their place

Rust makes it very easy to create traits. Too easy. But every trait adds complexity. If the trait isn’t needed for generic use, async polymorphism, or test mocking, I push back.

I saw a codebase last year with a trait that had a single implementor. That trait existed purely to allow dependency injection, but it ended up hiding real implementation bugs behind indirection.

You don’t need a trait for everything. Rust isn’t Java. Simpler is better.
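
The shape I push back on looks like this (names invented for illustration):

// A trait with exactly one implementor, added "for testability".
trait UserStore {
    fn get(&self, id: u64) -> Option<String>;
}

struct PgUserStore;

impl UserStore for PgUserStore {
    fn get(&self, _id: u64) -> Option<String> {
        None // stand-in for a real query
    }
}

Unless a second implementor or a genuine test seam exists today, I ask for the concrete type. The trait can always be extracted later, when something actually needs it.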

4. Watch what the compiler doesn’t see: performance by design

Clippy won’t warn you about the performance cost of holding a lock too long or calling .collect::<Vec<_>>() on a 100,000-element iterator when a streaming approach would do.
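
The collect case is worth spelling out, because both versions pass review at a glance. A sketch, with the transformation invented for illustration:

fn total(items: &[u64]) -> u64 {
    // Anti-pattern: materializes every intermediate value in a Vec
    // before summing; the allocation is pure overhead here.
    // items.iter().map(|x| x * 2).collect::<Vec<_>>().iter().sum()

    // Streaming: one pass, no intermediate allocation.
    items.iter().map(|x| x * 2).sum()
}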

If a piece of code is in a hot path, I ask:

  • Are we cloning inside a loop?
  • Is this Mutex lock held longer than necessary? (sketched after this list)
  • Are we converting back and forth between types just to appease the type system?
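
For the lock question, the fix is usually just reordering. A sketch, assuming shared stats behind a Mutex and a stand-in for the slow work:

use std::sync::Mutex;

fn record(stats: &Mutex<Vec<u64>>, value: u64) {
    // Bad: the lock would be held across the slow computation,
    // blocking every other thread that needs `stats`.
    // let mut guard = stats.lock().unwrap();
    // let payload = slow_transform(value); // lock still held here
    // guard.push(payload);

    // Good: do the slow work first, hold the lock only for the push.
    let payload = value * 2; // stand-in for slow_transform
    stats.lock().unwrap().push(payload);
}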

The worst performance regressions I’ve reviewed all passed tests, passed CI, passed Clippy—and then killed our latency SLAs in production.

How I actually conduct reviews in real teams

Every team has a different culture, but here’s the workflow that’s worked for me across multiple orgs:

Start with the why, not the what

Before reading a single line, I look at the pull request description. If there’s no context—why this change exists—I don’t even begin. Code doesn’t exist in a vacuum. Knowing whether we’re optimizing for performance, maintainability, or correctness changes what you look for.

Review in layers

  1. High-level design: Does this module need to exist? Should this be a trait or a function?
  2. Correctness: Do lifetimes align? Is ownership sound? Are all Result paths handled?
  3. Idioms and readability: Could this be a match instead of if let (see the sketch after this list)? Is this name meaningful?
  4. Performance: Any unnecessary heap allocations? Clone abuse? Blocking async code?
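
On layer 3, the match-versus-if-let question is really about exhaustiveness. A small illustration:

fn describe(status: Option<u32>) -> &'static str {
    // `if let Some(0) = status { ... } else { ... }` lumps every other
    // case together; `match` makes the reviewer, and the compiler,
    // look at all of them.
    match status {
        Some(0) => "ok",
        Some(_) => "error",
        None => "missing",
    }
}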

Don’t nitpick what Clippy can catch

Rust gives us Clippy and rustfmt. I treat them as mandatory. I don’t comment on spacing, redundant clones, or shadowed variables—because Clippy can do that better than I can.
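
In practice, "mandatory" means CI fails when they complain. The standard strict invocations (adjust the flags to your project's taste):

cargo clippy --all-targets --all-features -- -D warnings
cargo fmt --all -- --check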

Reviews should be about intent and correctness, not formatting.

Why AI review tools like Bito actually work in Rust

I was skeptical about AI reviews. Most tools I’ve tried treat Rust like just another language—they get tripped up by lifetimes, miss unsafe nuances, and ignore the actual semantics of ownership.

Bito’s AI Code Review Agent surprised me.

It caught subtle issues that even senior reviewers missed. In one case, it flagged a pattern where a reference was stored in a struct without lifetimes being enforced properly. The compiler allowed it (due to 'static being inferred incorrectly), but it was a memory leak waiting to happen. The AI spotted it, and we fixed it before release.
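
Reconstructed from memory with invented names, the bug looked roughly like this: a 'static annotation made the struct compile, a Box::leak satisfied it, and every call leaked:

struct Session {
    // 'static was the easy way to make this compile; it forces
    // every token to live forever.
    token: &'static str,
}

fn new_session(raw: String) -> Session {
    // Compiles cleanly, but Box::leak never frees the allocation.
    // One leaked string per session.
    Session { token: Box::leak(raw.into_boxed_str()) }
}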

What Bito does well:

  • Understands the structure of lifetimes and borrows
  • Surfaces unsafe blocks with context, not just location
  • Identifies non-idiomatic but legal patterns (like .unwrap() in error handlers)

No AI tool replaces human judgment. But Bito’s value is in consistency—it always checks everything, even when reviewers get tired. That’s where it shines.

Reviewing Rust is a skill, not a checkbox

A good Rust code review takes time. You’re not just checking types—you’re validating contracts, uncovering assumptions, and protecting future maintainers from the hidden costs of abstraction.

I’ve reviewed PRs where one innocent-looking change introduced a performance bottleneck due to a misused Arc, and another where an unnecessary trait made error propagation painful across five modules. Both compiled fine. Neither had tests that would’ve caught the issues.

That’s why we review Rust code like professionals. Not because we don’t trust the compiler—but because we do trust our systems to do important work, and that trust is earned line-by-line.

Sarang Sharma

Sarang Sharma is a Software Engineer at Bito with a robust background in distributed systems, chatbots, large language models (LLMs), and SaaS technologies. With over six years of experience, Sarang has demonstrated expertise as a lead software engineer and backend engineer, primarily focusing on software infrastructure and design. Before joining Bito, he significantly contributed to Engati, where he played a pivotal role in enhancing and developing advanced software solutions. His career began with foundational experiences as an intern, including a notable project at the Indian Institute of Technology, Delhi, to develop an assistive website for the visually challenged.

Amar Goel

Amar is the Co-founder and CEO of Bito. With a background in software engineering and economics, Amar is a serial entrepreneur and has founded multiple companies including the publicly traded PubMatic and Komli Media.

Written by developers for developers

This article was handcrafted with ❤️ by the Bito team.
