The Redis Key an LLM Got Wrong and AI Architect Got Right

Redis cache keys in a multi-repo architecture are rarely simple strings. They get assembled across shared libraries, transformed through multiple abstraction layers, and the function that produces the final key often lives in a completely different repository from the service that uses it. Getting the key format wrong means silent cache misses: every request falls through to the database while monitoring shows the cache as healthy. 

A developer working in a 450-repo Go codebase asked for the Redis key format for restaurant ratings. An LLM explored the application service and returned an answer from the wrong abstraction layer. AI Architect, using its context engine and the Codebase Explorer Skill (a methodology for tracing values across repositories to their storage boundary), traced the correct key across two repositories and four abstraction layers, verifying every segment from source code. 

Without AI Architect: the LLM returned RESTAURANT#<restaurantId>#<algoName>. Wrong delimiter, missing prefix, stopped at the DynamoDB partition key layer. 

With AI Architect: the LLM returned rnr~pk_id~RESTAURANT#<restaurantId>~sk_id~<algoName>. Every segment traced from source code and verified. 

The scenario 

A leading restaurant information service processes millions of rating and review events daily. Their Ratings & Reviews (rnr) microservice is a Go codebase that stores aggregated restaurant ratings in DynamoDB with a Redis cache layer in front. 

The architecture spans approximately 450 repositories. Shared data-access libraries handle the translation between application-level keys and storage-level keys, meaning the logic that formats a Redis key lives in a completely different repository from the service that uses it. 

A developer asks: “What is the Redis key format for restaurant ratings in our rnr service?” 

Answering it correctly requires tracing a value across two repositories, through four abstraction layers, and distinguishing between three separate cache technologies. A straightforward question with a surprisingly deep answer. 

Why this matters 

This is not an edge case. Any real production codebase has the same characteristics that cause an LLM to reason from incomplete context and get the answer wrong: 

  • Shared libraries and DALs that transform values between the application layer and storage. The critical formatting logic lives in a different repository from the one the developer is asking about. 
  • Multiple cache layers with different technologies behind similar-looking interfaces. A type called RedisDDB could wrap either technology. Only reading the function body reveals the truth. 
  • Cross-repository dependencies where the key transformation happens in code the LLM has never seen and cannot access without indexed search. 
  • Naming that misleads. A # character in a partition key value looks like it could be the cache key delimiter, but the actual storage boundary uses a ~ (tilde). 

Without the AI Architect context engine and structured skills, an LLM finds a plausible answer at the wrong layer and presents it with full confidence. With them, the LLM follows the same rigorous trace a senior engineer would, across repositories, through abstraction layers, down to the actual network call, and produces an answer that is verifiable and correct. 

The following table summarizes the full gap between the two approaches; each row maps to a specific step in the trace covered later in this post. 

Dimension                 | LLM without AI Architect                          | LLM with AI Architect + Skills
Answer                    | RESTAURANT#<restaurantId>#<algoName>              | rnr~pk_id~RESTAURANT#<restaurantId>~sk_id~<algoName>
Repos read                | 1 (application service only)                      | 2 (application service + shared DAL library)
Layers traced             | 1 (partition key construction)                    | 4 (handler, wrapper, CachedSession, Redis pipeline)
Client verified           | No, assumed from type name                        | Yes, read function body, confirmed Redis pipeline call
Adjacent results excluded | No, in-process cache key confused with Redis key  | Yes, 3 separate cache layers explicitly ruled out
Verification gate         | None                                              | The One Test: 3 mandatory lines completed from read code

What an LLM gets wrong 

When a developer asks an LLM about Redis keys in this codebase, the LLM encounters several plausible-looking patterns and picks the wrong one. 

// internal/db/ddb/reviews.go:396 
func getPartitionKeyForAggregatedRatings(entity *rnrv1.Entity) string { 
  return db.GetNameFromEntityType(entity.GetEntityType()) 
      + HashDelimiter + entity.GetEntityId() 
} 

It resolves the components: 

  • GetNameFromEntityType(ENTITY_TYPE_RESTAURANT) returns “RESTAURANT” 
  • HashDelimiter = “#” 
  • entity.GetEntityId() = the restaurant ID 

The LLM then confidently reports the Redis key as RESTAURANT#<restaurantId>#<algoName>. 

This is wrong. The # character is the DynamoDB composite key delimiter used within the partition key value. The actual Redis key uses ~ (tilde) as the separator and includes the table name and field names as prefixes. 
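To make the contrast concrete, here is a minimal Go sketch of the two constructions described above. The constant and function names are hypothetical stand-ins; the real ones live in the rnr service and the shared DAL.

```go
package main

import "fmt"

// Hypothetical constants mirroring the ones described in the post.
const (
	hashDelimiter = "#" // DynamoDB composite-key delimiter, used *inside* the PK value
	keyDelimiter  = "~" // delimiter used at the actual Redis storage boundary
)

// partitionKey builds the DynamoDB partition key value, the string the
// LLM mistook for the Redis key.
func partitionKey(entityType, entityID string) string {
	return entityType + hashDelimiter + entityID
}

// redisKey builds the final cache key: table name plus field-name/value
// pairs, all joined with the tilde delimiter.
func redisKey(table, pkField, pkValue, skField, skValue string) string {
	return table + keyDelimiter + pkField + keyDelimiter + pkValue +
		keyDelimiter + skField + keyDelimiter + skValue
}

func main() {
	pk := partitionKey("RESTAURANT", "42")
	fmt.Println(pk)                                          // RESTAURANT#42
	fmt.Println(redisKey("rnr", "pk_id", pk, "sk_id", "v2")) // rnr~pk_id~RESTAURANT#42~sk_id~v2
}
```

The # never disappears; it survives inside the PK value as part of the larger tilde-delimited key, which is exactly why it looks so plausible as the top-level delimiter.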

Why the LLM gets it wrong 

The root cause is stopping too early. The LLM finds a plausible-looking answer at the wrong abstraction layer and treats it as the final answer. 

Failure mode           | What happens
Stops at the wrong layer | Finds the partition key construction (RESTAURANT#<id>) and assumes this is the Redis key. Never traces through the CachedSession wrapper that transforms it into the actual Redis key.
Misses cross-repo code | The final Redis key is assembled by generateCacheKey() in a shared DAL library, a separate repository. The LLM has no way to discover or read this code.
Confuses cache layers  | The codebase has three cache layers: BigCache (in-process), GoCache (in-process), and Redis (network). The LLM cannot distinguish which client actually crosses the Redis boundary.
No verification        | The LLM has no way to confirm its answer. It cannot read the client's function body to verify it actually makes a Redis network call versus an in-process memory operation.

How AI Architect + Skills get it right 

Bito AI Architect provides two things that eliminate context-blind answers on codebase questions: a context engine that can search and read code across the entire organization, and skills that instruct the LLM exactly how to use it. 

The context engine 

AI Architect’s indexed search tools give the LLM the ability to dynamically traverse code paths across repositories. The LLM can search for symbols across all indexed repositories, read function bodies at specific file and line locations, follow imports and dependencies into shared libraries, and discover repository relationships automatically. This replaces the trial-and-error approach of grep search, glob patterns, and file reads that a general-purpose LLM relies on. 

This means the LLM can trace a value from its construction site in the application service all the way through the shared DAL library to the actual Redis client call, exactly like a senior engineer would. 

The Codebase Explorer Skill 

The Codebase Explorer Skill instructs the LLM on how to trace values across repositories to their actual storage boundary. It provides a structured methodology called The One Test that forces the LLM to complete three verification lines before producing any output: 

  • Line 1, The Scalar: “The exact value that crosses the storage boundary is: ___” 
  • Line 2, The Producer: “I read the function at [file:line] that produces this scalar.” 
  • Line 3, The Client: “I read the function body of [client method] at [file:line] and confirmed it issues a [protocol] call, not an in-process operation.” 

The skill also includes anti-pattern tables, anti-rationalization gates, and explicit instructions for handling cross-repository DAL libraries. For instance, if the LLM encounters a shared serialization library, the skill instructs it to search for and read the serialization function rather than hand-waving with “the library handles it internally.” 

The actual trace: step by step 

Here is the exact sequence AI Architect followed to produce the correct answer. 

Step 1: map all Redis access paths 

The skill requires mapping every way the codebase reaches Redis before searching for specific keys. AI Architect found three cache layers: 

  • Direct Redis: internal/cache/redis/ using go-redis/v9 (used for reviews pagination, not ratings) 
  • CachedSession (Redis + DynamoDB): internal/db/ddb/redis_kv.go wrapping the shared DAL library (used for ratings) 
  • BigCache: internal/cache/bigcache/ (in-process L1 cache, not Redis) 

Step 2: follow the transformation chain 

Starting from the GetEntityRating() function in the ratings handler: 

  • getPartitionKeyForAggregatedRatings(entity) produces RESTAURANT#<id>, the DynamoDB partition key value, not the Redis key 
  • getCompositeKey(partitionKey, sortKey) wraps this into a CompositeKey struct with field names pk_id and sk_id 
  • The struct is passed to RedisDDBClient.BatchGetWithContext(), which delegates to CachedSession in the shared DAL library 
  • Inside batchGetHelper(), the key is transformed by generateCacheKey(tableName, compositeKey), a function that lives in the shared DAL, a separate repository 
  • generateCacheKey builds the final string using ~ as the delimiter, concatenating: table name, PK field name, PK value, SK field name, SK value 
  • The result is passed to pipe.Get(ctx, cacheKey), a Redis pipeline GET command, confirming this is the actual Redis key 
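The transformation at the heart of the chain can be sketched as follows. This is a minimal reconstruction of what the trace describes, not the shared DAL's actual code; the CompositeKey fields and the generateCacheKey signature are assumptions based on the steps above.

```go
package main

import (
	"fmt"
	"strings"
)

// CompositeKey mirrors the struct described in the trace: DynamoDB
// field names and values for the partition and sort key.
type CompositeKey struct {
	PKField, PKValue string // e.g. "pk_id", "RESTAURANT#<id>"
	SKField, SKValue string // e.g. "sk_id", "<algoName>"
}

const keyDelimiter = "~" // the shared DAL's delimiter constant

// generateCacheKey sketches the shared-DAL transformation: table name,
// then field-name/value pairs, joined with the tilde delimiter.
func generateCacheKey(tableName string, ck CompositeKey) string {
	return strings.Join([]string{
		tableName, ck.PKField, ck.PKValue, ck.SKField, ck.SKValue,
	}, keyDelimiter)
}

func main() {
	ck := CompositeKey{"pk_id", "RESTAURANT#98765", "sk_id", "default"}
	fmt.Println(generateCacheKey("rnr", ck))
	// rnr~pk_id~RESTAURANT#98765~sk_id~default
}
```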

Step 3: verify the client 

The skill required reading the actual function body of batchGetHelper to confirm it calls s.redis.Pipeline() and pipe.Exec(ctx), a real Redis network call via go-redis/v9, not an in-process cache operation. This is Line 3 of The One Test. Without this verification, the LLM could have been looking at a wrapper that routes to an in-process cache internally. The type name alone does not prove the technology. 
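A small illustration of why that verification step exists. The interface and types below are hypothetical, but they show how a Redis-sounding client can satisfy the same interface as an in-process one; nothing at the call site reveals which boundary Get crosses.

```go
package main

import "fmt"

// Cache is a hypothetical interface; any client below satisfies it,
// so a caller cannot tell from the type or method name alone whether
// Get crosses a network boundary.
type Cache interface {
	Get(key string) (string, bool)
}

// RedisDDBClient *sounds* like it talks to Redis, but this sketch shows
// it could just as easily wrap an in-process map. Only reading the
// function body settles the question.
type RedisDDBClient struct {
	local map[string]string
}

func (c *RedisDDBClient) Get(key string) (string, bool) {
	v, ok := c.local[key] // in-process lookup, no network call
	return v, ok
}

func main() {
	var c Cache = &RedisDDBClient{local: map[string]string{
		"rnr~pk_id~RESTAURANT#42~sk_id~v2": "4.7",
	}}
	v, ok := c.Get("rnr~pk_id~RESTAURANT#42~sk_id~v2")
	fmt.Println(v, ok) // 4.7 true
}
```

In the real trace, reading batchGetHelper and finding s.redis.Pipeline() and pipe.Exec(ctx) is what proved the go-redis client actually goes over the network.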

The correct answer 

Verified Redis key format: rnr~pk_id~RESTAURANT#<restaurantId>~sk_id~<algoName> 

Segment       | Value            | Source
Table name    | rnr              | config/default.yml, runtime configuration
PK field name | pk_id            | constants.go, DynamoDB partition key attribute name
PK value      | RESTAURANT#<id>  | reviews.go, utils.go, entity type + delimiter + entity ID
SK field name | sk_id            | constants.go, DynamoDB sort key attribute name
SK value      | <algoName>       | ratings.go, algorithm name passed from handler
Delimiter     | ~                | convert.go (shared DAL), keyDelimiter constant

Conclusion 

AI Architect gives the LLM a methodology for reasoning about code, not just access to it. The context engine provides the ability to explore across repositories and abstraction layers. The skills provide the discipline to explore correctly, verifying each step before moving to the next. 

The difference between RESTAURANT#<restaurantId>#<algoName> and rnr~pk_id~RESTAURANT#<restaurantId>~sk_id~<algoName> is the difference between a confident guess at the wrong abstraction layer and a verified answer traced from source code. In a production system processing millions of events daily, that difference matters. 

If your team is asking LLMs questions about a large, multi-repository codebase and accepting answers without a verification trace, this is the gap you are living with. 

Set up AI Architect on your own codebase, or try the Codebase Explorer Skill on ours. 


Anand Das

Anand is Co-founder and CTO of Bito. He leads technical strategy and engineering, and is our biggest user! Formerly, Anand was CTO of Eyeota, a data company acquired by Dun & Bradstreet. He is co-founder of PubMatic, where he led the building of an ad exchange system that handles over 1 trillion bids per day.


Amar Goel

Amar is the Co-founder and CEO of Bito. With a background in software engineering and economics, Amar is a serial entrepreneur and has founded multiple companies including the publicly traded PubMatic and Komli Media.

Written by developers for developers

This article is brought to you by the Bito team.
