Volume Configuration
How to declare data volumes for database entities and how volume tiers affect performance rule severity, merge gates, and scan results in Technical Debt Radar.
Volume declarations tell Technical Debt Radar how much data each database entity holds. This drives severity scaling for performance rules -- an unbounded findMany() on a 500-row config table is fine, but the same pattern on a 10-million-row events table is a production incident waiting to happen.
```yaml
# radar.yml
data_volumes:
  orders: L
  events: XL
  users: M
  companies: S
  measurements: XXL
```
Volume Tiers
| Tier | Row Estimate | Row Range | Effect on Performance Rules |
|---|---|---|---|
| S | < 10K | 0 -- 10,000 | Most performance warnings suppressed |
| M | 10K -- 100K | 10,000 -- 100,000 | Warnings shown in PR comment |
| L | 100K -- 1M | 100,000 -- 1,000,000 | Severity elevated to Critical |
| XL | 1M -- 50M | 1,000,000 -- 50,000,000 | Blocks merge |
| XXL | > 50M | 50,000,000+ | Blocks merge |
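The tier boundaries above can be captured in a small helper when you want to pick a tier from an expected production row count. A sketch -- the function name `tierForRowCount` is illustrative, not part of the radar CLI:

```typescript
type VolumeTier = "S" | "M" | "L" | "XL" | "XXL";

// Map an expected production row count to the tier boundaries in the table above.
function tierForRowCount(rows: number): VolumeTier {
  if (rows < 10_000) return "S";
  if (rows < 100_000) return "M";
  if (rows < 1_000_000) return "L";
  if (rows < 50_000_000) return "XL";
  return "XXL";
}
```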
Note: Volume tiers are estimates, not exact row counts. Pick the tier that reflects your entity's expected scale in production, not its current dev environment size. An `events` table with 1,000 rows today but expected to grow to millions should be declared as `XL`.
How Volumes Affect Scan Results
Performance Rule Severity Scaling
Each performance rule's severity is adjusted based on the volume of the entity involved:
| Performance Rule | S | M | L | XL | XXL |
|---|---|---|---|---|---|
| `unbounded-find-many` | Suppressed | Warn | Critical | Block | Block |
| `find-many-no-where` | Suppressed | Warn | Critical | Block | Block |
| `nested-include-large-relation` | Suppressed | Warn | Critical | Block | Block |
| `n-plus-one-query` | Warn | Warn | Critical | Block | Block |
| `fetch-all-filter-in-memory` | Suppressed | Warn | Critical | Block | Block |
| `missing-pagination-endpoint` | Suppressed | Warn | Warn | Critical | Block |
| `unfiltered-count-large-table` | Suppressed | Suppressed | Warn | Critical | Block |
| `raw-sql-no-limit` | Warn | Warn | Critical | Block | Block |
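The matrix above is just a lookup from (rule, tier) to severity. A minimal sketch of that lookup in TypeScript -- the `SEVERITY_MATRIX` constant and `scaledSeverity` function are illustrative names, not the tool's internal API, and only a subset of rules is transcribed:

```typescript
type VolumeTier = "S" | "M" | "L" | "XL" | "XXL";
type Severity = "Suppressed" | "Warn" | "Critical" | "Block";

// Rows transcribed from the severity table above (subset).
const SEVERITY_MATRIX: Record<string, Record<VolumeTier, Severity>> = {
  "unbounded-find-many":         { S: "Suppressed", M: "Warn",       L: "Critical", XL: "Block",    XXL: "Block" },
  "n-plus-one-query":            { S: "Warn",       M: "Warn",       L: "Critical", XL: "Block",    XXL: "Block" },
  "missing-pagination-endpoint": { S: "Suppressed", M: "Warn",       L: "Warn",     XL: "Critical", XXL: "Block" },
  "unfiltered-count-large-table":{ S: "Suppressed", M: "Suppressed", L: "Warn",     XL: "Critical", XXL: "Block" },
};

// Unknown rules (or undeclared entities, which default to S) fall back to Suppressed.
function scaledSeverity(rule: string, tier: VolumeTier): Severity {
  return SEVERITY_MATRIX[rule]?.[tier] ?? "Suppressed";
}
```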
How it works in practice:
```typescript
// This code queries the "orders" entity (declared as L)
const orders = await prisma.order.findMany();
//                                ^^^^^^^^
// unbounded-find-many on L entity → CRITICAL (blocks merge)

// Same pattern on the "companies" entity (declared as S)
const companies = await prisma.company.findMany();
//                                     ^^^^^^^^
// unbounded-find-many on S entity → SUPPRESSED (not reported)
```
Gate Threshold Interaction
Volume-adjusted severity feeds into the gate metrics:
- A finding elevated to Critical counts toward `critical_performance_risk`
- A finding elevated to Block also counts toward `critical_performance_risk`
- The gate condition `critical_performance_risk > 0` blocks the merge
This means a single unbounded query on an XL table can block a PR, even if the default performance gate threshold is generous.
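The gate arithmetic is simple enough to sketch. Assuming a `Finding` shape with a volume-adjusted severity (the names `criticalPerformanceRisk` and `gateBlocksMerge` are illustrative, not the tool's API):

```typescript
type Severity = "Suppressed" | "Warn" | "Critical" | "Block";

interface Finding {
  rule: string;
  entity: string;
  severity: Severity; // already volume-adjusted
}

// Both Critical and Block findings feed the same gate metric.
function criticalPerformanceRisk(findings: Finding[]): number {
  return findings.filter((f) => f.severity === "Critical" || f.severity === "Block").length;
}

// The gate trips as soon as the metric is non-zero, so a single
// unbounded query on an XL table is enough to block the merge.
function gateBlocksMerge(findings: Finding[]): boolean {
  return criticalPerformanceRisk(findings) > 0;
}
```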
Declaring Volumes
Manual Declaration
Add `data_volumes` to your `radar.yml` with entity names as keys and volume tiers as values:

```yaml
data_volumes:
  users: M
  orders: L
  order_items: L
  events: XL
  audit_logs: XXL
  measurements: XXL
  categories: S
  settings: S
  roles: S
  permissions: S
```
Entity names should match your ORM model names (in snake_case). The policy engine matches these against ORM queries detected in your code.
Auto-Detection
Run `radar init` to auto-detect volumes from your ORM schema. The estimator supports:
| ORM | Schema Source | Detection Method |
|---|---|---|
| Prisma | prisma/schema.prisma | Parses model blocks, counts relations, checks @@index on createdAt |
| TypeORM | *.entity.ts files | Parses @Entity() decorators, counts relation decorators, checks @Index on timestamps |
| MikroORM | *.entity.ts files | Parses @Entity({ tableName }), counts relations |
| Sequelize | *.model.ts files | Parses class extends Model and sequelize.define(), checks indexes |
| Mongoose | *.model.ts files | Parses mongoose.model(), checks .index() definitions |
| Drizzle | schema.ts / *.table.ts | Parses pgTable(), mysqlTable(), sqliteTable() calls |
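As a flavor of what schema-based detection involves, here is a simplified sketch of extracting model names from a Prisma schema. This is an assumption about the general approach, not the estimator's actual parser -- a real one would handle comments, nested blocks, and attributes far more carefully:

```typescript
// Extract model names from Prisma schema source text by matching
// top-of-line `model Name {` declarations. Simplified for illustration.
function prismaModelNames(schema: string): string[] {
  const names: string[] = [];
  const re = /^\s*model\s+(\w+)\s*\{/gm;
  let m: RegExpExecArray | null;
  while ((m = re.exec(schema)) !== null) {
    names.push(m[1]);
  }
  return names;
}
```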
Heuristic Estimation
When auto-detecting, the estimator uses entity naming patterns as a starting point:
| Name Pattern | Default Tier | Reasoning |
|---|---|---|
| `measurement*`, `metric*`, `telemetry*`, `trace*` | XXL | Time-series data, append-only, grows unbounded |
| `event*`, `log*`, `audit*`, `activity*`, `notification*` | XL | Event streams, high write throughput |
| `order*`, `invoice*`, `payment*`, `subscription*`, `transaction*` | L | Transactional data, grows with business volume |
| `user*`, `account*`, `team*`, `org*`, `company*` | M | Entity data, bounded by customer count |
| `config*`, `setting*`, `category*`, `type*`, `role*`, `permission*` | S | Reference data, rarely changes |
| Everything else | S | Conservative default |
Adjustments applied after heuristic estimation:
| Signal | Adjustment |
|---|---|
| 4+ relation fields on the model | Bump one tier (S -> M, M -> L, etc.) |
| Index on `createdAt` or a timestamp column | Bump to at least XL (suggests a time-series pattern) |
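The heuristic plus adjustments can be sketched as a pure function. This transcribes the two tables above; the function names and signature are illustrative, not the estimator's real interface:

```typescript
type VolumeTier = "S" | "M" | "L" | "XL" | "XXL";
const TIERS: VolumeTier[] = ["S", "M", "L", "XL", "XXL"];

// Name-pattern heuristic from the table above. Unmatched names
// (including config/setting/role-style reference data) default to S.
function baseTier(name: string): VolumeTier {
  const n = name.toLowerCase();
  if (/^(measurement|metric|telemetry|trace)/.test(n)) return "XXL";
  if (/^(event|log|audit|activity|notification)/.test(n)) return "XL";
  if (/^(order|invoice|payment|subscription|transaction)/.test(n)) return "L";
  if (/^(user|account|team|org|company)/.test(n)) return "M";
  return "S";
}

// Apply the post-heuristic adjustments: relation-count bump and
// timestamp-index floor.
function estimateTier(name: string, relationCount: number, hasTimestampIndex: boolean): VolumeTier {
  let idx = TIERS.indexOf(baseTier(name));
  if (relationCount >= 4) idx = Math.min(idx + 1, TIERS.length - 1); // bump one tier
  if (hasTimestampIndex) idx = Math.max(idx, TIERS.indexOf("XL")); // at least XL
  return TIERS[idx];
}
```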
Tip: Always review auto-detected volumes. The heuristic is a starting point -- your production data patterns may differ. A `widget` table with millions of rows won't be detected as XL by name alone.
Volume-Aware Examples
Example 1: N+1 Query on Different Volumes
Consider this code:
```typescript
// src/orders/use-cases/list-orders.ts
const users = await prisma.user.findMany({ where: { active: true } });

for (const user of users) {
  const orders = await prisma.order.findMany({
    where: { userId: user.id },
  });
  // process orders...
}
```
This is an N+1 query. The scan result depends on volumes:
| `users` volume | `orders` volume | Finding | Severity |
|---|---|---|---|
| S | S | N+1 query detected | Warn |
| M | L | N+1 query on L entity | Critical |
| M | XL | N+1 query on XL entity | Block |
Example 2: Missing Pagination
```typescript
// src/events/controllers/events.controller.ts
@Get()
async listEvents() {
  return this.eventsService.findAll();
}

// src/events/services/events.service.ts
async findAll() {
  return this.prisma.event.findMany();
}
```
Two issues detected, both volume-dependent:
| `events` volume | `missing-pagination-endpoint` | `unbounded-find-many` |
|---|---|---|
| S | Suppressed | Suppressed |
| M | Warn | Warn |
| L | Warn | Critical |
| XL | Critical | Block |
| XXL | Block | Block |
Example 3: Fetch-All and Filter in Memory
```typescript
// Bad: fetches all orders, filters in application code
const allOrders = await prisma.order.findMany();
const recentOrders = allOrders.filter(
  (o) => o.createdAt > thirtyDaysAgo,
);
```
| `orders` volume | Finding | Severity | Fix |
|---|---|---|---|
| S | fetch-all-filter-in-memory | Suppressed | -- |
| M | fetch-all-filter-in-memory | Warn | Add where clause |
| L | fetch-all-filter-in-memory | Critical | Add where clause |
| XL | fetch-all-filter-in-memory | Block | Add where clause |
The fix is the same regardless of volume:
```typescript
// Good: filter at the database level
const recentOrders = await prisma.order.findMany({
  where: { createdAt: { gte: thirtyDaysAgo } },
});
```
Entity Mapping
The volume estimator maps entity names from your `data_volumes` to ORM queries using snake_case normalization:

| `data_volumes` key | Matches these Prisma models | Matches these TypeORM entities |
|---|---|---|
| `orders` | `Order` | `OrderEntity`, `@Entity('orders')` |
| `order_items` | `OrderItem` | `OrderItemEntity`, `@Entity('order_items')` |
| `user_events` | `UserEvent` | `UserEventEntity` |
The matching is case-insensitive and handles common naming conventions (PascalCase models, snake_case tables, pluralization).
Warning: If your ORM model name doesn't match your `data_volumes` key after snake_case conversion, the volume tier won't be applied. Ensure your keys match your actual table/model names.
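The normalization step can be illustrated with a small helper. This is a sketch of the idea, not the tool's actual matcher -- in particular, the pluralization here is naive (it just appends `s`), whereas the real matcher is described as handling common conventions more broadly:

```typescript
// Normalize an ORM model/entity class name toward the snake_case,
// pluralized form used as a data_volumes key:
//   OrderItem       -> order_items
//   UserEventEntity -> user_events
function normalizeEntityName(name: string): string {
  const base = name.replace(/Entity$/, ""); // strip TypeORM-style suffix
  const snake = base
    .replace(/([a-z0-9])([A-Z])/g, "$1_$2") // PascalCase -> snake_case
    .toLowerCase();
  // Naive pluralization for illustration only.
  return snake.endsWith("s") ? snake : `${snake}s`;
}
```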
Undeclared Entities
When a performance rule detects a query on an entity that isn't declared in `data_volumes`, the default tier is S (suppressed for most rules). This means:
- Undeclared entities won't block merges for performance issues
- You'll miss real problems on high-volume tables you forgot to declare
Run `radar volumes --check` to see which entities are queried in your codebase but missing from `data_volumes`:

```shell
radar volumes --check
# Output:
# Declared: orders (L), events (XL), users (M)
# Missing: invoices (queried in 3 files), audit_logs (queried in 7 files)
# Suggestion: Add these to data_volumes in radar.yml
```
Best Practices
- Declare all entities that could be large. Start with auto-detection, then review and adjust.
- Use production-scale estimates. A table with 100 rows in dev but 10 million in production should be `XL`.
- Reassess quarterly. As your product grows, entities move up tiers. A `users` table at M today might be L next quarter.
- Err on the side of larger tiers. An `L` declaration on an entity that's actually `M` produces extra warnings; an `S` declaration on an entity that's actually `XL` silently misses critical issues.
- Pair `data_volumes` with exceptions for legitimate patterns:
```yaml
# radar.yml
data_volumes:
  events: XL
```

```yaml
# rules.yml
exceptions:
  - rule: "unbounded-find-many"
    file: "src/admin/services/event-export.service.ts"
    expires: "2026-06-01"
    reason: "Admin-only export with streaming — not a production risk"
```