Your API endpoint takes 2 seconds to respond. The database server shows 95% CPU utilization. You add more RAM, bigger instances, read replicas. Nothing helps. The root cause isn't your infrastructure — it's a single loop in your code that generates 1,001 SQL queries instead of 2.
This is the N+1 query problem, and it's the most common performance killer hiding in Node.js applications that use ORMs like Prisma, TypeORM, and Sequelize.
What Is the N+1 Query Problem?
The N+1 problem occurs when your code executes 1 query to fetch a list of N items, then executes N additional queries to fetch related data for each item. Instead of 1-2 queries, you execute N+1.
Here's a concrete example. You need to display a page showing 100 users and their most recent order:
```typescript
// This innocent-looking code generates 101 SQL queries
const users = await getUsers(); // Query 1: SELECT * FROM users (100 rows)
for (const user of users) {
  user.lastOrder = await getLastOrder(user.id); // Queries 2-101: SELECT * FROM orders WHERE userId = ?
}
```
What happens at the database level:
```sql
-- Query 1
SELECT * FROM users;

-- Query 2
SELECT * FROM orders WHERE user_id = 1 ORDER BY created_at DESC LIMIT 1;

-- Query 3
SELECT * FROM orders WHERE user_id = 2 ORDER BY created_at DESC LIMIT 1;

-- ... 97 more queries ...

-- Query 101
SELECT * FROM orders WHERE user_id = 100 ORDER BY created_at DESC LIMIT 1;
```
Each query takes 2-5ms. Multiply by 101 and you get 200-500ms just for database queries. Now imagine you have 10,000 users instead of 100: that's 10,001 queries taking 20-50 seconds. Your connection pool is exhausted, requests queue up, and your API starts timing out.
The fix is usually a single query with a JOIN or a batched query:
```sql
-- Just 1 query instead of 101
SELECT users.*, orders.*
FROM users
LEFT JOIN orders ON orders.user_id = users.id
  AND orders.id = (
    SELECT o2.id FROM orders o2
    WHERE o2.user_id = users.id
    ORDER BY o2.created_at DESC
    LIMIT 1
  );
```

Note that the latest-order condition lives in the ON clause: filtering on orders.id in a WHERE clause would turn the LEFT JOIN into an inner join and silently drop users with no orders.
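When a correlated JOIN is awkward, the same result can come from two batched queries plus in-memory grouping: one query for the users, one `SELECT * FROM orders WHERE user_id IN (...)` for all their orders, then a pass to keep the newest order per user. Here's a minimal sketch of that grouping step — the two fetches are simulated with plain arrays standing in for the query results:

```typescript
// Batched alternative: 2 queries total instead of N+1.
// `users` and `orders` stand in for the results of the two queries.
interface User { id: number; name: string }
interface Order { id: number; userId: number; createdAt: Date }

function attachLastOrder(users: User[], orders: Order[]) {
  // One pass over the orders: keep only the newest order per userId
  const latest = new Map<number, Order>();
  for (const order of orders) {
    const current = latest.get(order.userId);
    if (!current || order.createdAt > current.createdAt) {
      latest.set(order.userId, order);
    }
  }
  return users.map((u) => ({ ...u, lastOrder: latest.get(u.id) ?? null }));
}
```

The grouping is O(users + orders) in memory, and the IN query can use the same index the per-user queries would have used — you just pay one round trip instead of N.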
Every major Node.js ORM provides a way to do this. Let's look at each one.
N+1 Queries in Prisma
Prisma is the most popular TypeScript ORM, and N+1 queries are easy to introduce because Prisma's API is so clean that the problematic code looks perfectly reasonable.
The problem
```typescript
// ❌ N+1: 1 query for users + 1 query per user for orders
async function getUsersWithOrders() {
  const users = await prisma.user.findMany();
  const result = [];
  for (const user of users) {
    const orders = await prisma.order.findMany({
      where: { userId: user.id },
      orderBy: { createdAt: 'desc' },
      take: 5,
    });
    result.push({ ...user, orders });
  }
  return result;
}
```
This code is perfectly readable. It does exactly what you'd expect. And it's catastrophically slow on any table with more than a few hundred rows.
The fix: use include
```typescript
// ✅ Single query with include — Prisma generates an optimized JOIN
async function getUsersWithOrders() {
  return prisma.user.findMany({
    include: {
      orders: {
        orderBy: { createdAt: 'desc' },
        take: 5,
      },
    },
  });
}
```
Prisma's include generates an efficient query (often a JOIN or a batched IN query) that fetches all data in 1-2 round trips instead of N+1.
The subtle variant: map with async callback
```typescript
// ❌ Still N+1 — Promise.all doesn't reduce query count
async function getUsersWithProfiles() {
  const users = await prisma.user.findMany();
  const usersWithProfiles = await Promise.all(
    users.map(async (user) => {
      const profile = await prisma.profile.findUnique({
        where: { userId: user.id },
      });
      return { ...user, profile };
    })
  );
  return usersWithProfiles;
}
```
Using Promise.all with map runs the queries concurrently, but it still executes N queries. Your database gets hit with 100 queries simultaneously, which can be even worse than sequential execution because it exhausts the connection pool instantly.
```typescript
// ✅ Fix: a single findMany with include
async function getUsersWithProfiles() {
  return prisma.user.findMany({
    include: { profile: true },
  });
}
```
N+1 Queries in TypeORM
TypeORM has the same problem, but it manifests differently because of TypeORM's lazy loading feature and the Repository pattern.
The problem
```typescript
// ❌ N+1: loop with separate find calls
async function getOrdersWithProducts(
  ordersRepo: Repository<Order>,
  productsRepo: Repository<Product>
) {
  const orders = await ordersRepo.find();
  for (const order of orders) {
    order.products = await productsRepo.find({
      where: { orderId: order.id },
    });
  }
  return orders;
}
```
The fix: use relations
```typescript
// ✅ Single query with relations option
async function getOrdersWithProducts(ordersRepo: Repository<Order>) {
  return ordersRepo.find({
    relations: ['products'],
  });
}
```
The QueryBuilder alternative for complex cases
```typescript
// ✅ QueryBuilder for more control
async function getOrdersWithProducts(ordersRepo: Repository<Order>) {
  return ordersRepo
    .createQueryBuilder('order')
    .leftJoinAndSelect('order.products', 'product')
    .where('order.status = :status', { status: 'active' })
    .orderBy('order.createdAt', 'DESC')
    .take(50)
    .getMany();
}
```
Watch out for lazy relations
```typescript
// ❌ TypeORM lazy loading triggers N+1 silently
@Entity()
class Order {
  @ManyToMany(() => Product, { lazy: true })
  @JoinTable() // required on the owning side of a many-to-many
  products: Promise<Product[]>; // each access triggers a query
}

// In your handler:
const orders = await ordersRepo.find();
for (const order of orders) {
  const products = await order.products; // hidden query!
}
```
TypeORM's lazy relations use JavaScript Promises to defer loading. This means every time you await order.products, it fires a SQL query. In a loop, this is a classic N+1.
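Lazy-relation N+1s are easiest to spot by watching the SQL TypeORM actually emits. Enabling query logging on the DataSource (a standard TypeORM option — the connection details below are placeholders) prints every statement, so a loop that fires one SELECT per iteration becomes obvious during development:

```typescript
import { DataSource } from 'typeorm';

// Placeholder connection settings — only `logging` matters for this sketch.
export const dataSource = new DataSource({
  type: 'postgres',
  host: 'localhost',
  database: 'app',
  entities: [Order, Product], // your entity classes
  // Log every query (and errors) so repeated per-row SELECTs
  // from lazy relations show up in the console during development.
  logging: ['query', 'error'],
  maxQueryExecutionTime: 100, // additionally warn on queries slower than 100ms
});
```

With this on, the lazy-loading loop above prints 1 SELECT for orders followed by a wall of identical per-order SELECTs — the N+1 signature.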
N+1 Queries in Sequelize
Sequelize is the oldest Node.js ORM and has the most mature eager loading API. But N+1 queries are still common, especially when developers forget to use include.
The problem
```typescript
// ❌ N+1: fetching comments in a loop
async function getPostsWithComments() {
  const posts = await Post.findAll();
  for (const post of posts) {
    post.comments = await Comment.findAll({
      where: { postId: post.id },
    });
  }
  return posts;
}
```
The fix: use include for eager loading
```typescript
// ✅ Eager loading with include
async function getPostsWithComments() {
  return Post.findAll({
    include: [
      {
        model: Comment,
        as: 'comments',
        attributes: ['id', 'text', 'createdAt'],
      },
    ],
  });
}
```
Nested eager loading
```typescript
// ✅ Multi-level eager loading
async function getPostsWithCommentsAndAuthors() {
  return Post.findAll({
    include: [
      {
        model: Comment,
        as: 'comments',
        include: [
          {
            model: User,
            as: 'author',
            attributes: ['id', 'name'],
          },
        ],
      },
    ],
  });
}
```
Without eager loading, fetching 100 posts with 10 comments each would generate 1 query for posts + 100 queries for comments + 1,000 queries for authors = 1,101 queries. With nested eager loading, it's 1 query.
Why N+1 Is Invisible in Development
This is the most insidious thing about N+1 queries: they work perfectly in development.
In your dev environment:
- Your users table has 15 rows
- The N+1 generates 16 queries
- Each query takes 1ms (local database, no network latency)
- Total response time: 20ms
- Everything feels fast
In production:
- Your users table has 500,000 rows (paginated, so maybe 100 per page)
- The N+1 generates 101 queries
- Each query takes 5ms (network latency + load)
- Total response time: 505ms
- But on the "all users" admin page with 10,000 rows: 50 seconds
No test catches this. Unit tests use mocks. Integration tests use small datasets. Load tests often don't target the specific endpoints with N+1 queries. The problem only surfaces when real users hit real data at real scale.
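One cheap defense is to make development loudly report query counts. Prisma's client can emit query events (a documented `log` option), so a sketch like this — assuming a generated Prisma client — makes a page that fires 101 queries impossible to miss in the console:

```typescript
import { PrismaClient } from '@prisma/client';

// Emit query events instead of printing them, so we can count them.
const prisma = new PrismaClient({
  log: [{ emit: 'event', level: 'query' }],
});

let queryCount = 0;
prisma.$on('query', (e) => {
  queryCount++;
  // A request that logs 100+ near-identical statements is an N+1.
  console.log(`#${queryCount} (${e.duration}ms) ${e.query}`);
});
```

Resetting `queryCount` per request (for example in middleware) turns this into a crude but effective N+1 alarm, even on a 15-row dev database.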
Volume-Aware Detection With radar.yml
Because the severity of N+1 queries depends on your data volume, static analysis alone isn't enough. An N+1 on a table with 50 rows is a non-issue. An N+1 on a table with 50 million rows is a production emergency.
Technical Debt Radar uses volume-aware detection. You declare your table sizes in radar.yml:
```yaml
# radar.yml — data volume declarations
data_volumes:
  users: L        # 100K - 1M rows
  orders: XL      # 1M - 10M rows
  events: XXL     # 10M+ rows
  settings: S     # under 1K rows
  categories: S   # under 1K rows

# Volume tiers:
#   S   = under 1K rows   → N+1 severity: info
#   M   = 1K - 100K rows  → N+1 severity: low
#   L   = 100K - 1M rows  → N+1 severity: medium
#   XL  = 1M - 10M rows   → N+1 severity: high (blocks merge)
#   XXL = 10M+ rows       → N+1 severity: critical (blocks merge)
```
When the scanner finds an N+1 query pattern, it cross-references the affected table with your volume declarations. The same code pattern gets different severity ratings depending on the data volume:
- N+1 on `categories` (S) — info, not flagged
- N+1 on `users` (L) — medium, warning in PR
- N+1 on `orders` (XL) — high, blocks merge
- N+1 on `events` (XXL) — critical, blocks merge with urgent flag
This prevents false positive fatigue. Your team doesn't get alerted about N+1 queries on lookup tables with 20 rows. But they do get blocked on N+1 queries on the orders table with 5 million rows.
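The tier-to-severity mapping described above is simple enough to sketch directly. This is a hypothetical illustration of the documented rules, not Radar's actual source:

```typescript
type VolumeTier = 'S' | 'M' | 'L' | 'XL' | 'XXL';
type Severity = 'info' | 'low' | 'medium' | 'high' | 'critical';

// Same N+1 code pattern, different severity depending on declared table size.
const SEVERITY_BY_TIER: Record<VolumeTier, Severity> = {
  S: 'info',       // under 1K rows
  M: 'low',        // 1K - 100K rows
  L: 'medium',     // 100K - 1M rows
  XL: 'high',      // 1M - 10M rows
  XXL: 'critical', // 10M+ rows
};

function blocksMerge(tier: VolumeTier): boolean {
  const severity = SEVERITY_BY_TIER[tier];
  return severity === 'high' || severity === 'critical';
}
```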
How Radar Catches N+1 Queries
When you run npx technical-debt-radar scan ., the scanner uses AST analysis to detect N+1 patterns. It looks for:
- ORM query calls (`findMany`, `findAll`, `find`) followed by loops containing more query calls
- `Promise.all` with `map` containing query calls (concurrent N+1)
- Lazy relation access inside loops (TypeORM-specific)
- Missing `include`/`relations` options on queries for models with associations
Example scan output:
```text
TECHNICAL DEBT RADAR — Scan Results

Performance Issues (3 found)
────────────────────────────

CRITICAL  N+1 query on XXL table
  src/analytics/analytics.service.ts:34
  Loop queries: prisma.event.findMany inside for..of loop
  Table: events (XXL — 10M+ rows)
  Estimated queries: N+1 where N = page size
  → Use prisma.user.findMany({ include: { events: true } })

HIGH  N+1 query on XL table
  src/orders/orders.service.ts:18
  Loop queries: prisma.order.findMany inside Promise.all(map())
  Table: orders (XL — 1M-10M rows)
  Estimated queries: N+1 where N = user count
  → Use prisma.user.findMany({ include: { orders: true } })

INFO  N+1 query on S table
  src/config/config.service.ts:12
  Loop queries: prisma.setting.findUnique inside for..of loop
  Table: settings (S — under 1K rows)
  → Low impact due to small table size. Fix optional.

────────────────────────────
3 issues | 1 critical | 1 high | 1 info
Status: FAIL — merge blocked
```
Beyond N+1: Other ORM Anti-Patterns
N+1 is the most common ORM performance killer, but it's not the only one. Here are three other patterns that Technical Debt Radar detects:
Unbounded findMany / findAll
```typescript
// ❌ On a table with 50M rows, this loads ALL of them into memory
const allEvents = await prisma.event.findMany();

// ✅ Always paginate
const events = await prisma.event.findMany({
  take: 50,
  skip: page * 50,
  orderBy: { createdAt: 'desc' },
});
```
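Offset pagination itself degrades on huge tables — the database still scans past `skip` rows before returning anything — so for XL/XXL tables cursor pagination is the usual next step. The logic is sketched here over an in-memory array; with Prisma, the equivalent is its documented `cursor`/`take` options:

```typescript
interface Event { id: number; createdAt: Date }

// Cursor pagination: "give me the next `pageSize` rows after id = cursor".
// The database version is an indexed `WHERE id > ? ORDER BY id LIMIT ?`,
// which stays fast no matter how deep into the table you read.
function nextPage(events: Event[], pageSize: number, afterId?: number) {
  const sorted = [...events].sort((a, b) => a.id - b.id);
  const start = afterId === undefined
    ? 0
    : sorted.findIndex((e) => e.id > afterId);
  const page = start === -1 ? [] : sorted.slice(start, start + pageSize);
  return {
    items: page,
    // A full page means there may be more rows; expose the last id as cursor.
    nextCursor: page.length === pageSize ? page[page.length - 1].id : null,
  };
}
```

The client passes `nextCursor` back on the next request instead of a page number, so page 10,000 costs the same as page 1.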
Fetch-all, filter-in-memory
```typescript
// ❌ Fetches 1M rows, filters to 10 in JavaScript
const allOrders = await prisma.order.findMany();
const activeOrders = allOrders.filter(o => o.status === 'ACTIVE');

// ✅ Filter in the database
const activeOrders = await prisma.order.findMany({
  where: { status: 'ACTIVE' },
});
```
Missing pagination on API endpoints
```typescript
// ❌ API endpoint returns unbounded results
@Get('events')
async getEvents() {
  return this.prisma.event.findMany(); // could return millions of rows
}

// ✅ Enforce pagination
@Get('events')
async getEvents(@Query('page') page = 1, @Query('limit') limit = 50) {
  return this.prisma.event.findMany({
    take: Math.min(limit, 100), // cap at 100
    skip: (page - 1) * limit,
    orderBy: { createdAt: 'desc' },
  });
}
```
All of these patterns share the same trait: they work fine in development and explode in production. Volume-aware detection is the only way to catch them before they cause incidents.
Find Your N+1 Queries Now
N+1 queries are the single biggest performance problem in Node.js applications using ORMs. They're invisible in development, undetectable by standard linters, and devastating in production.
The fix is almost always simple — add an include, use relations, or restructure a loop into a batch query. The hard part is finding them.
Scan your codebase right now:
```shell
npx technical-debt-radar scan .
```
First scan is free, no account needed. It takes less than 10 seconds and will flag every N+1 query in your project — with volume context so you know which ones to fix first.