
Maintainability

Seven maintainability patterns covering complexity, duplication, dead code, missing tests, and coverage tracking. Warns only, never blocks merge.

Maintainability Patterns

Maintainability patterns track code quality metrics that accumulate over time. Unlike architecture, runtime, and performance rules, maintainability violations never block merge -- they produce warnings or info-level findings that appear in PR comments and the dashboard.

Pattern Reference

| Rule ID | Description | Severity | Debt Points |
|---|---|---|---|
| high-complexity | Cyclomatic complexity exceeds threshold | warning | 1 per point over threshold |
| code-duplication | Copy-pasted code blocks across files | warning/info | 1-3 based on block size |
| missing-test-file | Source file has no corresponding test file | warning | 1 |
| unused-export | Exported symbol not imported anywhere | info | 1 |
| low-test-coverage | Line coverage below threshold | warning | 1-3 based on coverage level |
| low-branch-coverage | Branch coverage below threshold | info | 1 |
| coverage-drop | Coverage decreased beyond threshold | warning | 1-3 based on drop size |

1. Cyclomatic Complexity (high-complexity)

Cyclomatic complexity measures the number of independent paths through a function. The default threshold is 10 -- functions exceeding this threshold generate a warning.

How Complexity Is Calculated

The complexity calculator starts at 1 (the function itself) and increments for each decision point:

| Construct | Complexity Added |
|---|---|
| if statement | +1 |
| for / for...in / for...of loop | +1 |
| while / do...while loop | +1 |
| case clause (in switch) | +1 |
| catch clause | +1 |
| Ternary expression (? :) | +1 |
| && (logical AND) | +1 |
| \|\| (logical OR) | +1 |

Nested functions are excluded -- they have their own complexity score.
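The counting rules above can be sketched as a quick approximation. This regex-based version is illustrative only (approximateComplexity is a hypothetical helper, not Radar's API) -- the real calculator walks the AST and, unlike this sketch, skips nested functions:

```typescript
// Rough, regex-based sketch of the decision-point count described above.
// Tallies keyword/operator occurrences; the real calculator uses the AST.
function approximateComplexity(source: string): number {
  let complexity = 1; // base path: the function itself
  const decisionPoints: RegExp[] = [
    /\bif\s*\(/g,      // if statements (an else-if counts as a new if)
    /\bfor\s*\(/g,     // for / for...in / for...of loops
    /\bwhile\s*\(/g,   // while / do...while loops
    /\bcase\s/g,       // case clauses in a switch
    /\bcatch\s*[({]/g, // catch clauses
    /\?[^.]/g,         // ternary expressions (avoid matching optional chaining ?.)
    /&&/g,             // logical AND
    /\|\|/g,           // logical OR
  ];
  for (const pattern of decisionPoints) {
    complexity += (source.match(pattern) ?? []).length;
  }
  return complexity;
}
```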

Delta Tracking

Radar tracks complexity changes between the PR's base and head commits. The previous version of each file is reconstructed by reverse-applying the git diff patch. This allows the PR comment to show complexity deltas:

Function 'processOrder' complexity: 8 → 14 (+6)
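The reverse-apply step can be reproduced with plain git. A self-contained demo in a throwaway repo (file names are made up):

```shell
# Demo of reconstructing a file's base version by reverse-applying a diff,
# as Radar does with the PR patch. Runs entirely in a temporary repo.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email radar@example.com && git config user.name radar
printf 'function f() { return 1; }\n' > order.ts
git add order.ts && git commit -qm base
printf 'function f() { if (flag) { return 2; } return 1; }\n' > order.ts
git commit -aqm head
git diff HEAD~1..HEAD -- order.ts > change.patch
git apply --reverse change.patch   # order.ts now matches the base commit again
cat order.ts
```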

Violation Example

// Complexity: 13 (threshold: 10)
function validateOrder(order: Order, config: Config): ValidationResult {
  const errors: string[] = [];

  if (!order.items || order.items.length === 0) {            // +1 (if) +1 (||)
    errors.push('Order must have items');
  }

  for (const item of order.items) {                          // +1
    if (item.quantity <= 0) {                                // +1
      errors.push(`Invalid quantity for ${item.name}`);
    } else if (item.quantity > config.maxQuantity) {         // +1 (else-if = new if)
      errors.push(`Quantity exceeds max for ${item.name}`);
    }

    if (item.price < 0) {                                    // +1
      errors.push(`Invalid price for ${item.name}`);
    }

    if (config.requireSKU && !item.sku) {                    // +1 (if) +1 (&&)
      errors.push(`Missing SKU for ${item.name}`);
    }
  }

  if (order.total > config.maxOrderTotal) {                  // +1
    if (order.customerTier === 'vip' || order.override) {    // +1 (if) +1 (||)
      // VIP exception
    } else {                                                 // bare else adds no new path
      errors.push('Order total exceeds maximum');
    }
  }

  const isValid = errors.length === 0;
  return { valid: isValid, errors: isValid ? [] : errors };  // +1 (ternary)

  // Total: 1 (base) + 12 (decision points) = 13
}

Refactored Version

function validateOrder(order: Order, config: Config): ValidationResult {
  const errors = [
    ...validateItems(order.items, config),
    ...validateTotal(order, config),
  ];
  return { valid: errors.length === 0, errors };
}

function validateItems(items: OrderItem[], config: Config): string[] {
  if (!items?.length) return ['Order must have items'];

  return items.flatMap(item => [
    item.quantity <= 0 ? `Invalid quantity for ${item.name}` : null,
    item.quantity > config.maxQuantity ? `Quantity exceeds max for ${item.name}` : null,
    item.price < 0 ? `Invalid price for ${item.name}` : null,
    config.requireSKU && !item.sku ? `Missing SKU for ${item.name}` : null,
  ].filter(Boolean) as string[]);
}

function validateTotal(order: Order, config: Config): string[] {
  if (order.total <= config.maxOrderTotal) return [];
  if (order.customerTier === 'vip' || order.override) return [];
  return ['Order total exceeds maximum'];
}

2. Code Duplication (code-duplication)

Token-based copy-paste detection across files. The detector normalizes code (strips comments, collapses whitespace, replaces string literals) and uses MD5 hashing to find identical code blocks.

How Detection Works

  1. Normalize each source file: strip comments, collapse whitespace, replace string contents with empty quotes, remove import lines
  2. Extract chunks of minLines (default: 6) consecutive normalized lines
  3. Hash each chunk with MD5
  4. Group chunks by hash -- if the same hash appears in 2+ different files, it is a duplicate
  5. Merge adjacent matching chunks into larger blocks
  6. Deduplicate overlapping block reports
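Steps 1-4 above can be sketched as follows. The normalization here is deliberately simplified (duplicateChunks is an illustrative helper, not Radar's API), and steps 5-6 (merging and deduplicating blocks) are omitted:

```typescript
import { createHash } from "node:crypto";

// Sketch of duplicate detection: normalize, chunk, MD5-hash, group.
// Returns only the hashes that occur in two or more different files.
function duplicateChunks(
  files: Record<string, string>,
  minLines = 6,
): Map<string, string[]> {
  const byHash = new Map<string, string[]>();
  for (const [path, source] of Object.entries(files)) {
    const lines = source
      .split("\n")
      .map(l => l.replace(/\/\/.*$/, "")      // strip single-line comments
                 .replace(/"[^"]*"/g, '""')   // blank out string contents
                 .trim())                     // collapse surrounding whitespace
      .filter(l => l                          // drop empty lines
        && !/^import\b/.test(l)               // drop import lines
        && !/^[{}();]*$/.test(l));            // drop braces-only lines
    // Hash every window of minLines consecutive normalized lines.
    for (let i = 0; i + minLines <= lines.length; i++) {
      const hash = createHash("md5")
        .update(lines.slice(i, i + minLines).join("\n"))
        .digest("hex");
      const seen = byHash.get(hash) ?? [];
      if (!seen.includes(path)) seen.push(path);
      byHash.set(hash, seen);
    }
  }
  // Step 4: a hash seen in 2+ different files is a duplicate.
  return new Map([...byHash].filter(([, paths]) => paths.length > 1));
}
```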

Configuration Defaults

| Parameter | Default | Description |
|---|---|---|
| minLines | 6 | Minimum number of consecutive lines to count as duplication |
| minTokens | 15 | Minimum token count (identifiers + keywords) in the block |
| ignoreImports | true | Exclude import/require lines from comparison |
| ignoreTests | true | Exclude test files from duplication analysis |

Severity Scaling

| Block Size | Severity | Debt Points |
|---|---|---|
| 6-15 lines | info | 1 |
| 16-30 lines | warning | 2 |
| 31+ lines | warning | 3 |

What Is Excluded

The normalizer skips these constructs to avoid false positives:

  • Empty lines, braces-only lines ({, }, });)
  • Comments (single-line //, block /* */)
  • Decorators (@Injectable, @Get, etc.)
  • Type definitions (interface, type, enum)
  • Import/export type declarations

Violation Example

6 lines duplicated from src/orders/services/order.service.ts:42

Suggestion: Extract the duplicated logic into a shared function or module to reduce maintenance burden.


3. Missing Test File (missing-test-file)

Detects source files that have no corresponding test file. The detector looks for .spec.ts or .test.ts files in these locations:

  • Same directory: user.service.spec.ts
  • __tests__ subdirectory: __tests__/user.service.spec.ts
  • Parallel test/ or tests/ directory: test/user/user.service.spec.ts
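The lookup above can be sketched as follows. This builds only the candidate paths (candidateTestPaths is a hypothetical helper); the detector would then check each path for existence:

```typescript
import * as path from "node:path";

// Sketch: candidate test-file locations for a given source file,
// following the three lookup locations listed above.
function candidateTestPaths(sourceFile: string): string[] {
  const dir = path.dirname(sourceFile);
  const base = path.basename(sourceFile, ".ts");
  const names = [`${base}.spec.ts`, `${base}.test.ts`];
  const candidates: string[] = [];
  for (const name of names) {
    candidates.push(path.join(dir, name));                         // same directory
    candidates.push(path.join(dir, "__tests__", name));            // __tests__ subdirectory
    candidates.push(path.join(dir.replace(/^src/, "test"), name)); // parallel test/ tree
  }
  return candidates;
}
```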

Testable File Types

Only files matching these NestJS conventions are checked:

| Suffix | Example |
|---|---|
| .service.ts | order.service.ts |
| .controller.ts | order.controller.ts |
| .resolver.ts | order.resolver.ts |
| .guard.ts | auth.guard.ts |
| .pipe.ts | validation.pipe.ts |
| .interceptor.ts | logging.interceptor.ts |
| .middleware.ts | auth.middleware.ts |
| .gateway.ts | events.gateway.ts |
| .use-case.ts | create-order.use-case.ts |
| .handler.ts | process-payment.handler.ts |
| .repository.ts | order.repository.ts |

Excluded From Testing Requirements

These files are not expected to have test files:

  • *.module.ts, *.entity.ts, *.model.ts, *.dto.ts
  • *.interface.ts, *.type.ts, *.types.ts, *.enum.ts
  • *.constant.ts, *.constants.ts, *.config.ts
  • *.decorator.ts, index.ts, *.d.ts
  • Test files themselves (*.spec.ts, *.test.ts)

Violation Example

// src/billing/services/invoice.service.ts exists
// But there is no:
//   - src/billing/services/invoice.service.spec.ts
//   - src/billing/services/__tests__/invoice.service.spec.ts
//   - test/billing/services/invoice.service.spec.ts

// Violation message:
// "invoice.service.ts has no corresponding test file -- consider adding invoice.service.spec.ts"

4. Unused Exports / Dead Code (unused-export)

Detects exported functions, classes, and variables that are never imported by any other file in the project.

How Detection Works

  1. Extract exports from all non-excluded source files using ts-morph
  2. Extract import references from all files (including excluded ones -- they can consume exports)
  3. Cross-reference to find exports with zero usage count
  4. Report unused exports as violations
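The cross-referencing in steps 1-4 can be sketched with a regex-based approximation (findUnusedExports is an illustrative helper; Radar's real detector uses ts-morph on the AST and also handles namespace imports, re-exports, and default exports):

```typescript
// Sketch: collect exported names and imported names across all files,
// then report the exports that are never imported anywhere.
function findUnusedExports(files: Record<string, string>): string[] {
  const exported = new Set<string>();
  const imported = new Set<string>();
  for (const source of Object.values(files)) {
    // Step 1: exported functions, classes, and variables.
    for (const m of source.matchAll(/export\s+(?:function|class|const)\s+(\w+)/g)) {
      exported.add(m[1]);
    }
    // Step 2: named-import references.
    for (const m of source.matchAll(/import\s*{([^}]*)}\s*from/g)) {
      for (const name of m[1].split(",")) imported.add(name.trim());
    }
  }
  // Steps 3-4: exports with zero usage count.
  return [...exported].filter(name => !imported.has(name));
}
```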

Excluded From Analysis

These files are never checked for unused exports (they are expected to export things consumed externally):

  • Entry points: main.ts, server.ts, app.ts, bootstrap.ts, cli.ts
  • Barrel files: index.ts, index.js
  • NestJS modules: *.module.ts
  • Test files: *.spec.ts, *.test.ts, __tests__/*
  • Config files: *.config.ts
  • Declaration files: *.d.ts

Type exports (interface, type, enum) are excluded by default to avoid noise from TypeScript-only constructs.

Namespace Import Handling

A namespace import such as import * as utils from './utils' marks all exports from ./utils as used. This is correct because the consumer has access to every exported member.

Violation Example

Exported function 'formatLegacyDate' is not imported anywhere -- consider removing it

Suggestion: Remove the unused export or convert it to a non-exported declaration if still used locally.


5. Low Test Coverage (low-test-coverage)

Detects files where line coverage falls below the configured threshold. Supports Istanbul/c8/nyc coverage reports in both coverage-summary.json and lcov.info formats.

Coverage Report Discovery

Radar searches for coverage files in this order:

  1. Custom paths specified in config
  2. coverage/coverage-summary.json
  3. coverage/lcov.info
  4. .nyc_output/coverage-summary.json
  5. coverage/lcov/lcov.info
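The discovery order can be sketched as a first-match search (findCoverageReport is a hypothetical helper name):

```typescript
import * as fs from "node:fs";

// Sketch: return the first coverage report that exists, following
// the search order listed above. Custom config paths win.
function findCoverageReport(customPaths: string[] = []): string | null {
  const searchOrder = [
    ...customPaths,
    "coverage/coverage-summary.json",
    "coverage/lcov.info",
    ".nyc_output/coverage-summary.json",
    "coverage/lcov/lcov.info",
  ];
  return searchOrder.find(p => fs.existsSync(p)) ?? null;
}
```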

Default Thresholds

| Metric | Default Threshold |
|---|---|
| Line coverage | 60% |
| Branch coverage | 50% |
| Max coverage drop | 5% |

Severity Scaling

| Line Coverage | Debt Points |
|---|---|
| < 30% | 3 |
| 30% - 50% | 2 |
| 50% - threshold | 1 |
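A minimal sketch of this scaling, assuming the default 60% line-coverage threshold (coverageDebtPoints is an illustrative name, not Radar's API):

```typescript
// Sketch of the severity scaling above: debt points by line coverage.
// At or above the threshold, no finding is produced.
function coverageDebtPoints(lineCoveragePct: number, threshold = 60): number {
  if (lineCoveragePct >= threshold) return 0;
  if (lineCoveragePct < 30) return 3;
  if (lineCoveragePct < 50) return 2;
  return 1; // between 50% and the threshold
}
```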

Excluded From Coverage Checks

excludePatterns:
  - "**/*.dto.ts"
  - "**/*.entity.ts"
  - "**/*.module.ts"

6. Low Branch Coverage (low-branch-coverage)

Detects files where branch coverage falls below the configured threshold (default: 50%). Branch coverage measures whether both sides of every conditional (if/else, ternary, switch) have been exercised.

Severity: info (lighter than line coverage)


7. Coverage Drop (coverage-drop)

Detects files where coverage decreased compared to the baseline. A drop of more than 5% (default) generates a warning. This catches PRs that add untested code to previously well-tested files.

Severity Scaling

| Coverage Drop | Debt Points |
|---|---|
| > 20% | 3 |
| 10% - 20% | 2 |
| 5% - 10% | 1 |
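The drop scaling can be sketched the same way, assuming the default 5% drop threshold (coverageDropDebtPoints is an illustrative name):

```typescript
// Sketch of the coverage-drop scaling above: debt points by drop size.
// Drops at or below the threshold produce no finding.
function coverageDropDebtPoints(dropPct: number, threshold = 5): number {
  if (dropPct <= threshold) return 0;
  if (dropPct > 20) return 3;
  if (dropPct > 10) return 2;
  return 1; // between the threshold and 10%
}
```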

Violation Example

Coverage dropped 12.3% (85.0% -> 72.7%) -- exceeds 5% threshold

Suggestion: Add tests to restore coverage -- 12.3% drop likely from untested new code.

Scoring

Maintainability violations have lighter scoring than blocking categories:

scoring:
  complexity_point: 1          # Per point over threshold
  missing_tests: 3             # Per missing test file
  coverage_drop_per_pct: 2     # Per percentage point of coverage drop
  complexity_reduced: -1        # Credit for reducing complexity

Configuration

Maintainability thresholds are configurable:

# In radar.yml or rules.yml
gates:
  warn:
    - metric: complexity_increase
      operator: ">"
      value: 0
    - metric: duplication_percentage
      operator: ">"
      value: 5
    - metric: missing_test_files
      operator: ">"
      value: 0

Tip: While maintainability rules never block by default, you can configure gates to block on extreme values. For example, adding metric: complexity_increase, operator: ">", value: 20 to block_merge would block PRs that increase total complexity by more than 20 points.

Technical Debt Radar Documentation