Audit Your TypeScript Tooling: Metrics to Prove a Tool Is Worth Keeping
Is that tool earning its keep? How to prove it with numbers (not opinions)
You probably recognize the pain: CI runs that balloon after adding a new bundler plugin, a linter rule that turns into a weekly argument on Slack, or a commercial tool that charges every month while only a couple of teams actually use it. Opinions and gut feelings won’t convince your CTO — you need metrics.
This guide (2026 edition) gives you a reproducible, data-driven playbook to audit TypeScript tooling. I’ll show the precise metrics to collect, ready-to-run scripts, and a scoring system you can apply across repos so teams can prune safely.
Why run a tooling audit in 2026?
Tooling churn accelerated in 2023–2025: rapid adoption of esbuild, swc, and monorepo-first toolchains; rising usage of AI-powered assistants; and more sophisticated CI costing models. By late 2025 many engineering teams noticed cloud CI bills rising and developer cycles slowing; 2026 is the year to take control.
A tooling audit is not about removing everything. It’s about retaining the tools that deliver measurable value along the dimensions your org cares about: speed, type-safety, developer productivity, and cost.
Metrics that matter (and how to collect them)
Use this checklist to gather evidence. For each metric I include what it shows, why it matters, and copy-pasteable scripts you can run in most TypeScript repos.
1) Usage (actual code adoption)
What to measure: percentage of TypeScript source files that import or reference a tool. If only a few files use it, the tool may be a candidate for removal or replacement.
Why it matters: Underused tools still cost money (licenses, CI, cognitive load). Usage is the first filter.
Quick shell (ripgrep) check — counts files importing a package and computes percent of *.ts|*.tsx:
total_files=$(rg --files -g '*.{ts,tsx}' | wc -l)
matches=$(rg -l "from ['\"]my-tool['\"]|require\(['\"]my-tool['\"]\)" -g '*.{ts,tsx}' | wc -l)
printf "Total TS files: %s\nFiles importing my-tool: %s\nUsage: %.2f%%\n" "$total_files" "$matches" "$(echo "scale=4; $matches/$total_files*100" | bc)"
Node script (more resilient, supports ESM/CJS and scoped packages):
/* usage-check.js */
const glob = require('glob'); // glob v8 API; v9+ exports { globSync } instead
const fs = require('fs');
const pkg = process.argv[2] || 'my-tool';
// Escape regex metacharacters so scoped packages (e.g. @scope/pkg) work
const esc = pkg.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
const re = new RegExp(`from ['"]${esc}['"]|require\\(['"]${esc}['"]\\)`);
const files = glob.sync('**/*.{ts,tsx,js,jsx}', { ignore: 'node_modules/**' });
let uses = 0;
for (const f of files) {
  if (re.test(fs.readFileSync(f, 'utf8'))) uses++;
}
console.log(`files: ${files.length}, uses: ${uses}, pct: ${((uses / files.length) * 100).toFixed(2)}%`);
2) CI time and cost impact
What to measure: average wall-clock time for your CI workflows, and the delta introduced by the tool (plugin step, extra build). Translate minutes into dollars if you pay for CI minutes or runner time.
Why it matters: A single slow tool can multiply into dozens of hours across a team. Faster pipelines = faster feedback loop = fewer broken merges.
Collect from GitHub Actions API (Node example) — compute mean workflow duration (minutes):
/* ci-duration.js */
const { Octokit } = require('@octokit/rest');
const octokit = new Octokit({ auth: process.env.GH_TOKEN });
(async () => {
  const owner = process.env.GH_OWNER, repo = process.env.GH_REPO, workflow_id = process.env.WORKFLOW_ID;
  const runs = await octokit.paginate(octokit.actions.listWorkflowRuns, { owner, repo, workflow_id, per_page: 100 });
  const durations = runs
    .filter(r => r.run_started_at)
    .map(r => (new Date(r.updated_at) - new Date(r.run_started_at)) / 60000);
  const mean = durations.reduce((a, b) => a + b, 0) / (durations.length || 1);
  console.log(`runs: ${runs.length}, mean minutes: ${mean.toFixed(2)}`);
})();
Measure step-level times: add tiny steps before and after the tool's step in the workflow to print timestamps. Example GitHub Actions snippet:
- name: Before tool
  run: date +%s
- name: Run tool (webpack/esbuild/linter)
  run: npm run build:with-tool
- name: After tool
  run: date +%s
Subtract timestamps and aggregate across runs to see average step cost.
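If you store those epoch timestamps (e.g. as a CSV artifact), averaging the deltas is a few lines of Node. A sketch, assuming one `before,after` pair per line — the file format here is our own convention, not something GitHub produces:

```javascript
// step-times.js — average the (before, after) epoch-second pairs
// collected by the workflow steps above. Assumed input format:
// one "before,after" pair per line, e.g. "1767225600,1767225745".
function meanStepSeconds(lines) {
  const deltas = lines
    .map(l => l.split(',').map(Number))
    .filter(([a, b]) => Number.isFinite(a) && Number.isFinite(b))
    .map(([a, b]) => b - a);
  if (deltas.length === 0) return 0;
  return deltas.reduce((s, d) => s + d, 0) / deltas.length;
}

// Example: three runs of the tool step
console.log(meanStepSeconds(['100,160', '100,140', '100,180'])); // → 60
```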
3) PR friction (time-to-merge, review churn)
What to measure: median time from PR open to merge, mean number of review cycles, and number of PRs blocked by tooling errors (e.g., lint fails in CI). Use these to quantify developer drag.
Why it matters: High friction slows delivery and is often where teams are willing to remove or relax strict tooling.
Octokit script (time-to-merge and review counts):
/* pr-metrics.js */
const { Octokit } = require('@octokit/rest');
const octokit = new Octokit({ auth: process.env.GH_TOKEN });
(async () => {
  const { data: prs } = await octokit.pulls.list({ owner: process.env.GH_OWNER, repo: process.env.GH_REPO, state: 'closed', per_page: 100 });
  const merged = prs.filter(p => p.merged_at);
  const durations = merged.map(p => (new Date(p.merged_at) - new Date(p.created_at)) / 3600000); // hours
  const avgHours = durations.reduce((a, b) => a + b, 0) / (durations.length || 1);
  console.log(`merged PRs: ${merged.length}, avg hours to merge: ${avgHours.toFixed(1)}`);
  // For review counts, iterate the PRs and call octokit.pulls.listReviews
})();
Add a label in CI when a PR fails tooling so you can count "tooling-failed" PRs. This creates an auditable signal of PRs affected by the tool.
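Once that label exists, counting affected PRs is a simple filter over the PR objects the API returns. A sketch, assuming the `tooling-failed` label name from above:

```javascript
// count-tooling-failed.js — given PR objects as returned by the GitHub
// API (each has a `labels: [{ name }]` array), count how many carry the
// label our CI applies. "tooling-failed" is our own naming convention.
function countLabeled(prs, label = 'tooling-failed') {
  return prs.filter(p => (p.labels || []).some(l => l.name === label)).length;
}

// Example with the shape the API returns:
const sample = [
  { number: 1, labels: [{ name: 'tooling-failed' }] },
  { number: 2, labels: [{ name: 'bug' }] },
  { number: 3, labels: [] },
];
console.log(countLabeled(sample)); // → 1
```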
4) Type-safety improvement (type coverage & error trends)
What to measure: percentage of code covered by types (type coverage), number of type errors caught at build time, and trend of types preventing runtime issues.
Why it matters: If a tool claims to improve type-safety (a static analysis plugin, stricter tsconfig), you must prove it reduces type errors or increases type coverage.
Use the type-coverage tool (npm: type-coverage). It computes a percent score that’s easy to trend:
npx type-coverage -p tsconfig.json --detail
Run this on your baseline branch and compare to feature branches or after enabling a tool. For historical trend, automate it in CI and store the value as a build artifact or comment on PRs.
Track compiler errors over time — collect tsc errors from CI logs and count unique diagnostics. A steady drop in compiler errors after enabling a tool is hard evidence.
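Counting diagnostics from a tsc log is a one-regex job, since tsc emits lines like `src/app.ts(10,5): error TS2345: ...`. A minimal sketch:

```javascript
// tsc-error-trend.js — count tsc diagnostics per error code from a CI log.
// tsc lines look like: "src/app.ts(10,5): error TS2345: Argument of type ...".
function countDiagnostics(log) {
  const counts = {};
  for (const m of log.matchAll(/error (TS\d+):/g)) {
    counts[m[1]] = (counts[m[1]] || 0) + 1;
  }
  return counts;
}

const log = [
  'src/app.ts(10,5): error TS2345: Argument of type ...',
  "src/util.ts(3,1): error TS2322: Type 'string' is not assignable ...",
  'src/app.ts(22,9): error TS2345: Argument of type ...',
].join('\n');
console.log(countDiagnostics(log)); // → { TS2345: 2, TS2322: 1 }
```

Run it against each CI build's log and store the totals per commit; the trend line is the evidence.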
5) Linting noise and preventable warnings
What to measure: rule-by-rule ESLint error/warning counts and how often rules cause CI failure or PR block.
Why it matters: A linter is only useful if its rules provide signal, not noise. Too many noisy rules increase PR friction.
Export ESLint JSON and aggregate per-rule:
npx eslint "src/**/*.{ts,tsx}" -f json -o eslint-report.json
node -e "const r=require('./eslint-report.json'); const m={}; r.forEach(f=>f.messages.forEach(msg=>{m[msg.ruleId]=(m[msg.ruleId]||0)+1})); console.log(m)"
Remove or relax rules where the false-positive rate is high and the educational value is low.
6) Build output (bundle size & runtime cost)
What to measure: bundle size, critical resource size (first load), and how a tool changes those numbers.
Why it matters: Savings in CI minutes are great, but shipping larger bundles to users can cost conversions and customer satisfaction.
Quick comparison: build with and without the tool and diff the dist sizes:
npm run build:with-tool && du -sk dist | cut -f1 > with.txt
rm -rf dist && npm run build:without-tool && du -sk dist | cut -f1 > without.txt
paste with.txt without.txt   # sizes in KiB: with-tool vs without-tool
Use source-map-explorer or webpack-bundle-analyzer for deeper insights.
7) Maintenance and security costs
What to measure: how often the tool requires updates (churn), how many dependabot alerts, and the cognitive cost of upgrading major versions.
Why it matters: A low-usage tool that constantly needs version surgery is a maintenance tax.
Count commits that touch a package or number of dependabot PRs over 6–12 months as part of your audit.
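One way to quantify that churn: dump the dates of commits touching the dependency manifest with `git log --format=%as -- package.json` and bucket them per month. A sketch of the aggregation (the sample dates are made up):

```javascript
// dep-churn.js — bucket commits touching a dependency file by month.
// Input: output of `git log --format=%as -- package.json`,
// i.e. one ISO date (YYYY-MM-DD) per line.
function commitsPerMonth(dates) {
  const counts = {};
  for (const d of dates.split('\n').filter(Boolean)) {
    const month = d.slice(0, 7); // "YYYY-MM"
    counts[month] = (counts[month] || 0) + 1;
  }
  return counts;
}

const sample = '2025-11-03\n2025-11-21\n2025-12-02\n';
console.log(commitsPerMonth(sample)); // → { '2025-11': 2, '2025-12': 1 }
```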
Putting it together: scoring and decision matrix
Metrics are useful only when combined into a decision framework. Here’s a simple scoring system you can implement in a script or spreadsheet.
- Normalize each metric to 0–100 (higher is better). Example: usage% maps linearly; CI delta maps negatively (faster yields higher score).
- Weight metrics by company priorities (example weights below).
- Compute weighted sum to produce a score; set thresholds for Keep / Replace / Remove.
Example weights (adjust to taste):
- Usage: 25%
- CI time impact: 20%
- PR friction: 20%
- Type-safety improvement: 20%
- Maintenance & security: 10%
- ROI/cost: qualitative override
Decision thresholds (example): score > 70 Keep, 40–70 Improve/Replace, < 40 Remove or defer.
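The weights and thresholds above are easy to encode. A sketch, assuming the metrics have already been normalized to 0–100 (note the example weights sum to 95%; ROI stays a qualitative override, not a weighted term):

```javascript
// tool-score.js — weighted score from normalized (0–100) metrics,
// using the example weights above. How you normalize raw values into
// 0–100 is up to you; this assumes it has already been done.
const WEIGHTS = { usage: 0.25, ciTime: 0.20, prFriction: 0.20, typeSafety: 0.20, maintenance: 0.10 };

function score(metrics) {
  return Object.entries(WEIGHTS)
    .reduce((sum, [k, w]) => sum + w * (metrics[k] ?? 0), 0);
}

function decision(s) {
  if (s > 70) return 'Keep';
  if (s >= 40) return 'Improve/Replace';
  return 'Remove or defer';
}

const metrics = { usage: 80, ciTime: 90, prFriction: 60, typeSafety: 70, maintenance: 50 };
const s = score(metrics); // 0.25*80 + 0.2*90 + 0.2*60 + 0.2*70 + 0.1*50 = 69
console.log(s.toFixed(1), decision(s)); // → 69.0 Improve/Replace
```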
Sample audit runbook (a 2-week sprint)
- Week 0: Identify all candidate tools (bundlers, linters, formatters, plugins, commercial SaaS)
- Week 1: Run the scripts in this article to collect raw metrics across repos
- Week 1: Create dashboard (CSV/Google Sheet) and compute scores
- Week 2: Triage – schedule experiments (e.g., replace webpack build with esbuild for a smaller pipeline subset)
- Week 2+: Run canary changes, collect before/after metrics, and make decisions
Real-world example (hypothetical)
A mid-sized SaaS team in late 2025 replaced a heavy webpack+ts-loader build step in their CI with an esbuild-based pipeline for test builds. The data showed:
- Average CI worker time for build step dropped from 12 minutes to 2.5 minutes (–9.5 minutes).
- PR median time-to-merge dropped 8% (faster feedback loop).
- Type coverage remained stable; no lost errors were observed in production over a 3-month window.
- Net savings: ~250 CI minutes per day across multiple branches; estimated $1,500/month in CI credits saved (depends on provider).
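The back-of-envelope arithmetic behind those numbers looks like this; the builds-per-day count and per-minute rate are illustrative assumptions, not figures from the case study:

```javascript
// ci-savings.js — rough CI savings estimate. The 26 builds/day and
// $0.20/min rate are illustrative assumptions, not the team's actuals.
const minutesSavedPerBuild = 12 - 2.5;        // 9.5 minutes per build
const buildsPerDay = 26;                      // assumed, across branches
const minutesSavedPerDay = minutesSavedPerBuild * buildsPerDay; // 247 ≈ 250
const dollarsPerMinute = 0.20;                // assumed large-runner rate
const monthlySavings = Math.round(minutesSavedPerDay * 30 * dollarsPerMinute);
console.log(minutesSavedPerDay, monthlySavings); // → 247 1482
```

Swap in your own provider's rate and build frequency; the point is that the estimate is reproducible, not hand-waved.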
That level of evidence made the decision trivial — the team removed the old build and saved money and developer time.
Practical scripts you can copy into your repo
Below are compact versions you can paste into a tools/ folder and run in CI or locally.
1) measure-build-times.sh
#!/usr/bin/env bash
set -e
OUT=build-times.csv
echo "tool,run,seconds" > "$OUT"
for tool in with-tool without-tool; do
  for i in $(seq 1 5); do
    start=$(date +%s)
    npm run "build:$tool"
    end=$(date +%s)
    echo "$tool,$i,$((end-start))" >> "$OUT"
  done
done
2) eslint-rule-agg.js
/* eslint-rule-agg.js */
const fs = require('fs');
const report = JSON.parse(fs.readFileSync('eslint-report.json', 'utf8'));
const counts = {};
report.forEach(f => f.messages.forEach(m => { counts[m.ruleId] = (counts[m.ruleId] || 0) + 1; }));
console.log(JSON.stringify(counts, null, 2));
How to interpret noisy or ambiguous results
Not every metric will be decisive. Here are common gotchas and how to handle them:
- Low usage but high runtime benefit: A tool used by few files might be critical for a high-traffic path — check runtime telemetry before removing.
- Short-term flakiness: CI slowdowns might be transient; analyze 30–90 day windows to avoid knee-jerk removals.
- Type-safety regressions: If removing a tool reduces type coverage, consider replacing it with a lighter alternative instead of removing typing checks entirely.
2026 trends to factor into your decisions
A few ecosystem shifts changed the calculus for tooling audits in 2026:
- Faster compilers and bundlers: esbuild and swc matured into first-class builders for many workflows. If your tooling causes long CI times, trying these alternatives is a low-effort experiment with high potential ROI.
- Bundlerless and edge-first patterns: Some teams moved critical logic to edge runtimes and smaller bundles — impacting which tools make sense.
- Cloud CI cost scrutiny: Providers made minute-based billing and usage dashboards more visible, meaning optimizing CI time is an immediate financial win.
- AI-assisted developers: Copilot-style assistants reduce some friction but increase the importance of strict, predictable type checks — making tradeoffs between speed and type-safety more nuanced.
"Make the audit repeatable. Collect metrics continuously, not as a one-off exercise." — Best practice
Actionable takeaways
- Start with Usage and CI time — these two metrics often reveal the cheapest wins.
- Automate collection: add scripts to CI to record build times, type-coverage, and ESLint reports as artifacts.
- Run a small canary: change one repo or branch, measure before/after, and scale the change if metrics improve.
- Use a weighted scoring matrix and make decisions transparent to stakeholders (Engineering, Product, Security, Finance).
Next steps & call-to-action
Ready to run an audit? Copy the scripts into your repo, run them across your active repos, and assemble the CSVs into a dashboard. If you want a jumpstart, download our lightweight audit workbook (spreadsheet + scripts) and use it to run a 2-week tooling review in your org.
Do one measurable audit this quarter. The cost of inaction is silent: wasted CI minutes, frustrated engineers, and subtle production regressions. Start with usage and CI time today — those metrics will buy you credibility to make bigger changes.
Want the scripts as a starter kit? Fork the repo, run the tools in a CI job, and share the results with your team. If you’d like, drop the anonymized CSVs into a shared doc and I’ll give feedback on the thresholds you should use.