TypeScript Meets WCET: Building Tooling to Integrate Timing Analysis into JS/TS CI
Build TypeScript tooling to parse WCET outputs, validate deadlines, and surface timing issues in CI — practical patterns and code for 2026.
Your builds pass but timing still fails in production — here’s how to stop that
Teams shipping safety-critical or real-time systems increasingly face a familiar, high-stakes problem: functional tests are green, static checks pass, but timing violations appear only in integration or in-field. After Vector’s January 2026 acquisition of RocqStat, timing analysis and WCET estimation are getting more attention — and you should bring those results into your CI/CD early and often. This article shows how to build TypeScript-based tooling that wraps WCET outputs, validates deadlines, and surfaces timing issues in CI pipelines.
Why integrate WCET into CI now (2026 context)
Late 2025 and early 2026 saw consolidation in the timing-analysis tooling space. Vector’s acquisition of RocqStat signaled that vendors are unifying static timing analysis (WCET) with software verification. That trend matters because:
- Regulatory pressure: Automotive (ISO 26262), avionics (DO-178C), and industrial control standards increasingly expect measurable timing evidence.
- Shift-left verification: Teams want timing regressions detected before hardware or HIL runs.
- Toolchain consolidation: Integrated toolchains (e.g., VectorCAST + RocqStat) make it easier to automate WCET outputs, but you still need glue code for CI and developer workflows.
“Timing safety is becoming a critical part of code testing workflows.” — Vector statement following the RocqStat acquisition, Jan 2026
High-level architecture: TypeScript wrapper for WCET analysis outputs
Design your tooling as a set of composable components so it’s easy to run locally and in CI. At a minimum include (a type-level sketch follows this list):
- Input parsers — parse WCET outputs (XML/JSON/CSV or vendor formats).
- Mappers — map timing measurements or estimates to source artifacts (functions, tasks, lines).
- Validators — compare WCET values to declared deadlines and thresholds.
- Reporters — emit SARIF/JSON, create PR annotations, and fail the pipeline when configured.
- Baselining — capture historical WCET baselines and allow diffs to avoid noise on large legacy products.
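As a rough type-level sketch of how these pieces fit together (interface and field names here are illustrative, not from any vendor SDK):

// Illustrative pipeline contracts; adapt names and shapes to your WCET producer.
export interface WcetEntry {
  id: string;
  functionName: string;
  wcetUs: number;          // worst-case execution time in microseconds
  sourceLocation?: string; // "file:line" once mapped
}
export interface ValidationResult {
  taskId: string;
  status: 'pass' | 'warn' | 'fail';
  percentOfDeadline: number;
}
export interface Parser    { parse(filePath: string): Promise<WcetEntry[]>; }
export interface Mapper    { map(entries: WcetEntry[]): Promise<WcetEntry[]>; }
export interface Validator { validate(entries: WcetEntry[]): ValidationResult[]; }
export interface Reporter  { report(results: ValidationResult[]): Promise<void>; }

The CLI then becomes a thin pipeline: parse, map, validate, report, with baselining applied between the validate and report stages.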
Why TypeScript?
TypeScript is an excellent choice for this glue layer: it has strong typing for complex outputs, a mature ecosystem (XML parsers, SARIF libraries, HTTP clients), and excellent integration with modern CI ecosystems. The resulting tooling is also easy to test and maintain across teams with different backgrounds.
Practical setup: repo layout, tsconfig, and linters
Start with a small monorepo or single repo package that can be run as a CLI in pipelines. A recommended layout:
- /src — core code (parsers, mappers, validators, reporters)
- /cli — thin CLI wrapper and argument parsing
- /fixtures — sample WCET outputs for tests
- /test — unit & integration tests
- /ci — pipeline helpers and templates
tsconfig recommendations
{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "lib": ["ES2020"],
    "strict": true,
    "esModuleInterop": true,
    "forceConsistentCasingInFileNames": true,
    "outDir": "dist",
    "sourceMap": true
  }
}
Keep strict enabled so your parsers and validators are type-safe. Add ESLint and Prettier rules for consistency. Example: enable rules that catch unchecked external API results and ensure errors are handled.
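For example, a minimal .eslintrc.json along these lines (the rule selection is a suggestion, tune it to your team's conventions) flags unawaited promises and unsafe use of untyped parser output:

{
  "parser": "@typescript-eslint/parser",
  "parserOptions": { "project": "./tsconfig.json" },
  "plugins": ["@typescript-eslint"],
  "rules": {
    "@typescript-eslint/no-floating-promises": "error",
    "@typescript-eslint/no-misused-promises": "error",
    "@typescript-eslint/no-unsafe-assignment": "warn",
    "@typescript-eslint/no-explicit-any": "warn"
  }
}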
Parsing WCET outputs: patterns and TypeScript example
WCET tools emit different formats. RocqStat and other professional tools typically provide XML or JSON exports. Simpler or legacy tools may produce CSV or plain text. Plan for two modes:
- Record mode — parse a single WCET artifact generated by the toolchain for one build.
- Stream mode — use streaming parsers that extract timing events incrementally to avoid high memory usage.
Example: parsing a simplified XML WCET export into typed objects using fast-xml-parser. This code shows the core idea — adapt to your vendor format.
import fs from 'fs/promises';
import { XMLParser } from 'fast-xml-parser';

interface WcetEntry {
  id: string;
  functionName: string;
  wcetUs: number; // microseconds
  path?: string[]; // optional call path
}

export async function parseWcetXml(filePath: string): Promise<WcetEntry[]> {
  const raw = await fs.readFile(filePath, 'utf8');
  const parser = new XMLParser({ ignoreAttributes: false, attributeNamePrefix: '@_' });
  const doc = parser.parse(raw);
  // Adjust this mapping to match your tool's XML schema.
  // fast-xml-parser returns a single object (not an array) when only one <Function> element exists.
  const rawEntries = doc?.WcetReport?.Function ?? [];
  const entries = Array.isArray(rawEntries) ? rawEntries : [rawEntries];
  return entries.map((e: any) => ({
    id: e['@_id'],
    functionName: e['@_name'],
    wcetUs: Number(e['@_wcet_us']),
    path: e.CallPath?.Segment?.map((s: any) => s['@_func']) || []
  }));
}
For huge files, use a SAX-like parser (e.g., sax) to process events without loading everything into memory.
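A minimal streaming sketch using the sax package might look like the following; the element and attribute names mirror the simplified schema above and will differ for your vendor, and the './wcet-types' module path is illustrative:

import fs from 'fs';
import sax from 'sax'; // install @types/sax for typings
import type { WcetEntry } from './wcet-types'; // the interface shown above, assumed to live in a shared module

export function parseWcetXmlStream(filePath: string): Promise<WcetEntry[]> {
  return new Promise((resolve, reject) => {
    const entries: WcetEntry[] = [];
    const stream = sax.createStream(true); // strict mode
    stream.on('opentag', (node) => {
      // Collect one entry per <Function> element without holding the whole document in memory.
      if (node.name === 'Function') {
        const attrs = node.attributes as Record<string, string>;
        entries.push({ id: attrs.id, functionName: attrs.name, wcetUs: Number(attrs.wcet_us) });
      }
    });
    stream.on('error', reject);
    stream.on('end', () => resolve(entries));
    fs.createReadStream(filePath).pipe(stream);
  });
}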
Mapping timing results to source code
Timing numbers are only useful when they map to the code developers change. Mapping strategies depend on the build target:
- Native embedded C/C++ — map addresses to source using symbol maps and DWARF (addr2line) or vendor-provided map files from the linker.
- Compiled JS/TS runtimes or WASM — use source maps to translate compiled offsets back to TS source lines.
- Vendor tool outputs — some tools already include function names and file paths; prefer that when available.
Example approach when you have an address (0x400abc): run a helper that calls the cross-toolchain addr2line or the vendor symbolizer and returns file:line. Wrap that in TypeScript and cache results to avoid repeating expensive symbol lookups during CI.
import { execFile } from 'child_process';
import { promisify } from 'util';

const execFileP = promisify(execFile);
const symbolCache = new Map<string, string | null>(); // avoid repeated addr2line calls within one run

export async function addrToSource(executable: string, addr: string): Promise<string | null> {
  const cacheKey = `${executable}:${addr}`;
  const cached = symbolCache.get(cacheKey);
  if (cached !== undefined) return cached;
  try {
    const { stdout } = await execFileP('arm-none-eabi-addr2line', ['-e', executable, addr]);
    const result = stdout.trim(); // e.g. src/motor.cpp:142
    symbolCache.set(cacheKey, result);
    return result;
  } catch {
    symbolCache.set(cacheKey, null);
    return null;
  }
}
Validating deadlines: rules engine and thresholds
Design your validator with these concerns in mind:
- Support per-task and per-path deadlines.
- Allow hysteresis: non-blocking warnings vs blocking failures.
- Support tolerance windows and historical baselines.
- Support aggregation (sum WCETs on a scheduling path or per-period workloads).
Example validation algorithm for a single task:
- Collect WCET entries for the task’s entry functions.
- Compute worst-case sum or path WCET as appropriate for your scheduling model.
- Compare computed WCET against declared deadline.
- Emit result: passed/warn/failed with explanatory metrics.
interface TaskDeadline {
  taskId: string;
  deadlineUs: number;
  failThresholdPercent?: number; // e.g. 100 (fail at >=100%)
  warnThresholdPercent?: number; // e.g. 90
}

export function validateTask(task: TaskDeadline, wcetUs: number) {
  const percent = (wcetUs / task.deadlineUs) * 100;
  if (percent >= (task.failThresholdPercent ?? 100)) return { status: 'fail', percent };
  if (percent >= (task.warnThresholdPercent ?? 90)) return { status: 'warn', percent };
  return { status: 'pass', percent };
}
Support more complex schedulability tests (response-time analysis) if your system uses preemptive scheduling; there are libraries you can integrate or you can implement RTA formulas in TS for your scheduler.
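As one example, the classic response-time iteration for fixed-priority preemptive scheduling fits in a few lines of TypeScript; this is a sketch for independent periodic tasks that ignores blocking, jitter, and release offsets:

interface RtaTask {
  wcetUs: number;   // C_i: worst-case execution time
  periodUs: number; // T_i: activation period
}

// Worst-case response time of `task` given interference from higher-priority tasks,
// or null if the iteration exceeds the deadline (unschedulable at this priority level).
export function responseTime(task: RtaTask, higherPriority: RtaTask[], deadlineUs: number): number | null {
  let r = task.wcetUs;
  for (;;) {
    const interference = higherPriority.reduce(
      (sum, hp) => sum + Math.ceil(r / hp.periodUs) * hp.wcetUs, 0);
    const next = task.wcetUs + interference;
    if (next > deadlineUs) return null; // no fixed point within the deadline
    if (next === r) return r;           // converged
    r = next;
  }
}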
CI integration patterns: GitHub Actions, GitLab, and SARIF
Decide how failures should affect pipelines: blocker (fail CI), gate (allow but flag), or advisory (comment). Two practical ways to surface timing issues:
- Annotations and checks — use GitHub Checks API or GitLab job annotations to attach failures directly to pull requests and files.
- SARIF and static analysis dashboards — emit SARIF so tools like GitHub Code Scanning or SonarQube can consume timing issues as first-class findings.
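A SARIF 2.1.0 payload for a timing finding is small enough to assemble without a library; the rule id, driver name, and message wording below are illustrative choices, not a standardized timing profile:

interface TimingFinding {
  file: string;    // e.g. 'src/motor.cpp'
  line: number;
  message: string; // e.g. 'WCET 950us uses 95% of the 1000us deadline'
  level: 'warning' | 'error';
}

export function toSarif(findings: TimingFinding[]): string {
  return JSON.stringify({
    version: '2.1.0',
    $schema: 'https://json.schemastore.org/sarif-2.1.0.json',
    runs: [{
      tool: { driver: { name: 'wcet-checker', rules: [{ id: 'wcet/deadline-usage' }] } },
      results: findings.map((f) => ({
        ruleId: 'wcet/deadline-usage',
        level: f.level,
        message: { text: f.message },
        locations: [{
          physicalLocation: {
            artifactLocation: { uri: f.file },
            region: { startLine: f.line }
          }
        }]
      }))
    }]
  }, null, 2);
}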
GitHub Actions example
name: WCET Check
on: [pull_request]
jobs:
  wcet:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Node
        uses: actions/setup-node@v4
        with:
          node-version: '18'
      - name: Build and run tests
        run: |
          npm ci
          npm run build
          # run timing analysis externally (vendor tool) -> produces wcet-report.xml
      - name: Run wcet-checker
        run: |
          node ./dist/cli.js --report wcet-report.xml --deadline manifest/deadlines.json --sarif out/wcet.sarif
      - name: Upload SARIF
        uses: github/codeql-action/upload-sarif@v2
        with:
          sarif_file: out/wcet.sarif
Use the upload-sarif action to display findings in the Checks tab. Alternatively, call the GitHub PR comment API or create annotations via the Checks API to highlight specific files/lines.
Reporting, dashboards and alerting
Provide multiple consumer views:
- Machine-readable — SARIF and JSON summaries for automated dashboards.
- PR-level — single-line annotations and summary comments for reviewers.
- Team alerts — post to Slack/Teams on CI failures with links to the artifact and reproduction steps.
Include a JSON summary with aggregated metrics (max WCET, % of deadline used, top offenders) that your monitoring stack can ingest to produce historical charts and trends.
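One possible shape for that summary, with illustrative field names:

// Illustrative per-build summary; emit one of these alongside the SARIF report.
export interface WcetSummary {
  commit: string;
  generatedAt: string;             // ISO 8601 timestamp
  maxWcetUs: number;               // worst single-function WCET in this build
  worstDeadlineUsePercent: number; // highest share of any task deadline consumed
  topOffenders: Array<{
    functionName: string;
    wcetUs: number;
    deadlineUsePercent: number;
  }>;
}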
Testing, baselines, and false positives
WCET tools are conservative. Without baselining, you’ll get many warnings. Adopt these practices to keep noise low:
- Baseline artifacts — capture a trusted WCET run and mark it as baseline. Only block PRs for regressions beyond a configured delta (see the sketch after this list).
- Regression diffs — report delta vs baseline and let reviewers accept or reject per-PR changes.
- Unit and integration tests — unit-test your parser/validator using realistic fixtures (store in /fixtures in repo).
- Staged rollout — start with advisory mode and gradually move to blocking mode once confidence grows.
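A baseline comparison can stay small: hold per-function WCETs from a trusted run and flag only regressions beyond a configured delta. The 5% default below is arbitrary; pick a value that matches your tool's measurement noise.

interface BaselineDiff {
  functionName: string;
  baselineUs: number;
  currentUs: number;
  deltaPercent: number;
}

// Compare current WCETs against a stored baseline; report only regressions that
// exceed maxDeltaPercent so conservative-analysis noise stays out of PRs.
export function diffAgainstBaseline(
  baseline: Map<string, number>, // functionName -> baseline WCET in microseconds
  current: Map<string, number>,
  maxDeltaPercent = 5
): BaselineDiff[] {
  const regressions: BaselineDiff[] = [];
  for (const [functionName, currentUs] of current) {
    const baselineUs = baseline.get(functionName);
    if (baselineUs === undefined) continue; // new function: decide separately how to treat it
    const deltaPercent = ((currentUs - baselineUs) / baselineUs) * 100;
    if (deltaPercent > maxDeltaPercent) {
      regressions.push({ functionName, baselineUs, currentUs, deltaPercent });
    }
  }
  return regressions;
}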
Performance and CI resource planning
Large WCET or trace files can be CPU and IO heavy. Optimize for CI:
- Run heavy vendor analyses on dedicated runners or a gated pipeline stage (e.g., nightly builds or merge-time only).
- Cache symbolization results and parsed outputs to avoid repeated expensive work.
- Use streaming parsers for large traces and limit artifact retention in CI to essential artifacts.
Security and isolation
WCET toolchains often run compilers and binary analyzers. When invoking native tools from your TypeScript wrapper:
- Run them in isolated containers or dedicated runners.
- Sanitize inputs and avoid shell interpolation to prevent command injection.
- Control permissions for CI service accounts that upload artifacts or post to PRs.
2026 trends and future-proofing your tooling
Looking forward in 2026, expect these trends to affect your work:
- Vendor integration — VectorCAST + RocqStat unification means more standardized outputs; plan for vendor-provided JSON exports and APIs.
- SARIF for timing — SARIF adoption for dynamic and timing data will grow. Consider contributing timing extension proposals if your tooling needs aren’t covered.
- Cloud verification — cloud-hosted timing and formal verification services will enable scale but need careful reproducibility practices.
- ML-assisted WCET — expect hybrid statistical/static approaches. Your tooling should accept multiple result types and label result provenance.
Actionable checklist: ship a WCET CI integration in 6 weeks
- Inventory: capture current WCET tool outputs and formats (XML/JSON/map files).
- Scaffold: create a TypeScript repo with the layout described above and strict typing enabled.
- Parser: implement and test parsers for each output type using fixtures.
- Mapper: implement address-to-source and/or source-map resolution; cache results.
- Validator: implement per-task validation and enable warn/fail modes.
- Reporter: emit SARIF and PR annotations; add GitHub Action/GitLab job templates.
- Pilot: run in advisory mode on a small team; gather feedback and build baseline runs.
- Rollout: switch to blocking policies once false positives are under control.
Real-world example: mapping a RocqStat export into a PR annotation
In a typical workflow after Vector/RocqStat integration:
- Vendor tool produces wcet.json with per-function WCET estimates.
- Your TypeScript tool parses wcet.json, maps functions to source lines via symbol addresses or provided paths, and checks against manifest/deadlines.json.
- Tool emits SARIF findings and creates a PR annotation for any function that exceeds warn/fail thresholds.
The benefit: reviewers see timing issues inline in the PR and developers can fix algorithmic regressions or scheduling assumptions before merge.
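For reference, the two inputs in this workflow might look roughly like the following; both shapes are illustrative and the vendor export schema will differ in practice.

wcet.json (vendor export, simplified):
{
  "functions": [
    { "name": "motor_control_step", "file": "src/motor.cpp", "line": 142, "wcet_us": 950 }
  ]
}

manifest/deadlines.json (your declared deadlines, matching the TaskDeadline shape above; entryFunction links a task to its WCET entries):
{
  "tasks": [
    { "taskId": "motor_control", "entryFunction": "motor_control_step", "deadlineUs": 1000, "warnThresholdPercent": 90 }
  ]
}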
Closing: integrate timing like any other quality gate
WCET and timing analysis are becoming first-class verification artifacts in 2026. Vector’s acquisition of RocqStat is accelerating that movement. By building a TypeScript-based wrapper that parses vendor outputs, maps results to source, validates deadlines, and reports via SARIF/CI annotations, teams can detect timing regressions earlier and reduce expensive late-stage debugging.
Key takeaways
- Shift timing analysis left: run WCET validation in CI, not just in HIL sessions.
- Use TypeScript for safe, maintainable glue code that integrates parsers, symbolizers, and reporters.
- Emit SARIF/annotations so reviewers get actionable findings in PRs.
- Start advisory, then gate: baseline results to minimize noise before enforcing failures.
Call to action
Ready to integrate timing analysis into your CI? Start by cloning a sample repo (link to an example template in your org), add a minimal parser for your WCET output, and wire up SARIF reporting. If you want a ready-made starting point, consider a 2-week spike: I can provide a hands-on example repo with parser fixtures, GitHub Action templates, and SARIF reporters tailored to your WCET producer (RocqStat/VectorCAST or other). Reach out or fork the template and run it on your next PR.