Testing AWS Integrations in TypeScript Without Touching the Cloud: A Practical Local Emulation Playbook
Use an AWS emulator to run fast, realistic TypeScript integration tests for S3, DynamoDB, SQS, Secrets Manager, and event-driven workflows.
If you build TypeScript services that depend on AWS, the difference between “works on my machine” and “trusted in CI” usually comes down to one thing: realistic integration tests. A fast AWS emulator gives you a middle ground between fragile mocks and expensive cloud-based test stacks. For teams working with S3, DynamoDB, SQS, Secrets Manager, and event-driven workflows, local emulation can dramatically improve feedback loops while still preserving the shape of production code. This guide shows how to design that setup in a way that is fast, repeatable, and friendly to both local development and CI/CD testing, while still leaving room for persistence, Docker workflows, and SDK compatibility concerns. If you are also thinking about broader developer operating models, it helps to pair this with a strong system for documentation and modularity, like the principles in documentation, modular systems and open APIs, because test infrastructure only scales when the team can understand and extend it.
In practice, the goal is not to perfectly recreate AWS. The goal is to recreate the failure modes, request patterns, and service interactions your TypeScript code actually depends on. That means writing integration tests that validate object writes to S3, queue fan-out through SQS, item persistence in DynamoDB, secret lookup behavior, and event-driven orchestration. A good emulation workflow also gives you a disciplined way to evaluate local-vs-cloud tradeoffs, similar to how infrastructure teams compare deployment models in Cloud vs On-Prem for Clinical Analytics. You are choosing the system that best optimizes developer confidence per unit of time, not just the cheapest line item.
Why AWS emulation matters for TypeScript teams
Mocks are too shallow for real integration risk
Unit tests with stubs are useful, but they rarely surface the bugs that happen at service boundaries. A mocked S3 client won’t tell you whether your object keys are wrong, your body encoding is malformed, or your retry policy breaks when a request is delayed. A mock DynamoDB layer also cannot reveal whether your schema assumptions survive a real conditional write or whether your marshalling logic changes under different inputs. The more your app depends on cross-service sequencing, the less useful pure mocks become.
This is where local service emulation earns its keep. By running real SDK calls against an emulator, your tests verify serialization, request construction, idempotency logic, and error handling with far more fidelity. For teams used to fast experimentation, it feels closer to using a realistic prototype than a sketch on paper, which is why the mindset overlaps with the advice in Prototype Fast for New Form Factors. The difference is that your prototype is now an executable contract for your AWS integrations.
CI confidence depends on repeatability
CI/CD testing fails when test fixtures drift, credentials expire, or external systems inject noise. A local AWS emulator removes authentication complexity and gives you a deterministic environment you can spin up in GitHub Actions, GitLab CI, or any container-based pipeline. That does not just save time; it also reduces the category of outages where a build is red because someone rotated a secret or because a shared dev account hit a service limit. In teams that care about reliable release trains, repeatability is as valuable as speed.
There is also an organizational payoff. If every engineer can start the same test stack with Docker and run the same integration suite, you dramatically reduce “works on one laptop” friction. This mirrors the operational value of a consistent toolchain, much like the checklists in making content findable by LLMs emphasize structured signals and predictable formats. Predictability is the whole game, whether you are optimizing for search or for builds.
Local emulation shortens the edit-test-debug loop
One of the biggest productivity gains is speed. A well-tuned emulator launches quickly, keeps resource usage low, and lets developers iterate on code paths without waiting for a cloud stack to come alive. That matters most when you are debugging event-driven code, where one failure can ripple across multiple services. Instead of redeploying to a staging account for every hypothesis, you can run the test locally, inspect the behavior, and fix it in minutes.
That rapid loop especially helps teams building distributed systems in TypeScript, where compile-time checks are only half the story. Runtime behavior often depends on environment variables, AWS SDK configuration, payload shape, and small JSON edge cases. When your tests are local, you can probe those cases more frequently, just as product teams improve decisions by moving from assumptions to experiments in survey to sprint workflows.
What Kumo gives you as an AWS emulator
Core features that map well to development and CI
Kumo is a lightweight AWS service emulator written in Go. The most relevant characteristics for TypeScript teams are simple: no authentication required, a single binary, Docker support, lightweight startup, AWS SDK v2 compatibility, and optional data persistence via KUMO_DATA_DIR. That combination makes it practical for both laptops and ephemeral CI workers. The lack of auth is not a weakness in this context; it is what makes the emulator frictionless for test environments where the goal is controlled realism rather than production-grade perimeter defense.
For teams that care about deployment ergonomics, the single-binary model also matters. You can place the emulator in a container image, mount a data directory when needed, and run the same service in local development or CI. This is the kind of operational simplicity that often makes or breaks adoption, similar to how teams evaluate workflow tooling in A Developer’s Framework for Choosing Workflow Automation Tools. The best tool is the one your team can actually run every day.
Supported services that matter for serverless workflows
For this playbook, the most important services are S3, DynamoDB, SQS, Secrets Manager, Lambda, EventBridge, SNS, Step Functions, and API Gateway. Those services are enough to emulate many of the common patterns in modern TypeScript backends: event ingestion, queue processing, object storage, stateful processing, and secret-backed configuration. Kumo’s larger service surface can be useful later, but the basic value starts with these building blocks.
That broad coverage is important because integration failures rarely come from a single service in isolation. A file upload may trigger an event; the event may land in a queue; the worker may load a secret; then it may write to DynamoDB and store an artifact in S3. The orchestration matters more than any one API call. If your test stack can represent that sequence end to end, you can validate the business logic that actually matters.
Persistence is the difference between toy tests and useful tests
Optional persistence lets you simulate state across restarts. That is especially useful for CI jobs that intentionally restart a service, or for local scenarios where you want to verify recovery behavior. With KUMO_DATA_DIR, you can preserve objects, queue messages, table rows, and secret state so your test environment behaves more like a durable system. This becomes critical when you are testing idempotency, replay handling, or compensation logic.
Persistence also helps you catch test-order dependencies before they become production problems. If one test pollutes state and another test assumes a clean slate, the emulator will expose that hidden coupling. In that sense, local persistence is not just a feature; it is a diagnostic tool. It functions a bit like the cautionary framing in once-only data flow: the point is to reduce duplication and eliminate implicit state surprises.
Architecture of a practical local integration test stack
Use Docker Compose as your orchestration layer
For most teams, Docker Compose is the simplest way to package an emulator, your TypeScript app, and any helper services. Compose gives you a stable network, startup ordering, and a reproducible configuration file that lives in source control. It also makes it easy to add environment variables for endpoint overrides, region settings, and test-specific secrets. When all services are containers, the developer experience becomes much more predictable than a hand-run set of binaries.
In a typical setup, you would run Kumo in one container and point your AWS SDK clients at its local endpoint. Your application container or test runner can wait for the emulator health check, seed data, then execute integration tests. That design is conceptually similar to the discipline used when building realistic but contained systems, as seen in thermal camera deployment decisions: the environment should be realistic enough to matter, but constrained enough to be repeatable.
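A minimal Compose file for that setup might look like the sketch below. Everything specific to Kumo here is an assumption: the image name, the listening port, and the health-check details are placeholders you would replace with whatever your Kumo build or registry actually provides. The volume mount demonstrates the optional KUMO_DATA_DIR persistence described earlier; drop it for fully ephemeral runs.

```yaml
# Hypothetical docker-compose.yml; image name and port are placeholders.
services:
  kumo:
    image: kumo:latest            # assumption: your locally built or pulled Kumo image
    ports:
      - "4566:4566"               # assumption: the port your Kumo build listens on
    environment:
      KUMO_DATA_DIR: /data        # optional persistence across restarts
    volumes:
      - kumo-data:/data           # remove this mount for ephemeral CI runs

  tests:
    build: .                      # your TypeScript app / test-runner image
    depends_on:
      - kumo
    environment:
      AWS_ENDPOINT_URL: http://kumo:4566
      AWS_REGION: us-east-1
      AWS_ACCESS_KEY_ID: test     # dummy values; the emulator requires no auth
      AWS_SECRET_ACCESS_KEY: test
    command: ["npm", "run", "test:integration"]

volumes:
  kumo-data:
```

The point of keeping this file in source control is that `docker compose up` becomes the entire onboarding story for the test stack.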
Keep endpoint wiring in one place
TypeScript projects often become brittle when every test file hand-configures AWS clients differently. Instead, centralize AWS endpoint configuration in a small client factory. That factory should accept the emulator endpoint, region, and credential placeholders, and it should switch cleanly between emulator mode and real AWS mode. Doing this means your production code can remain clean while tests override only the environment-specific parts.
For example, if you use AWS SDK v3, define one module that exports configured S3, DynamoDB, SQS, and Secrets Manager clients. Then point those clients to the local emulator during tests by setting custom endpoints and a dummy credential provider. This keeps your test harness close to production behavior and lowers the chance of accidental divergence. Teams that like structured integration patterns often benefit from the same kind of API-first discipline described in API-first approach to building a developer-friendly payment hub.
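One way to centralize that wiring is a pure configuration function that both production and tests share. The sketch below assumes two environment variables of our own invention, `AWS_ENDPOINT_URL` and `AWS_REGION`; nothing about those names is mandated by the AWS SDK. Because the function is pure, its emulator/production switching logic is unit-testable without any SDK installed.

```typescript
// Shared endpoint wiring for every AWS client in the codebase. The
// environment variable names below are this sketch's own convention,
// not anything the AWS SDK mandates.
export interface AwsClientConfig {
  region: string;
  endpoint?: string;
  credentials?: { accessKeyId: string; secretAccessKey: string };
  forcePathStyle?: boolean; // S3 only: path-style URLs are friendlier to local endpoints
}

export function awsClientConfig(env: Record<string, string | undefined>): AwsClientConfig {
  const base: AwsClientConfig = { region: env.AWS_REGION ?? "us-east-1" };
  const endpoint = env.AWS_ENDPOINT_URL; // set only in emulator mode
  if (!endpoint) {
    // Real AWS: let the SDK's default provider chain resolve credentials.
    return base;
  }
  return {
    ...base,
    endpoint,
    forcePathStyle: true,
    // Dummy credentials satisfy the SDK's request signing against an emulator.
    credentials: { accessKeyId: "test", secretAccessKey: "test" },
  };
}

// Usage (assumes @aws-sdk/client-s3 and friends are installed):
//   const s3 = new S3Client(awsClientConfig(process.env));
//   const ddb = new DynamoDBClient(awsClientConfig(process.env));
```

Every client in the application is constructed from this one function, so tests override only the environment, never the clients themselves.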
Seed, exercise, assert, and clean up
Strong integration tests usually follow a four-step pattern. First, seed the emulator with the minimum state needed for the scenario. Second, exercise the workflow through your application entry point or worker handler. Third, assert on the persisted side effects in the emulator. Fourth, clean up or isolate state so the next test starts from a known baseline. This keeps tests deterministic while still giving you real system behavior.
The key is to avoid testing implementation details that the emulator can already prove. Instead of checking whether a function was called, check whether the object was stored, the queue message was emitted, the secret was read, or the item landed in DynamoDB with the right shape. That style of testing is closer to the reality of distributed systems and easier to trust.
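The four-phase pattern can be enforced structurally rather than by convention. The generic runner below is a sketch of one way to do it: nothing in it is AWS-specific, and the callbacks are where your real SDK calls against the emulator would go. The try/finally guarantees that cleanup runs even when an assertion fails, which is what keeps one failing test from poisoning the next.

```typescript
// Generic seed/exercise/verify/cleanup runner. The callback names map onto
// the article's four steps; "verify" is the assert phase.
export interface Scenario<S, R> {
  seed: () => Promise<S>;
  exercise: (state: S) => Promise<R>;
  verify: (state: S, result: R) => Promise<void>;
  cleanup: (state: S) => Promise<void>;
}

export async function runScenario<S, R>(scenario: Scenario<S, R>): Promise<R> {
  const state = await scenario.seed();
  try {
    const result = await scenario.exercise(state);
    await scenario.verify(state, result);
    return result;
  } finally {
    // Cleanup always runs, so a failed assertion cannot leak state
    // into the next test.
    await scenario.cleanup(state);
  }
}
```

In practice, `seed` would create buckets, tables, or queues in the emulator, `exercise` would invoke your handler, and `verify` would read the persisted side effects back through the same SDK clients production uses.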
TypeScript patterns for AWS SDK v3 and v2 compatibility
Prefer SDK v3 in new TypeScript code
If you are starting fresh, the AWS SDK v3 is usually the better fit for TypeScript because it is modular, tree-shakeable, and easier to compose in modern builds. You can keep each service client isolated, use middleware more cleanly, and avoid bundling the world into every runtime artifact. When tests run against an emulator, this modularity helps because you can replace only the relevant service clients without changing the rest of your application design.
That said, the same local test strategy is valuable even if you are not on a fully modern stack. Teams that are comparing SDK approaches or trying to control rollout risk can borrow the same decision logic used when evaluating tech refreshes in year-in-tech planning: prioritize what reduces operational debt without forcing a risky big-bang migration.
Design an adapter layer for SDK v2 legacy code
Many TypeScript teams still have some AWS SDK v2 usage, especially in older Node services or long-lived monoliths. If that is your situation, do not rewrite everything just to start testing locally. Instead, create a small adapter layer that abstracts the handful of AWS operations your application actually needs. Your test harness can then swap between v2 and v3 implementations while still pointing both at the emulator.
This is where Kumo’s AWS SDK v2 compatibility matters conceptually, even if your TypeScript code does not directly use the Go SDK. It demonstrates a focus on API shape and compatibility rather than a single language runtime. In your application, a similar compatibility mindset prevents the emulator from becoming a one-off side project and turns it into a durable part of the engineering stack.
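A minimal sketch of such an adapter layer follows. The `ObjectStore` port and its names are inventions for illustration; the idea is that production wires an SDK v2 or v3 adapter behind the interface, emulator tests exercise either adapter, and pure unit tests of callers can use the in-memory fake.

```typescript
// A minimal storage port. Your app depends on this interface, not on an
// SDK version. The interface and class names are this sketch's own.
export interface ObjectStore {
  put(key: string, body: string): Promise<void>;
  get(key: string): Promise<string | undefined>;
}

// In-memory fake, useful for pure unit tests of code that calls the port.
export class MemoryObjectStore implements ObjectStore {
  private objects = new Map<string, string>();
  async put(key: string, body: string): Promise<void> {
    this.objects.set(key, body);
  }
  async get(key: string): Promise<string | undefined> {
    return this.objects.get(key);
  }
}

// Sketch of a v3-backed adapter (assumes @aws-sdk/client-s3 is installed):
//   class S3ObjectStore implements ObjectStore {
//     constructor(private s3: S3Client, private bucket: string) {}
//     async put(key: string, body: string) {
//       await this.s3.send(new PutObjectCommand({ Bucket: this.bucket, Key: key, Body: body }));
//     }
//     // get(...) follows the same shape with GetObjectCommand
//   }
```

A v2-backed adapter implements the same two methods with `s3.putObject(...).promise()`, which is exactly the seam that lets you migrate service by service.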
Protect against credential and region assumptions
Real AWS clients often fail in local emulation because the code assumes metadata service access, IAM role resolution, or default regions that do not exist in a test container. The fix is not to bury those assumptions under more mocks. The fix is to make them explicit in a configuration module and test them directly. Set region, endpoint, and credentials intentionally in tests, and use the same values across local and CI runs.
In other words, integration testing is also configuration testing. Many release failures come from environment assumptions rather than logic errors. If your code is disciplined about configuration boundaries, the emulator becomes a reliable signal instead of a source of noise.
Service-by-service testing patterns for S3, DynamoDB, SQS, and Secrets Manager
S3: verify object naming, content, and overwrite behavior
For S3 tests, validate more than “the file exists.” Check the object key convention, MIME type, body content, metadata, and overwrite semantics. A common bug is to store generated assets under a path that later code cannot discover consistently. Another common issue is forgetting that file contents may be binary or line-ending sensitive, which only shows up when the real object store is exercised. Local emulation gives you a cheap way to catch those mistakes before they hit production.
It is also worth testing how your code behaves when the object already exists. Some workflows should overwrite cleanly; others should enforce immutability. That distinction is often business-critical, and it is hard to verify with mocks. If you work with artifacts, uploads, or export pipelines, treat S3 integration tests as part of the contract, not an optional extra.
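Key conventions are worth pulling into a pure function so that unit tests and emulator tests assert the same contract. The path layout below is a made-up convention for illustration, not a standard.

```typescript
// Pure key builder: the single source of truth for where export artifacts
// live in S3. The exports/{tenant}/{day}/{id}.json layout is this sketch's
// own convention.
export function exportKey(tenantId: string, exportId: string, createdAt: Date): string {
  const day = createdAt.toISOString().slice(0, 10); // YYYY-MM-DD
  return `exports/${tenantId}/${day}/${exportId}.json`;
}

// In an emulator test you then assert on real side effects, e.g. (assumes
// @aws-sdk/client-s3):
//   const Key = exportKey(tenant, id, now);
//   await s3.send(new PutObjectCommand({ Bucket, Key, Body, ContentType: "application/json" }));
//   const got = await s3.send(new GetObjectCommand({ Bucket, Key }));
//   // assert on got.ContentType and the streamed body, not on a mock call count
```

Because both the writer and the later reader call `exportKey`, the "stored under a path later code cannot discover" bug class disappears by construction.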
DynamoDB: assert on schema shape and conditional writes
With DynamoDB, the most valuable tests are usually about item shape, key design, and conditional logic. Verify that your application stores the correct partition and sort keys, that nested attributes are marshalled as expected, and that idempotent updates do not duplicate records. Conditional writes are especially important because they surface concurrency assumptions that pure unit tests rarely reveal. Even in local emulation, those semantics are far more informative than a hand-rolled in-memory map.
When you rely on TTL, secondary indexes, or version fields, write tests that reflect the workflow you actually ship. If your code expects retries after a failed update, simulate the failure path and ensure the retry does not corrupt state. Good tests make hidden assumptions visible. The same logic applies to data modeling as to information architecture: you need stable categories and stable keys, which is why guidance like taxonomy design in e-commerce is surprisingly relevant here.
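Building the write parameters as a pure function makes the key design and the idempotency condition unit-testable, while the emulator test proves the semantics. The table and attribute names below are illustrative; the parameter shape is the standard low-level input that AWS SDK v3's PutItemCommand accepts.

```typescript
// Builds an idempotent "create once" PutItem input. Table and attribute
// names are this sketch's own; the ConditionExpression is standard DynamoDB.
export function createOrderItemInput(tableName: string, orderId: string, payload: string) {
  return {
    TableName: tableName,
    Item: {
      pk: { S: `ORDER#${orderId}` },
      sk: { S: "META" },
      payload: { S: payload },
    },
    // Reject the write if an item with this key already exists, so a
    // retried request cannot create a duplicate.
    ConditionExpression: "attribute_not_exists(pk)",
  };
}

// Emulator test sketch (assumes @aws-sdk/client-dynamodb):
//   await ddb.send(new PutItemCommand(createOrderItemInput("orders", "o1", body)));
//   // sending the identical command again should reject with
//   // ConditionalCheckFailedException -- assert that, not a mock
```

The second, failing send is the interesting assertion: it proves your concurrency assumption against real conditional-write semantics rather than a hand-rolled map.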
SQS: validate fan-out, message shape, and retry behavior
SQS tests should focus on the shape of messages, receipt/ack behavior, and the workflow that consumes them. A message payload can be technically valid JSON and still be useless to your consumer if fields are missing or encoded incorrectly. Your tests should send real messages through the emulator, then validate that the worker drains the queue, transforms the payload, and emits the expected downstream side effects. This is how you catch integration mismatches that only appear when the producer and consumer are both running.
For retry behavior, simulate a consumer failure and confirm the message reappears as expected or lands in your dead-letter flow. Event-driven systems are often defined more by what happens when something fails than by the success path. Treat those scenarios as first-class test cases, not edge cases.
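A consumer-side payload guard turns "valid JSON but useless" into an immediate, well-labeled failure. The event shape below is hypothetical; the pattern is to validate at the queue boundary so the same parser runs in the emulator test and in production.

```typescript
// Boundary validation for a queue message. The UploadEvent shape is this
// sketch's own; substitute your real contract.
export interface UploadEvent {
  uploadId: string;
  bucket: string;
  key: string;
}

export function parseUploadEvent(body: string): UploadEvent {
  let raw: unknown;
  try {
    raw = JSON.parse(body);
  } catch {
    throw new Error("message body is not valid JSON");
  }
  const msg = raw as Partial<UploadEvent>;
  if (
    typeof msg.uploadId !== "string" ||
    typeof msg.bucket !== "string" ||
    typeof msg.key !== "string"
  ) {
    throw new Error("message missing required fields: uploadId, bucket, key");
  }
  return { uploadId: msg.uploadId, bucket: msg.bucket, key: msg.key };
}
```

In the integration test, the producer sends a real message through the emulator, the worker calls `parseUploadEvent` on what it receives, and a deliberately malformed message should land in your failure path rather than silently half-processing.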
Secrets Manager: test configuration resolution without real secrets
Secrets Manager emulation is especially helpful for services that load configuration at startup. Instead of hardcoding environment values into tests, store mock secrets in the emulator and let your application fetch them the same way it would in production. This proves that your startup logic, secret names, and JSON parsing are correct. It also keeps the test environment realistic without risking exposure of actual credentials.
Do not overcomplicate it. For local and CI tests, you need values that are structurally representative, not production-sensitive. By treating secret access as an integration point, you avoid the false confidence that comes from only testing the happy path with environment variables.
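Treating the secret as a typed contract looks like the sketch below. The field names are invented for illustration; the parser is the piece worth sharing between production startup and the emulator test, where you seed a structurally representative secret and fetch it the normal way.

```typescript
// Parses a JSON secret string into typed startup config. Field names are
// this sketch's convention; the parsing contract is what the test proves.
export interface DbConfig {
  host: string;
  port: number;
  password: string;
}

export function parseDbSecret(secretString: string): DbConfig {
  const raw = JSON.parse(secretString) as Partial<Record<keyof DbConfig, unknown>>;
  if (typeof raw.host !== "string" || typeof raw.password !== "string") {
    throw new Error("db secret missing host or password");
  }
  const port = typeof raw.port === "number" ? raw.port : Number(raw.port);
  if (!Number.isInteger(port)) {
    throw new Error("db secret has a missing or invalid port");
  }
  return { host: raw.host, port, password: raw.password };
}

// Emulator test sketch (assumes @aws-sdk/client-secrets-manager):
//   const res = await sm.send(new GetSecretValueCommand({ SecretId: "app/db" }));
//   const cfg = parseDbSecret(res.SecretString ?? "");
```

Because the seeded secret only needs to be structurally representative, the test proves secret names, fetch logic, and JSON parsing without any production-sensitive values ever touching the repository.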
Testing event-driven workflows end to end
Model the workflow, not just the function
The strongest use of an AWS emulator is in event-driven workflows where one service triggers another. For example, a TypeScript API may write metadata to DynamoDB, store the uploaded asset in S3, and send a message to SQS for downstream processing. The worker may then retrieve a secret, perform business logic, and emit a completion event. Testing that chain locally proves the system behaves as a system, which is what integration testing should do.
That system-level mindset is also why eventing tools such as EventBridge and Step Functions are worth emulating when your architecture depends on them. Even if you are not using every service in the test stack, thinking in workflow terms helps you design scenarios that mirror production behavior. It is similar to how designers think about user journeys, but for infrastructure.
Use explicit triggers for deterministic tests
Event-driven testing gets messy when asynchronous timing is left to chance. Use explicit triggers, polling helpers, and bounded wait times so your tests are deterministic. For example, after sending a queue message, poll for the DynamoDB record or emitted output file until it appears or the timeout expires. This pattern makes your tests resilient to small delays without turning them into flaky sleep-based scripts.
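A bounded polling helper is a small amount of code that removes an entire class of flakiness. The sketch below is one minimal shape: it retries a probe until a predicate passes or a deadline expires, and fails loudly instead of hanging.

```typescript
// Bounded polling: a deterministic alternative to sleep-based waits.
// Retries `probe` until `done` accepts the value or the deadline passes.
export async function waitFor<T>(
  probe: () => Promise<T>,
  done: (value: T) => boolean,
  opts: { timeoutMs?: number; intervalMs?: number } = {},
): Promise<T> {
  const timeoutMs = opts.timeoutMs ?? 5000;
  const intervalMs = opts.intervalMs ?? 100;
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const value = await probe();
    if (done(value)) return value;
    if (Date.now() >= deadline) {
      throw new Error(`waitFor: condition not met within ${timeoutMs}ms`);
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}

// Usage in an emulator test (assumes @aws-sdk/client-dynamodb):
//   const item = await waitFor(
//     () => ddb.send(new GetItemCommand({ TableName, Key })).then((r) => r.Item),
//     (i) => i !== undefined,
//     { timeoutMs: 10_000 },
//   );
```

The explicit timeout also documents your latency assumption: if a workflow cannot settle within the bound, that is itself a finding, not noise.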
If your workflow has multiple branches, write separate tests for each branch rather than one giant scenario. Small, focused tests are easier to debug and give you clearer failure signals. They also make it simpler to maintain coverage when the workflow evolves.
Capture observability signals in the emulator loop
Even in a local environment, you should log request IDs, emitted message IDs, and state transitions. Observability is not just for production; it is how you diagnose why a local integration test failed. When possible, structure your test harness so you can inspect the emulator state and your app logs together. This creates a much tighter feedback loop than reading stack traces alone.
For teams thinking beyond unit boundaries, this is analogous to the way security teams think about automated defenses: the point is to reduce the time between signal and response. The same logic shows up in sub-second attack defense strategies, where speed and evidence matter at once.
Docker-based workflows for local development and CI/CD testing
Local developer workflow
A good local workflow should require little more than docker compose up and one test command. That means the emulator should start quickly, the app should connect to the local endpoint automatically, and your test scripts should seed and teardown data consistently. Developers should not need special credentials or AWS accounts to validate core integration behavior. That keeps the barrier low and the test surface high.
When the setup is clean, it becomes natural to run integration tests early and often, especially during feature work. That is exactly the behavior you want: the emulator should be a first-class part of the inner development loop, not a separate “maybe later” environment. It is much easier to maintain trust in the test suite when running it is as simple as opening your editor.
CI workflow
In CI, the emulator should be treated as disposable infrastructure. Pull a container image, start the service, run the tests, and discard it. Because Kumo does not require authentication, it avoids a whole class of setup complexity that often slows down pipelines. If you enable persistence in CI, do so intentionally for a specific test class, not by default for everything. Most pipeline jobs should remain stateless unless they are specifically validating recovery or restart behavior.
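As a sketch, a GitHub Actions job might run the emulator as a service container. Everything Kumo-specific here is a placeholder: the image name, the port, and the assumption that a registry-hosted image exists at all. The environment variables match whatever convention your client factory reads; the names below are illustrative.

```yaml
# Hypothetical GitHub Actions job; the Kumo image name and port are
# placeholders for whatever your build actually provides.
jobs:
  integration:
    runs-on: ubuntu-latest
    services:
      kumo:
        image: kumo:latest        # placeholder: service containers need a pullable image
        ports:
          - 4566:4566             # placeholder port
    env:
      AWS_ENDPOINT_URL: http://localhost:4566
      AWS_REGION: us-east-1
      AWS_ACCESS_KEY_ID: test     # dummy values; the emulator requires no auth
      AWS_SECRET_ACCESS_KEY: test
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run test:integration
```

Because the job declares no secrets and no AWS account, it is trivially safe to run on forks and feature branches, which is exactly the disposable-infrastructure posture described above.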
For teams managing fast-moving releases, the real value is not just test coverage but pipeline stability. A deterministic test stack can reduce noisy failures and make deployment gates much more trustworthy. If you are also trying to optimize budget or hardware choices for your team, the decision framework in best budget laptops that still feel fast after a year is a useful reminder: sustained performance matters more than peak specs.
Comparing emulator modes, persistence, and workflow fit
| Mode | Best use case | Pros | Tradeoffs |
|---|---|---|---|
| Ephemeral emulator in Docker | Most CI jobs and fast local tests | Fast startup, clean state, easy teardown | No cross-run state, less suited for recovery tests |
| Persistent emulator with data dir | Restart and replay scenarios | Durable state, realistic recovery testing | Requires cleanup discipline |
| Shared local emulator instance | Developer sandbox or demo environment | Convenient, easy to inspect state | State contamination risk |
| Per-branch CI emulator | Feature branch validation | Isolated results, strong reproducibility | More container startup overhead |
| Hybrid local + cloud parity checks | Pre-release verification | Combines speed with real AWS confirmation | More maintenance than emulator-only |
The table above is the decision matrix most teams need. If your main goal is fast local feedback, ephemeral mode wins. If your workflow depends on restart semantics or persistence across service restarts, enable the data directory and test it deliberately. If you need cloud confidence before release, use the emulator for daily work and reserve live AWS checks for a narrower smoke suite.
How to layer realism without losing speed
Add persistence only where the scenario needs it
Persistence is powerful, but it should be applied selectively. If every test keeps state forever, the emulator becomes harder to reason about and much more expensive to reset. The right pattern is to use clean ephemeral runs for most coverage and persistent runs for a smaller number of critical tests. For example, you might keep one suite focused on restart recovery, while the rest use fresh containers each time.
This balanced approach is similar to managing supply-chain resilience: you want realistic continuity, but not at the cost of operational clutter. The logic behind flexible local supply chains maps well to test infrastructure. You are building optional resilience, not unnecessary complexity.
Combine emulator tests with contract tests
Emulator tests are not the whole story. They should sit alongside contract tests, schema validation, and a small number of real AWS smoke tests where appropriate. The emulator proves your code talks to AWS-shaped services correctly; contract tests prove your interfaces remain stable; cloud smoke tests prove your assumptions still work in the real world. Used together, these layers give you much stronger confidence than any one layer alone.
That layered approach is also a good way to communicate test strategy to stakeholders. It says, “We are not gambling on mocks, and we are not overpaying for cloud-only validation.” It is a pragmatic middle path that tends to age well.
Measure what the emulator actually improves
Do not adopt local emulation on vibes alone. Track cycle time, flaky test rate, CI duration, and the number of integration bugs caught before staging. If the emulator is doing its job, you should see faster feedback, fewer environment-related failures, and clearer failures when something truly breaks. Those are concrete benefits that justify the extra setup work.
When presenting this internally, use simple evidence, not abstract promises. A team can debate tooling preferences forever, but it is much harder to argue with a release pipeline that is measurably faster and more reliable.
Common implementation mistakes to avoid
Do not let test code drift from production clients
The most common mistake is creating a test-only AWS wrapper that no production code uses. It feels convenient at first, but it slowly destroys trust because tests stop reflecting real client behavior. Instead, share the same client factory, endpoint configuration, and data shaping code between environments. Then your tests exercise the same paths production uses.
Another mistake is hiding emulator-specific details deep in business logic. Keep those details at the boundary layer so your domain code stays clean. The cleaner the boundary, the easier it is to switch between emulator and real AWS later.
Avoid overfitting tests to emulator quirks
An emulator is a tool, not an oracle. If you discover a behavior that differs from AWS, record it, work around it only if necessary, and keep a small cloud smoke test to verify critical assumptions. That is especially important for edge-case semantics like eventual consistency, IAM policy enforcement, or service-specific quirks. The point is to build confidence, not to pretend equivalence where it does not exist.
This is a good place to keep your architecture honest with outside signals, much like how enterprise buyers watch vendor indicators in VC signals for enterprise buyers. The signal is useful, but it should be interpreted carefully, not blindly.
Keep the suite small enough to run often
If your emulator suite grows too large, people will stop running it. Focus on the workflows that matter most: object writes, queue processing, secret lookups, and state updates. The purpose of integration tests is to catch the boundaries where bugs are expensive, not to recreate every production permutation. A compact but high-value suite is almost always better than a sprawling one.
That restraint is what keeps the system maintainable. Good local test architecture is a lot like good product packaging: the best setup feels simple because the complexity is hidden, not because the complexity does not exist.
Step-by-step adoption plan for a TypeScript team
Phase 1: start with one service pair
Begin with the most valuable integration pair in your application, usually S3 plus SQS or DynamoDB plus Secrets Manager. Put the emulator behind a single endpoint configuration file, write one high-value test, and make sure the team can run it locally and in CI. This first win should be boring and repeatable. Once that is true, expand the pattern rather than redesigning it.
When teams start small, adoption tends to stick. You are building confidence through evidence, not through a big architectural announcement. That is often the difference between a tool becoming essential and becoming shelfware.
Phase 2: add persistence and failure cases
After the first suite works, add one persistent scenario that validates restart behavior or state recovery. Then add one failure-path test for each critical workflow. The reason to expand this way is simple: the happy path is usually easy, but the failures are what keep production safe. By intentionally testing restarts, retries, and malformed inputs, you make the suite much more valuable.
You can think of this as resilience tuning, not feature creep. In infrastructure, the most important question is often how the system behaves under stress, not whether it works once.
Phase 3: wire into CI and treat it as a gate
Once the local suite is stable, make it part of your CI gate. Keep the container startup deterministic, seed data consistently, and fail fast on unexpected state. Over time, add only the smallest number of cloud smoke tests needed to confirm real AWS compatibility for the most important paths. That blended strategy gives you speed during development and assurance before release.
At that point, the emulator is no longer a side experiment. It becomes a core infrastructure asset that supports faster delivery and lower-risk changes.
Frequently asked questions
Is an AWS emulator good enough to replace real AWS integration tests?
For most day-to-day development and CI validation, yes, especially when you are testing request shape, workflow orchestration, and local state changes. However, you should still keep a small number of real AWS smoke tests for IAM, region behavior, and service-specific edge cases. The emulator is the fast path for the large majority of scenarios, not necessarily the final proof for every one.
Can I use this approach with the AWS SDK v2 in TypeScript?
Yes. The most durable approach is to create a thin adapter layer so your app code depends on your own interface, not directly on SDK version details. That way you can test legacy SDK v2 code against the emulator while gradually moving to SDK v3 where it makes sense.
How do I keep emulator-based tests deterministic in CI?
Use disposable containers, explicit seeding, bounded polling, and a cleanup strategy. Avoid random data that is not namespaced by test, and never rely on timing alone for async workflow completion. Determinism comes from explicit setup and explicit assertions.
Should I persist emulator data between test runs?
Only for tests that specifically need recovery or restart behavior. For most suites, ephemeral containers are better because they remove state contamination and simplify cleanup. Persistence is useful, but it should be deliberate.
What AWS services should I emulate first?
Start with the ones that your app uses most directly and most often: S3, DynamoDB, SQS, and Secrets Manager. Those four cover a huge percentage of common serverless and event-driven workloads. Add EventBridge, Lambda, or Step Functions when workflow coverage becomes important.
How do I explain the value of an AWS emulator to management?
Frame it in terms of faster delivery, fewer flaky tests, reduced cloud dependency, and earlier bug detection. Managers usually care less about the emulator itself and more about shortened release cycles and lower operational risk. Show them concrete metrics from before and after adoption.
Bottom line: make AWS integration testing boring, fast, and trustworthy
TypeScript teams do not need to choose between weak mocks and expensive cloud-heavy pipelines. A practical AWS emulator workflow gives you the best part of both worlds: real SDK calls, realistic service boundaries, and repeatable local and CI execution. When you combine Docker-based orchestration, optional persistence, and a clean client abstraction, you get a testing strategy that scales with the codebase instead of fighting it. That is the real win: a local environment that behaves enough like AWS to catch important bugs, but fast enough that people actually use it.
If your organization is also modernizing adjacent systems, you may find the thinking transfers nicely to other infrastructure choices such as supporting experimental features safely or once-only data flow. In each case, the pattern is the same: reduce uncertainty early, keep the feedback loop short, and make the reliable path the easiest path for the team.
Related Reading
- A Developer’s Framework for Choosing Workflow Automation Tools - A practical lens for picking orchestration tooling that fits your team.
- API-first approach to building a developer-friendly payment hub - Useful patterns for clean service boundaries and client design.
- Sub-second Attacks - A useful perspective on why fast feedback loops matter in operations.
- Prototype Fast for New Form Factors - A reminder that realistic prototypes beat abstract assumptions.
- Checklist for Making Content Findable by LLMs and Generative AI - A structured-thinking guide that maps well to test documentation.
Avery Mitchell
Senior TypeScript & DevOps Editor