
Deduplication

Request deduplication prevents duplicate concurrent requests. If multiple callers request the same resource at the same time, only one HTTP request is made — the others wait and receive the same result.

When a request passes through the dedupe layer:

  1. The request URL and parameters are hashed into a key (see the sketch after the diagram below)
  2. registerOrJoin() is called atomically — exactly one caller becomes the owner
  3. The owner makes the HTTP request
  4. Non-owners call waitFor() and receive the result when the owner completes
  5. If the owner fails, waiters receive undefined (not a thrown error)

Caller A ──→ registerOrJoin() ──→ isOwner: true ──→ fetch() ──→ complete(result)
Caller B ──→ registerOrJoin() ──→ isOwner: false ──→ waitFor() ─────────┘
Caller C ──→ registerOrJoin() ──→ isOwner: false ──→ waitFor() ─────────┘
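
As referenced in step 1, the dedupe key is derived from the request URL and parameters. The exact hashing scheme is internal to the library; the sketch below is only a hypothetical illustration of such a derivation, not the actual implementation:

// Hypothetical key derivation (not the library's real hashing logic):
// build a stable string from the URL plus sorted query parameters so
// that equivalent concurrent requests map to the same dedupe key.
function dedupeKey(url: string, params: Record<string, string> = {}): string {
  const query = new URLSearchParams(params);
  query.sort(); // order-independent, so ?a=1&b=2 and ?b=2&a=1 share a key
  const qs = query.toString();
  return qs ? `${url}?${qs}` : url;
}

Two callers whose URL and parameters produce the same key are treated as duplicates and share a single in-flight request.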
To enable deduplication, pass a dedupe store when constructing the client:

import { HttpClient } from '@http-client-toolkit/core';
import { InMemoryDedupeStore } from '@http-client-toolkit/store-memory';

const client = new HttpClient({
  dedupe: new InMemoryDedupeStore(),
});
The in-memory store accepts optional tuning parameters:

const dedupe = new InMemoryDedupeStore({
  jobTimeoutMs: 300_000,     // 5 minutes; jobs older than this are treated as stale and cleaned up
  cleanupIntervalMs: 60_000, // how often timed-out jobs are swept
});
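
From the caller's side, deduplication is transparent. As a hedged illustration (the get() method name and the URL below are assumptions; substitute whatever request method the client actually exposes):

// Three concurrent calls for the same resource: with dedupe enabled,
// only one upstream HTTP request is made, and all three callers
// receive the same result.
const [a, b, c] = await Promise.all([
  client.get('/api/users/42'), // hypothetical method name and path
  client.get('/api/users/42'),
  client.get('/api/users/42'),
]);

Per step 5 of the flow above, callers that joined another caller's in-flight request resolve to undefined if that request fails, so downstream code should be prepared to handle an undefined result.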

All built-in stores implement registerOrJoin() atomically, so exactly one caller executes the upstream request even under heavy concurrency. If you implement a custom dedupe store, make its registerOrJoin equally atomic to preserve the same strict single-owner behavior.
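For orientation, here is a minimal single-process sketch of the shape such a store could take. The registerOrJoin, waitFor, and complete names come from the flow described above; everything else (the return shapes, generics, and in-memory Map) is an assumption rather than the library's actual store contract:

// A hypothetical single-process dedupe store, for illustration only.
class SketchDedupeStore<T = unknown> {
  private jobs = new Map<string, {
    promise: Promise<T | undefined>;
    resolve: (value: T | undefined) => void;
  }>();

  // Atomically become the owner of `key`, or join an existing job.
  // Synchronous Map access keeps this atomic within one event-loop turn.
  registerOrJoin(key: string): { isOwner: boolean } {
    if (this.jobs.has(key)) return { isOwner: false };
    let resolve!: (value: T | undefined) => void;
    const promise = new Promise<T | undefined>((r) => { resolve = r; });
    this.jobs.set(key, { promise, resolve });
    return { isOwner: true };
  }

  // Non-owners wait for the owner's result; undefined if the owner failed.
  waitFor(key: string): Promise<T | undefined> {
    return this.jobs.get(key)?.promise ?? Promise.resolve(undefined);
  }

  // The owner publishes its result (or undefined on failure) and clears the job.
  complete(key: string, result: T | undefined): void {
    const job = this.jobs.get(key);
    if (!job) return;
    job.resolve(result);
    this.jobs.delete(key);
  }
}

A store shared across processes would need to make the same register-or-join decision atomic across all of them, which is why the single-owner guarantee hinges on registerOrJoin.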

Deduplication and caching work well together. The cache prevents repeated requests over time, while deduplication prevents concurrent duplicate requests at any given moment:

const client = new HttpClient({
  cache: new InMemoryCacheStore(),
  dedupe: new InMemoryDedupeStore(),
});

// First request: cache miss → dedupe owner → fetch → cache result
// Concurrent request: cache miss → dedupe joiner → waits → gets same result
// Later request: cache hit → returned immediately (no dedupe needed)