Store Interfaces

All store backends implement these interfaces from @http-client-toolkit/core. You can use them to build custom store backends.

import type { CacheStore } from '@http-client-toolkit/core';
| Method | Signature | Description |
| --- | --- | --- |
| `get` | `(hash: string) => Promise<T \| undefined>` | Retrieve a cached value by hash |
| `set` | `(hash: string, value: T, ttlSeconds: number) => Promise<void>` | Store a value with TTL |
| `delete` | `(hash: string) => Promise<void>` | Remove a cached entry |
| `clear` | `(scope?: string) => Promise<void>` | Remove cached entries. When `scope` is provided, only entries whose key starts with that prefix are removed; when omitted, all entries are cleared |
| `setWithTags` | `(hash: string, value: T, ttlSeconds: number, tags: string[]) => Promise<void>` | Store a value with TTL and associate it with tags for later invalidation |
| `invalidateByTag` | `(tag: string) => Promise<number>` | Delete all entries associated with a tag. Returns the count of entries removed |
| `invalidateByTags` | `(tags: string[]) => Promise<number>` | Delete all entries associated with any of the given tags. Returns the count of unique entries removed |
| Value | Behavior |
| --- | --- |
| `ttlSeconds > 0` | Expires after N seconds |
| `ttlSeconds === 0` | Never expires |
| `ttlSeconds < 0` | Immediately expired |
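The TTL rules above can be sketched with a minimal in-memory store. `MemoryCacheStore` is an illustrative name, not an export of the toolkit; a real backend would declare `implements CacheStore<T>` and cover the tag methods as well.

```typescript
// Minimal in-memory sketch of the TTL semantics (illustrative only).
class MemoryCacheStore<T> {
  private entries = new Map<string, { value: T; expiresAt: number | null }>();

  async set(hash: string, value: T, ttlSeconds: number): Promise<void> {
    // ttlSeconds > 0 sets an expiry in the future, === 0 never expires
    // (stored as null), and < 0 yields an expiresAt already in the past.
    const expiresAt = ttlSeconds === 0 ? null : Date.now() + ttlSeconds * 1000;
    this.entries.set(hash, { value, expiresAt });
  }

  async get(hash: string): Promise<T | undefined> {
    const entry = this.entries.get(hash);
    if (!entry) return undefined;
    if (entry.expiresAt !== null && Date.now() >= entry.expiresAt) {
      this.entries.delete(hash); // lazily evict expired entries on read
      return undefined;
    }
    return entry.value;
  }
}
```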
import type { DedupeStore } from '@http-client-toolkit/core';
| Method | Signature | Description |
| --- | --- | --- |
| `register` | `(hash: string) => Promise<string>` | Register as owner; returns a job ID |
| `registerOrJoin` | `(hash: string) => Promise<{ jobId: string; isOwner: boolean }>` | Atomically register or join (optional) |
| `waitFor` | `(hash: string) => Promise<T \| undefined>` | Wait for the owner to complete |
| `complete` | `(hash: string, value: T) => Promise<void>` | Mark a job as complete with its result |
| `fail` | `(hash: string, error: Error) => Promise<void>` | Mark a job as failed |
| `isInProgress` | `(hash: string) => Promise<boolean>` | Check whether a job is in flight |

registerOrJoin() is the preferred atomic path. It returns { jobId, isOwner }:

  • Owner (isOwner: true): Makes the HTTP request, then calls complete() or fail()
  • Joiner (isOwner: false): Calls waitFor() to receive the result

If the owner fails, waiters receive undefined (not a thrown error).
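The owner/joiner flow above can be sketched as a small wrapper. The local `DedupeStore` interface mirrors the table (a real backend would import it from `@http-client-toolkit/core`), and `dedupedFetch` and `fetchValue` are hypothetical names for illustration.

```typescript
// Sketch of the registerOrJoin flow; interface mirrors the table above.
interface DedupeStore<T> {
  registerOrJoin(hash: string): Promise<{ jobId: string; isOwner: boolean }>;
  waitFor(hash: string): Promise<T | undefined>;
  complete(hash: string, value: T): Promise<void>;
  fail(hash: string, error: Error): Promise<void>;
}

async function dedupedFetch<T>(
  store: DedupeStore<T>,
  hash: string,
  fetchValue: () => Promise<T>,
): Promise<T | undefined> {
  const { isOwner } = await store.registerOrJoin(hash);
  if (!isOwner) {
    // Joiner: wait for the owner's result. If the owner failed,
    // this resolves to undefined rather than throwing.
    return store.waitFor(hash);
  }
  try {
    // Owner: perform the actual HTTP request, then publish the result.
    const value = await fetchValue();
    await store.complete(hash, value);
    return value;
  } catch (err) {
    await store.fail(hash, err as Error);
    throw err;
  }
}
```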

import type { RateLimitStore } from '@http-client-toolkit/core';
| Method | Signature | Description |
| --- | --- | --- |
| `acquire` | `(resource: string) => Promise<boolean>` | Atomic check-and-record (optional) |
| `canProceed` | `(resource: string) => Promise<boolean>` | Check if a request can proceed |
| `record` | `(resource: string) => Promise<void>` | Record a request |
| `getStatus` | `(resource: string) => Promise<RateLimitStatus>` | Get the current rate limit status |
| `reset` | `(resource: string) => Promise<void>` | Reset the rate limit for a resource |
| `getWaitTime` | `(resource: string) => Promise<number>` | Get the wait time in ms until the next request |
| `setCooldown` | `(origin: string, cooldownUntilMs: number) => Promise<void>` | Store a server-driven cooldown for an origin (optional) |
| `getCooldown` | `(origin: string) => Promise<number \| undefined>` | Retrieve a server-driven cooldown timestamp for an origin (optional) |
| `clearCooldown` | `(origin: string) => Promise<void>` | Remove a server-driven cooldown for an origin (optional) |
interface RateLimitStatus {
  remaining: number;
  resetTime: Date;
  limit: number;
}

setCooldown, getCooldown, and clearCooldown are optional methods. When implemented, the HttpClient delegates server-driven cooldown storage (from 429/503 responses with Retry-After headers) to the store instead of keeping them in a private in-memory map. This allows cooldowns to propagate across multiple HttpClient instances sharing the same store.

If these methods are not implemented, the client falls back to an internal Map — cooldowns are per-client-instance only.
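A store-backed implementation of the three cooldown methods might look like this in-memory sketch. The `CooldownMap` name and the eager expiry inside `getCooldown` are assumptions for illustration, not specified behavior; a shared backend such as Redis would perform keyed reads and writes instead.

```typescript
// Illustrative in-memory backing for the optional cooldown methods.
class CooldownMap {
  private cooldowns = new Map<string, number>();

  async setCooldown(origin: string, cooldownUntilMs: number): Promise<void> {
    // cooldownUntilMs is an absolute epoch timestamp, e.g. derived
    // from a Retry-After header on a 429/503 response.
    this.cooldowns.set(origin, cooldownUntilMs);
  }

  async getCooldown(origin: string): Promise<number | undefined> {
    const until = this.cooldowns.get(origin);
    // Assumption: treat an elapsed cooldown as absent so callers proceed.
    if (until !== undefined && until <= Date.now()) {
      this.cooldowns.delete(origin);
      return undefined;
    }
    return until;
  }

  async clearCooldown(origin: string): Promise<void> {
    this.cooldowns.delete(origin);
  }
}
```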

AdaptiveRateLimitStore extends RateLimitStore with priority support.

import type { AdaptiveRateLimitStore } from '@http-client-toolkit/core';

The canProceed, record, and getWaitTime methods are overloaded to accept an optional priority parameter. getStatus returns adaptive metrics in a nested adaptive property:

canProceed(resource: string, priority?: 'user' | 'background'): Promise<boolean>;
record(resource: string, priority?: 'user' | 'background'): Promise<void>;
getWaitTime(resource: string, priority?: 'user' | 'background'): Promise<number>;
getStatus(resource: string): Promise<RateLimitStatus & {
  adaptive?: {
    userReserved: number;
    backgroundMax: number;
    backgroundPaused: boolean;
    recentUserActivity: number;
    reason: string;
  };
}>;
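One way a store might honor the priority parameter is to hold back part of the window for user traffic. The function name and the reservation number below are illustrative assumptions, not defaults of the package.

```typescript
// Illustrative priority-aware admission check: background requests may
// only consume the portion of the window not reserved for user traffic.
function canProceedSketch(
  used: number,                                  // requests already recorded
  limit: number,                                 // window limit
  priority: 'user' | 'background' = 'user',
  userReserved = 2,                              // assumed reservation size
): boolean {
  if (priority === 'background') {
    return used < limit - userReserved;
  }
  return used < limit;
}
```

With a limit of 10 and 9 slots used, a user request still proceeds while a background request is deferred until the window resets.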

To create a custom backend, implement one or more of these interfaces:

import type { CacheStore, DedupeStore, RateLimitStore } from '@http-client-toolkit/core';
class RedisCacheStore<T> implements CacheStore<T> {
  async get(hash: string): Promise<T | undefined> {
    // ... Redis GET
    return undefined; // placeholder
  }
  async set(hash: string, value: T, ttlSeconds: number): Promise<void> {
    // ... Redis SET with EX
  }
  async delete(hash: string): Promise<void> {
    // ... Redis DEL
  }
  async clear(scope?: string): Promise<void> {
    // ... Redis FLUSHDB, or a pattern delete when scope is provided
    // (only delete keys starting with that prefix)
  }
  async setWithTags(hash: string, value: T, ttlSeconds: number, tags: string[]): Promise<void> {
    // ... Redis SET + SADD for each tag
  }
  async invalidateByTag(tag: string): Promise<number> {
    // ... Redis SMEMBERS + DEL
    return 0; // placeholder: return the count of entries removed
  }
  async invalidateByTags(tags: string[]): Promise<number> {
    // ... Redis SUNION + DEL
    return 0; // placeholder: return the count of unique entries removed
  }
}

To follow the existing store pattern, validate configuration with Zod and export everything from a single src/index.ts entry point.