
Memory Store

@http-client-toolkit/store-memory provides fast, zero-dependency, in-memory store implementations that are ideal for development, testing, or single-process production use.

npm install @http-client-toolkit/store-memory

InMemoryCacheStore

An LRU cache with TTL support and dual eviction limits (item count and memory usage).

import { InMemoryCacheStore } from '@http-client-toolkit/store-memory';

const cache = new InMemoryCacheStore({
  maxItems: 1000,             // Default: 1000
  maxMemoryBytes: 50_000_000, // Default: 50 MB
  cleanupIntervalMs: 60_000,  // Default: 60s. Set to 0 to disable.
  evictionRatio: 0.1,         // Default: 10% evicted when limits exceeded
});
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| maxItems | number | 1000 | Maximum number of cached entries |
| maxMemoryBytes | number | 50_000_000 | Maximum memory usage in bytes |
| cleanupIntervalMs | number | 60_000 | Interval for expired-entry cleanup. Set to 0 to disable. |
| evictionRatio | number | 0.1 | Fraction of entries evicted when limits are exceeded |

When either limit is exceeded, the store evicts the least recently used entries. Expired entries are also removed lazily on get() and during scheduled cleanup.

Call cache.destroy() when done to clear the cleanup timer.
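
The eviction behaviour can be pictured with a small standalone sketch. This is illustrative only, not the library's code: it shows the LRU-plus-TTL pattern described above (lazy expiry on read, eviction of the least recently used fraction of entries when the item limit is exceeded). The memory-based limit works the same way against a running byte total and is omitted here for brevity; all names below are invented.

type Entry<V> = { value: V; expiresAt: number };

class TinyLruTtlCache<V> {
  // Map insertion order doubles as recency order: oldest (least recently used) keys come first.
  private entries = new Map<string, Entry<V>>();

  constructor(
    private maxItems = 1000,
    private evictionRatio = 0.1,
  ) {}

  set(key: string, value: V, ttlMs: number): void {
    this.entries.delete(key); // re-insert so the key becomes most recently used
    this.entries.set(key, { value, expiresAt: Date.now() + ttlMs });
    if (this.entries.size > this.maxItems) this.evict();
  }

  get(key: string): V | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (entry.expiresAt <= Date.now()) {
      this.entries.delete(key); // lazy expiry on read
      return undefined;
    }
    // Touch: move the key to the most-recently-used position.
    this.entries.delete(key);
    this.entries.set(key, entry);
    return entry.value;
  }

  private evict(): void {
    // Drop the least recently used fraction of entries (e.g. 10% with evictionRatio 0.1).
    const toEvict = Math.max(1, Math.ceil(this.entries.size * this.evictionRatio));
    let evicted = 0;
    for (const key of this.entries.keys()) {
      if (evicted >= toEvict) break;
      this.entries.delete(key);
      evicted++;
    }
  }
}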

InMemoryDedupeStore

Prevents duplicate concurrent requests. If a request for the same hash is already in-flight, subsequent callers wait for the original to complete.

import { InMemoryDedupeStore } from '@http-client-toolkit/store-memory';

const dedupe = new InMemoryDedupeStore({
  jobTimeoutMs: 300_000,     // Default: 5 minutes
  cleanupIntervalMs: 60_000, // Default: 60s
});
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| jobTimeoutMs | number | 300_000 | Timeout for in-flight jobs before cleanup |
| cleanupIntervalMs | number | 60_000 | Interval for stale job cleanup |

The store implements atomic registerOrJoin() so exactly one caller executes the upstream request under heavy concurrency.
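
The pattern behind registerOrJoin() looks roughly like the sketch below. The exact signature is an assumption (the documentation above only states that the method exists and is atomic), and the class, helper, and variable names are invented for illustration.

class TinyDedupe {
  // One promise per request hash: the first caller registers the job,
  // later callers with the same hash join (await) the in-flight promise.
  private inFlight = new Map<string, Promise<unknown>>();

  async registerOrJoin<T>(hash: string, run: () => Promise<T>): Promise<T> {
    const existing = this.inFlight.get(hash);
    if (existing) return existing as Promise<T>; // join the in-flight job

    const job = run().finally(() => this.inFlight.delete(hash));
    this.inFlight.set(hash, job); // register before any await so later callers can join
    return job;
  }
}

// Usage (inside an async function): concurrent calls with the same hash share one upstream fetch.
const jobs = new TinyDedupe();
const [a, b] = await Promise.all([
  jobs.registerOrJoin('GET /users/42', () => fetch('/users/42').then(r => r.json())),
  jobs.registerOrJoin('GET /users/42', () => fetch('/users/42').then(r => r.json())),
]);

Because JavaScript is single-threaded, the get-then-set above runs without interruption within one tick, which is what makes the register-or-join step atomic.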

InMemoryRateLimitStore

A sliding window rate limiter with optional per-resource configuration.

import { InMemoryRateLimitStore } from '@http-client-toolkit/store-memory';

const rateLimit = new InMemoryRateLimitStore({
  defaultConfig: { limit: 60, windowMs: 60_000 },
  resourceConfigs: new Map([
    ['slow-api', { limit: 10, windowMs: 60_000 }],
  ]),
});
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| defaultConfig | { limit, windowMs } | Required | Default rate limit for all resources |
| resourceConfigs | Map<string, { limit, windowMs }> | undefined | Per-resource overrides |
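
To illustrate the sliding-window idea, here is a minimal standalone sketch, not the library's implementation; the TinySlidingWindow class and tryAcquire() method are invented names. Each resource keeps the timestamps of its recent requests, and a new request is allowed only when fewer than `limit` of them fall inside the trailing window.

type WindowConfig = { limit: number; windowMs: number };

class TinySlidingWindow {
  private hits = new Map<string, number[]>();

  constructor(
    private defaultConfig: WindowConfig,
    private resourceConfigs = new Map<string, WindowConfig>(),
  ) {}

  tryAcquire(resource: string, now = Date.now()): boolean {
    const { limit, windowMs } = this.resourceConfigs.get(resource) ?? this.defaultConfig;
    // Keep only timestamps inside the trailing window.
    const recent = (this.hits.get(resource) ?? []).filter(t => now - t < windowMs);
    if (recent.length >= limit) {
      this.hits.set(resource, recent);
      return false; // over the limit for this window
    }
    recent.push(now);
    this.hits.set(resource, recent);
    return true;
  }
}

// e.g. at most 10 requests per minute for 'slow-api', 60 per minute for everything else.
const limiter = new TinySlidingWindow(
  { limit: 60, windowMs: 60_000 },
  new Map([['slow-api', { limit: 10, windowMs: 60_000 }]]),
);
limiter.tryAcquire('slow-api'); // true until 10 hits land within the same minute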

AdaptiveRateLimitStore

A priority-aware rate limiter that dynamically allocates capacity between user and background requests based on recent activity patterns.

import { AdaptiveRateLimitStore } from '@http-client-toolkit/store-memory';

const rateLimit = new AdaptiveRateLimitStore({
  defaultConfig: { limit: 200, windowMs: 3_600_000 },
  resourceConfigs: new Map([
    ['search', { limit: 50, windowMs: 60_000 }],
  ]),
  adaptiveConfig: {
    highActivityThreshold: 10,
    moderateActivityThreshold: 3,
    monitoringWindowMs: 900_000,
    maxUserScaling: 2.0,
  },
});
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| highActivityThreshold | number | 10 | User requests needed to trigger high-activity mode |
| moderateActivityThreshold | number | 3 | User requests needed to trigger moderate-activity mode |
| monitoringWindowMs | number | 900_000 | Activity monitoring window (15 minutes) |
| maxUserScaling | number | 2.0 | Maximum user-capacity multiplier |
| Activity Level | Behavior |
| --- | --- |
| High | Prioritizes user requests; pauses background traffic if the trend is increasing |
| Moderate | Balanced allocation with trend-aware scaling |
| Low | Scales up background capacity |
| Sustained inactivity | Gives full capacity to background requests |
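
The allocation logic behind these levels can be sketched as follows. This is not the library's implementation: only the thresholds, the monitoring window, and maxUserScaling come from the table above, while the baseline split, the per-level ratios, and all function names are invented for illustration.

type ActivityLevel = 'high' | 'moderate' | 'low' | 'idle';

// Classify recent user activity inside the monitoring window using the documented thresholds.
function classifyActivity(
  userRequestsInWindow: number,
  opts = { highActivityThreshold: 10, moderateActivityThreshold: 3 },
): ActivityLevel {
  if (userRequestsInWindow >= opts.highActivityThreshold) return 'high';
  if (userRequestsInWindow >= opts.moderateActivityThreshold) return 'moderate';
  if (userRequestsInWindow > 0) return 'low';
  return 'idle'; // sustained inactivity
}

// Split a shared limit between user and background traffic for a given activity level.
// The ratios below are illustrative assumptions; maxUserScaling caps how far user
// capacity can grow beyond its baseline share.
function allocate(limit: number, level: ActivityLevel, maxUserScaling = 2.0) {
  const baselineUserShare = 0.5; // assumed baseline split, not a documented value
  if (level === 'high') {
    // Prioritize user requests; background may be paused entirely.
    const userShare = Math.min(1, baselineUserShare * maxUserScaling);
    const user = Math.floor(limit * userShare);
    return { user, background: limit - user };
  }
  if (level === 'moderate') return { user: Math.floor(limit * 0.6), background: Math.ceil(limit * 0.4) };
  if (level === 'low') return { user: Math.floor(limit * 0.3), background: Math.ceil(limit * 0.7) };
  return { user: 0, background: limit }; // idle: all capacity goes to background
}

// With limit 200 and 12 user requests in the window (>= highActivityThreshold):
allocate(200, classifyActivity(12)); // => { user: 200, background: 0 } when maxUserScaling is 2.0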