Store Backends Overview

HTTP Client Toolkit ships three store backends. Each implements the same interfaces (CacheStore, DedupeStore, RateLimitStore, AdaptiveRateLimitStore), so you can swap between them without changing application code.

|                | Memory                       | SQLite                             | DynamoDB                 |
| -------------- | ---------------------------- | ---------------------------------- | ------------------------ |
| Persistence    | Process lifetime             | Disk                               | Cloud                    |
| Multi-instance | No                           | Shared file                        | Yes                      |
| Dependencies   | None                         | better-sqlite3, drizzle-orm        | AWS SDK v3 (peer)        |
| Cleanup        | Background timers            | Background timers                  | Native TTL               |
| Best for       | Dev, testing, single-process | Persistent local, process restarts | Serverless, distributed  |

Each backend provides four store implementations:

CacheStore

Response caching with TTL. Memory uses LRU eviction; SQLite and DynamoDB use entry size limits.
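To make the memory backend's semantics concrete, here is a minimal sketch of a TTL cache with LRU eviction. The class name, shape, and limits are illustrative assumptions for this example, not the toolkit's actual `CacheStore` interface:

```typescript
// Illustrative sketch only: a TTL cache with LRU eviction, assuming a
// simple get/set shape. Not the toolkit's real CacheStore API.
class LruTtlCache<V> {
  private entries = new Map<string, { value: V; expiresAt: number }>();

  constructor(private maxEntries: number, private ttlMs: number) {}

  get(key: string): V | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (Date.now() >= entry.expiresAt) {
      this.entries.delete(key); // expired: drop lazily on read
      return undefined;
    }
    // Map preserves insertion order, so deleting and re-inserting the
    // key moves it to the "most recently used" end.
    this.entries.delete(key);
    this.entries.set(key, entry);
    return entry.value;
  }

  set(key: string, value: V): void {
    if (this.entries.size >= this.maxEntries && !this.entries.has(key)) {
      // Evict the least recently used key (first in iteration order).
      const oldest = this.entries.keys().next().value;
      if (oldest !== undefined) this.entries.delete(oldest);
    }
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}
```

The Map-as-LRU trick is why memory stores can evict cheaply per entry, whereas the SQLite and DynamoDB backends bound storage by entry size instead.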

DedupeStore

Request deduplication with atomic ownership. One caller fetches, others wait for the result.
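The "one caller fetches, others wait" behavior can be sketched with an in-flight promise map. Class and method names here are hypothetical, chosen only to illustrate the pattern:

```typescript
// Illustrative sketch only: the first caller for a key owns the fetch;
// concurrent callers join its in-flight promise. In single-threaded JS
// the claim is atomic because no await occurs between check and set.
class InFlightDedupe {
  private inFlight = new Map<string, Promise<unknown>>();

  async run<T>(key: string, fetcher: () => Promise<T>): Promise<T> {
    const pending = this.inFlight.get(key);
    if (pending) return pending as Promise<T>; // join the existing fetch

    const promise = fetcher().finally(() => this.inFlight.delete(key));
    this.inFlight.set(key, promise); // claim ownership for this key
    return promise;
  }
}
```

A distributed backend such as DynamoDB needs a real atomic claim (e.g. a conditional write) instead of a local map, which is what "atomic ownership" buys you across processes.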

RateLimitStore

Sliding window rate limiter with per-resource configuration.
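A sliding window limiter admits a request only if fewer than the limit landed in the trailing window; per-resource configuration amounts to tracking one window per resource key. The sketch below is an assumption-laden illustration, not the store's real interface:

```typescript
// Illustrative sketch only: sliding-window admission per resource key.
class SlidingWindowLimiter {
  private timestamps = new Map<string, number[]>();

  constructor(private limit: number, private windowMs: number) {}

  tryAcquire(resource: string, now = Date.now()): boolean {
    const cutoff = now - this.windowMs;
    // Keep only timestamps still inside the trailing window.
    const recent = (this.timestamps.get(resource) ?? []).filter(
      (t) => t > cutoff,
    );
    if (recent.length >= this.limit) {
      this.timestamps.set(resource, recent);
      return false; // window is full: reject
    }
    recent.push(now);
    this.timestamps.set(resource, recent);
    return true;
  }
}
```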

AdaptiveRateLimitStore

Priority-aware rate limiter that dynamically allocates capacity between user and background requests.
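One simple policy matching that description: user requests may draw on the full limit, while background requests are capped at a configurable share of it, so spare capacity flows to whichever class needs it. This sketch assumes invented names and a static share; the toolkit's actual allocation strategy may differ:

```typescript
// Illustrative sketch only: user traffic can use the whole budget,
// background traffic only a configured fraction of it.
class AdaptiveLimiter {
  private used = { user: 0, background: 0 };

  constructor(
    private totalLimit: number,
    private backgroundShare: number, // e.g. 0.5 = half the budget
  ) {}

  tryAcquire(priority: "user" | "background"): boolean {
    const total = this.used.user + this.used.background;
    if (total >= this.totalLimit) return false;
    if (
      priority === "background" &&
      this.used.background >= this.totalLimit * this.backgroundShare
    ) {
      return false; // background has exhausted its share
    }
    this.used[priority]++;
    return true;
  }

  release(priority: "user" | "background"): void {
    this.used[priority] = Math.max(0, this.used[priority] - 1);
  }
}
```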

For example, wiring the memory backend into a client:

```typescript
import { HttpClient } from '@http-client-toolkit/core';
import {
  InMemoryCacheStore,
  InMemoryDedupeStore,
  InMemoryRateLimitStore,
} from '@http-client-toolkit/store-memory';

const client = new HttpClient({
  cache: new InMemoryCacheStore(),
  dedupe: new InMemoryDedupeStore(),
  rateLimit: new InMemoryRateLimitStore(),
});
```
  • Start with Memory for development and testing. It’s fast, has zero dependencies, and requires no setup.
  • Move to SQLite when you need data to survive process restarts, or want a single-file persistent store.
  • Use DynamoDB for serverless deployments (Lambda), multi-instance applications, or when state must be shared across processes.

You can also mix backends — for example, use in-memory cache for speed with a DynamoDB rate limiter for distributed coordination:

```typescript
const client = new HttpClient({
  cache: new InMemoryCacheStore(), // Fast local cache
  rateLimit: new DynamoDBRateLimitStore({ ... }), // Shared rate limit
});
```