# Recommended Usage
The recommended way to use HTTP Client Toolkit is to create a wrapper module per third-party API. Each exported function maps to an endpoint and owns its per-request configuration. Callers never think about caching, retries, or rate limiting — they just call the function.
## Why a Wrapper?

Without a wrapper, per-request config leaks into every call site:
```ts
// ❌ Config scattered across consumers
await client.get('https://api.github.com/users/octocat', {
  cache: { ttl: 120 },
  retry: { maxRetries: 2 },
});

// ... somewhere else in the codebase
await client.get('https://api.github.com/users/octocat', {
  cache: { ttl: 300 }, // Inconsistent — which is right?
});
```

With a wrapper, config lives in one place:

```ts
// ✅ Consumers just call the function
const user = await getUser('octocat');
```

## Store Setup

Use a factory to create all stores with a shared database connection. This is a one-time setup that all your API wrappers import:
```ts
import { createSQLiteStores } from '@http-client-toolkit/store-sqlite';

export const stores = createSQLiteStores({ database: './app.db' });
```

Each `HttpClient` that uses these stores gets its own private cache scope — keys are automatically prefixed with the client’s name, so entries from different clients never collide. See Scoping Behaviour for details.
## Example: GitHub API Wrapper

```ts
import { HttpClient } from '@http-client-toolkit/core';
import { stores } from './stores.js';

// 1. Create one client per API with shared defaults
const client = new HttpClient({
  name: 'github',
  cache: {
    store: stores.cache,
    ttl: 300,
    overrides: { minimumTTL: 60 },
  },
  dedupe: stores.dedupe,
  rateLimit: {
    store: stores.rateLimit,
    defaultConfig: { limit: 100, windowMs: 60_000 },
  },
  retry: { maxRetries: 2 },
});

// 2. Define response types
interface GitHubUser {
  login: string;
  name: string;
}

interface GitHubRepo {
  full_name: string;
  stargazers_count: number;
}

// 3. Export one function per endpoint with per-request config

/** Short-lived data — lower TTL */
export function getUser(username: string) {
  return client.get<GitHubUser>(
    `https://api.github.com/users/${username}`,
    { cache: { ttl: 120 } },
  );
}

/** Rarely changes — cache longer, skip no-store headers */
export function getRepo(owner: string, repo: string) {
  return client.get<GitHubRepo>(
    `https://api.github.com/repos/${owner}/${repo}`,
    { cache: { ttl: 600, overrides: { ignoreNoStore: true } } },
  );
}

/** Critical path — disable retries to fail fast */
export function getRateLimit() {
  return client.get<{ rate: { remaining: number } }>(
    'https://api.github.com/rate_limit',
    { retry: false, cache: { ttl: 10 } },
  );
}
```

Consumers import and call:
```ts
import { getUser, getRepo } from './lib/github.js';

const user = await getUser('octocat');
const repo = await getRepo('octocat', 'Hello-World');
```

## Exposing Request Options

For options that genuinely vary per call site (like `signal` for cancellation or `priority` for rate limiting), pass them through:
```ts
import { type RequestPriority } from '@http-client-toolkit/core';

interface GetUserOptions {
  signal?: AbortSignal;
  priority?: RequestPriority;
}

export function getUser(username: string, options?: GetUserOptions) {
  return client.get<GitHubUser>(
    `https://api.github.com/users/${username}`,
    {
      cache: { ttl: 120 },
      signal: options?.signal,
      priority: options?.priority,
    },
  );
}
```

Keep caching and retry config out of this interface — those are endpoint concerns, not caller concerns.
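At the call site, cancellation then works through a standard `AbortController`. A minimal sketch, with a stub `getUser` standing in for the wrapper above (the stub and its timing are illustrative, not toolkit behaviour):

```ts
// Stub wrapper: resolves after 1s unless the forwarded signal aborts first.
function getUser(
  username: string,
  options?: { signal?: AbortSignal },
): Promise<{ login: string }> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => resolve({ login: username }), 1000);
    options?.signal?.addEventListener('abort', () => {
      clearTimeout(timer);
      reject(new DOMException('Aborted', 'AbortError'));
    });
  });
}

const controller = new AbortController();
const pending = getUser('octocat', { signal: controller.signal });

// Cancel when the result is no longer needed (e.g. user navigated away).
controller.abort();

pending.catch((err) => {
  console.log(err.name); // "AbortError"
});
```

Because the wrapper forwards `signal` untouched, callers get standard cancellation semantics without knowing anything about the client's internals.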
## Multiple APIs

Create a separate module for each API. All clients import the same shared stores from `lib/stores.ts` — each gets its own cache scope automatically based on its name:

```ts
import { HttpClient } from '@http-client-toolkit/core';
import { stores } from './stores.js';

export const stripeClient = new HttpClient({
  name: 'stripe',
  cache: { store: stores.cache },
  dedupe: stores.dedupe,
  rateLimit: {
    store: stores.rateLimit,
    defaultConfig: { limit: 25, windowMs: 1_000 },
  },
});
```

Rate limits are tracked per resource (URL origin), so each API’s limits are naturally isolated — which matches how real APIs enforce their limits.
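The per-origin default can be pictured as a resource extractor that maps a request URL to its origin. A rough sketch (the function name and signature here are illustrative, not the toolkit's exact `resourceExtractor` contract):

```ts
// Default-style resource derivation: two URLs on the same origin share one
// rate-limit bucket; different origins are tracked independently.
function extractResource(url: string): string {
  return new URL(url).origin;
}

console.log(extractResource('https://api.github.com/users/octocat'));
// "https://api.github.com"
console.log(extractResource('https://api.stripe.com/v1/charges'));
// "https://api.stripe.com"
```

A custom extractor could instead key on hostname plus a path prefix, which is useful when one origin hosts several independently limited APIs.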
## Scoping Behaviour

All clients share one database and one set of stores, but cache entries are scoped privately by default. The github client’s keys are prefixed with `github:` and the stripe client’s keys with `stripe:`, so entries never collide.
| Concern | Default behaviour | How to change |
|---|---|---|
| Cache | Private — keys prefixed with client name (e.g. `github:hash`) | Set `globalScope: true` to share cache entries across clients |
| Dedupe | Shared — uses the raw request hash with no client prefix | Dedupe is always shared when clients use the same store, which is usually desirable |
| Rate limit | Shared by resource — tracked per URL origin, not per client | Use `resourceExtractor` to customise how resources are derived from URLs |
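The private-versus-global cache behaviour in the table can be sketched as a key-derivation rule (illustrative only; the toolkit's actual hashing and key format are internal):

```ts
// Private scope: prefix with the client name. Global scope: raw hash only.
function cacheKey(
  clientName: string,
  requestHash: string,
  globalScope = false,
): string {
  return globalScope ? requestHash : `${clientName}:${requestHash}`;
}

console.log(cacheKey('github', 'abc123'));       // "github:abc123"
console.log(cacheKey('stripe', 'abc123'));       // "stripe:abc123" (no collision)
console.log(cacheKey('github', 'abc123', true)); // "abc123" (shared entry)
```

Because the prefix is the client's `name`, two clients only collide if they share a name or both opt into `globalScope: true`.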
## Sharing Cache Across Clients

To share cached data between clients — for example, if two clients access the same API and should reuse each other’s responses — set `globalScope: true`:

```ts
import { HttpClient } from '@http-client-toolkit/core';
import { stores } from './stores.js';

export const clientA = new HttpClient({
  name: 'client-a',
  cache: { store: stores.cache, globalScope: true },
  dedupe: stores.dedupe,
});

export const clientB = new HttpClient({
  name: 'client-b',
  cache: { store: stores.cache, globalScope: true },
  dedupe: stores.dedupe,
});
```

With `globalScope: true`, cache keys are stored without a client name prefix, so both clients read from and write to the same cache entries.