Introduction
`http-cache` is a library that acts as a middleware for caching HTTP responses. It is intended to be used by other libraries to support multiple HTTP clients and backend cache managers, though it does come with multiple optional manager implementations out of the box. `http-cache` is built on top of `http-cache-semantics`, which parses HTTP headers to correctly compute the cacheability of responses.
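To make that concrete, here is a minimal sketch of that header-driven decision, assuming `http-cache-semantics`' public `CachePolicy` API (the request and response types come from the `http` crate):

```rust
use http::{Request, Response};
use http_cache_semantics::CachePolicy;

fn main() {
    let request = Request::get("https://example.com/data").body(()).unwrap();
    let response = Response::builder()
        .header("cache-control", "max-age=300")
        .body(())
        .unwrap();
    // The policy is computed purely from the request/response headers
    let policy = CachePolicy::new(&request, &response);
    println!("storable: {}", policy.is_storable());
}
```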
Key Features
- Traditional Caching: Standard HTTP response caching with full buffering
- Streaming Support: Memory-efficient caching for large responses without full buffering
- Cache-Aware Rate Limiting: Intelligent rate limiting that only applies on cache misses, not cache hits
- Multiple Backends: Support for disk-based (cacache) and in-memory (moka, quick-cache) storage
- Client Integrations: Support for reqwest, surf, tower, and ureq HTTP clients
- RFC 7234 Compliance: Proper HTTP cache semantics with respect for cache-control headers
Streaming vs Traditional Caching
The library supports two caching approaches:
- Traditional Caching (`CacheManager` trait): Buffers entire responses in memory before caching. Suitable for smaller responses and simpler use cases.
- Streaming Caching (`StreamingCacheManager` trait): Processes responses as streams without full buffering. Ideal for large files, media content, or memory-constrained environments.
Cache Modes
When constructing a new instance of `HttpCache`, you must specify a cache mode. The cache mode determines how the cache will behave in certain situations. These modes are similar to the make-fetch-happen cache options. The available cache modes are:
- `Default`: Inspects the HTTP cache on the way to the network. If there is a fresh response it will be used. If there is a stale response a conditional request will be created, and a normal request otherwise. It then updates the HTTP cache with the response. If the revalidation request fails (for example, on a 500 or if you're offline), the stale response will be returned.
- `NoStore`: Ignores the HTTP cache on the way to the network. It will always create a normal request, and will never cache the response.
- `Reload`: Ignores the HTTP cache on the way to the network. It will always create a normal request, and will update the HTTP cache with the response.
- `NoCache`: Creates a conditional request if there is a response in the HTTP cache and a normal request otherwise. It then updates the HTTP cache with the response.
- `ForceCache`: Inspects the HTTP cache on the way to the network. If there is a cached response it will be used regardless of freshness. If there is no cached response it will create a normal request, and will update the cache with the response.
- `OnlyIfCached`: Inspects the HTTP cache on the way to the network. If there is a cached response it will be used regardless of freshness. If there is no cached response it will return a 504 Gateway Timeout error.
- `IgnoreRules`: Ignores the HTTP headers and always stores a response, given it was a 200 status code. It also ignores staleness when retrieving a response from the cache, so expiration of the cached response will need to be handled manually. If there was no cached response it will create a normal request, and will update the cache with the response.
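As a quick illustration (a minimal sketch reusing the `CACacheManager` constructor shown later in this book), the mode is just a field on `HttpCache`:

```rust
use http_cache::{CACacheManager, CacheMode, HttpCache, HttpCacheOptions};

// Serve only from the cache: a miss yields a 504 Gateway Timeout error
// instead of touching the network.
let cache = HttpCache {
    mode: CacheMode::OnlyIfCached,
    manager: CACacheManager::new("./cache".into(), true),
    options: HttpCacheOptions::default(),
};
```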
Maximum TTL Control
When using cache modes like `IgnoreRules` that bypass server cache headers, you can use the `max_ttl` option to provide expiration control. This is particularly useful for preventing cached responses from persisting indefinitely.
Usage
The `max_ttl` option accepts a `Duration` and sets a maximum time-to-live for cached responses:
```rust
use http_cache::{HttpCacheOptions, CACacheManager, HttpCache, CacheMode};
use std::time::Duration;

let manager = CACacheManager::new("./cache".into(), true);
let options = HttpCacheOptions {
    max_ttl: Some(Duration::from_secs(300)), // 5 minutes maximum
    ..Default::default()
};
let cache = HttpCache {
    mode: CacheMode::IgnoreRules, // Ignore server cache headers
    manager,
    options,
};
```
Behavior
- Override longer durations: If the server specifies a longer cache duration (e.g., `max-age=3600`), `max_ttl` will reduce it to the specified limit
- Respect shorter durations: If the server specifies a shorter duration (e.g., `max-age=60`), the server's shorter duration will be used
- Provide fallback duration: When using `IgnoreRules` mode, where server headers are ignored, `max_ttl` provides the cache duration
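These rules boil down to taking the minimum of the server's freshness lifetime and `max_ttl`, with `max_ttl` as the fallback when headers are ignored. A self-contained toy illustration of that arithmetic (not the library's internal code):

```rust
use std::time::Duration;

// Toy model of the rules above: cap the server TTL by max_ttl,
// and fall back to max_ttl when no server TTL applies.
fn effective_ttl(server_ttl: Option<Duration>, max_ttl: Option<Duration>) -> Option<Duration> {
    match (server_ttl, max_ttl) {
        (Some(s), Some(m)) => Some(s.min(m)),
        (None, m) => m,
        (s, None) => s,
    }
}

fn main() {
    let cap = Some(Duration::from_secs(300));
    // max-age=3600 is reduced to the 300-second limit
    assert_eq!(effective_ttl(Some(Duration::from_secs(3600)), cap), cap);
    // max-age=60 is shorter, so the server's duration wins
    assert_eq!(effective_ttl(Some(Duration::from_secs(60)), cap), Some(Duration::from_secs(60)));
    // IgnoreRules-style fallback: max_ttl provides the duration
    assert_eq!(effective_ttl(None, cap), cap);
}
```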
Examples
With IgnoreRules mode:
```rust
// Cache everything for 5 minutes, ignoring server headers
let options = HttpCacheOptions {
    max_ttl: Some(Duration::from_secs(300)),
    ..Default::default()
};
let cache = HttpCache {
    mode: CacheMode::IgnoreRules,
    manager,
    options,
};
```
With Default mode:
```rust
// Respect server headers but limit cache duration to 1 hour maximum
let options = HttpCacheOptions {
    // std::time::Duration has no stable from_hours, so use from_secs
    max_ttl: Some(Duration::from_secs(3600)),
    ..Default::default()
};
let cache = HttpCache {
    mode: CacheMode::Default,
    manager,
    options,
};
```
Content-Type Based Caching
You can implement selective caching based on response content types using the `response_cache_mode_fn` option. This allows you to cache only certain types of content while avoiding others.
Basic Content-Type Filtering
```rust
use http_cache::{HttpCacheOptions, CACacheManager, HttpCache, CacheMode};
use std::sync::Arc;

let manager = CACacheManager::new("./cache".into(), true);
let options = HttpCacheOptions {
    response_cache_mode_fn: Some(Arc::new(|_request_parts, response| {
        // Check the Content-Type header to decide caching behavior
        if let Some(content_type) = response.headers.get("content-type") {
            match content_type.to_str().unwrap_or("") {
                // Cache JSON APIs aggressively (ignore no-cache headers)
                ct if ct.starts_with("application/json") => Some(CacheMode::ForceCache),
                // Cache images with default HTTP caching rules
                ct if ct.starts_with("image/") => Some(CacheMode::Default),
                // Cache static assets aggressively
                ct if ct.starts_with("text/css") => Some(CacheMode::ForceCache),
                ct if ct.starts_with("application/javascript") => Some(CacheMode::ForceCache),
                // Don't cache HTML pages (often dynamic)
                ct if ct.starts_with("text/html") => Some(CacheMode::NoStore),
                // Don't cache unknown content types
                _ => Some(CacheMode::NoStore),
            }
        } else {
            // No Content-Type header - don't cache for safety
            Some(CacheMode::NoStore)
        }
    })),
    ..Default::default()
};
let cache = HttpCache {
    mode: CacheMode::Default, // This gets overridden by response_cache_mode_fn
    manager,
    options,
};
```
Advanced Content-Type Strategies
For more complex scenarios, you can combine content-type checking with other response properties:
```rust
use http_cache::{HttpCacheOptions, CACacheManager, HttpCache, CacheMode};
use std::sync::Arc;
use std::time::Duration;

let manager = CACacheManager::new("./cache".into(), true);
let options = HttpCacheOptions {
    response_cache_mode_fn: Some(Arc::new(|request_parts, response| {
        // Get content type
        let content_type = response
            .headers
            .get("content-type")
            .and_then(|ct| ct.to_str().ok())
            .unwrap_or("");

        // Get URL path for additional context
        let path = request_parts.uri.path();

        match content_type {
            // API responses
            ct if ct.starts_with("application/json") => {
                if path.starts_with("/api/") {
                    // Cache API responses, but respect server headers
                    Some(CacheMode::Default)
                } else {
                    // Force cache non-API JSON (like config files)
                    Some(CacheMode::ForceCache)
                }
            }
            // Static assets
            ct if ct.starts_with("text/css") || ct.starts_with("application/javascript") => {
                Some(CacheMode::ForceCache)
            }
            // Images
            ct if ct.starts_with("image/") => {
                if response.status == 200 {
                    Some(CacheMode::ForceCache)
                } else {
                    Some(CacheMode::NoStore) // Don't cache error images
                }
            }
            // HTML
            ct if ct.starts_with("text/html") => {
                if path.starts_with("/static/") {
                    Some(CacheMode::Default) // Static HTML can be cached
                } else {
                    Some(CacheMode::NoStore) // Dynamic HTML shouldn't be cached
                }
            }
            // Everything else
            _ => Some(CacheMode::NoStore),
        }
    })),
    // Limit cache duration to 1 hour max
    max_ttl: Some(Duration::from_secs(3600)),
    ..Default::default()
};
let cache = HttpCache {
    mode: CacheMode::Default,
    manager,
    options,
};
```
Common Content-Type Patterns
Here are some common content-type based caching strategies:
Static Assets (Aggressive Caching):
- `text/css` - CSS stylesheets
- `application/javascript` - JavaScript files
- `image/*` - All image types
- `font/*` - Web fonts

API Responses (Conditional Caching):
- `application/json` - JSON APIs
- `application/xml` - XML APIs
- `text/plain` - Plain text responses

Dynamic Content (No Caching):
- `text/html` - HTML pages (usually dynamic)
- `application/x-www-form-urlencoded` - Form submissions
Combining with Other Options
Content-type based caching works well with other cache options:
```rust
use http_cache::{HttpCacheOptions, CACacheManager, HttpCache, CacheMode};
use std::sync::Arc;
use std::time::Duration;

let options = HttpCacheOptions {
    // Content-type based mode selection
    response_cache_mode_fn: Some(Arc::new(|_req, response| {
        match response.headers.get("content-type")?.to_str().ok()? {
            ct if ct.starts_with("application/json") => Some(CacheMode::ForceCache),
            ct if ct.starts_with("image/") => Some(CacheMode::Default),
            _ => Some(CacheMode::NoStore),
        }
    })),
    // Custom cache keys for better organization
    cache_key: Some(Arc::new(|req| {
        format!("{}:{}:{}", req.method, req.uri.host().unwrap_or(""), req.uri.path())
    })),
    // Maximum cache duration
    max_ttl: Some(Duration::from_secs(1800)), // 30 minutes
    // Add cache status headers for debugging
    cache_status_headers: true,
    ..Default::default()
};
```
This approach gives you fine-grained control over what gets cached based on the actual content type returned by the server.
Complete Per-Request Customization
The HTTP cache library provides comprehensive per-request customization capabilities for cache keys, cache options, and cache modes. Here's a complete example showing all features:
```rust
use http_cache::{HttpCacheOptions, CACacheManager, HttpCache, CacheMode};
use std::sync::Arc;
use std::time::Duration;

let manager = CACacheManager::new("./cache".into(), true);
let options = HttpCacheOptions {
    // 1. Configure cache keys when initializing (per-request cache key override)
    cache_key: Some(Arc::new(|req: &http::request::Parts| {
        // Generate different cache keys based on request properties
        let path = req.uri.path();
        let query = req.uri.query().unwrap_or("");
        match path {
            // API endpoints: include user context in cache key
            p if p.starts_with("/api/") => {
                if req.headers.contains_key("authorization") {
                    format!("api:{}:{}:{}:authenticated", req.method, path, query)
                } else {
                    format!("api:{}:{}:{}:anonymous", req.method, path, query)
                }
            }
            // Static assets: simple cache key
            p if p.starts_with("/static/") => {
                format!("static:{}:{}", req.method, req.uri)
            }
            // Dynamic pages: include important headers
            _ => {
                let accept_lang = req
                    .headers
                    .get("accept-language")
                    .and_then(|h| h.to_str().ok())
                    .unwrap_or("en");
                format!("page:{}:{}:{}:{}", req.method, path, query, accept_lang)
            }
        }
    })),
    // 2. Override cache options on a per-request basis (request-based cache mode)
    cache_mode_fn: Some(Arc::new(|req: &http::request::Parts| {
        let path = req.uri.path();
        // Admin endpoints: never cache
        if path.starts_with("/admin/") {
            return CacheMode::NoStore;
        }
        // Check for cache control headers from client
        if req.headers.contains_key("x-no-cache") {
            return CacheMode::NoStore;
        }
        // Development mode: bypass cache
        if req.headers.get("x-env").and_then(|h| h.to_str().ok()) == Some("development") {
            return CacheMode::Reload;
        }
        // Static assets: force cache
        if path.starts_with("/static/") || path.ends_with(".css") || path.ends_with(".js") {
            return CacheMode::ForceCache;
        }
        // Default behavior for everything else
        CacheMode::Default
    })),
    // 3. Additional per-response cache override (response-based cache mode)
    response_cache_mode_fn: Some(Arc::new(|req: &http::request::Parts, response| {
        // Override cache behavior based on response content and status
        // Never cache error responses
        if response.status >= 400 {
            return Some(CacheMode::NoStore);
        }
        // Content-type based caching
        if let Some(content_type) = response.headers.get("content-type") {
            match content_type.to_str().unwrap_or("") {
                // Force cache JSON APIs even with no-cache headers
                ct if ct.starts_with("application/json") => Some(CacheMode::ForceCache),
                // Don't cache HTML in development
                ct if ct.starts_with("text/html") => {
                    if req.headers.get("x-env").and_then(|h| h.to_str().ok())
                        == Some("development")
                    {
                        Some(CacheMode::NoStore)
                    } else {
                        None // Use default behavior
                    }
                }
                _ => None,
            }
        } else {
            None
        }
    })),
    // Cache busting for related resources
    cache_bust: Some(Arc::new(|req: &http::request::Parts, _cache_key_fn, _current_key| {
        let path = req.uri.path();
        // When updating user data, bust user-specific caches
        if req.method == "POST" && path.starts_with("/api/users/") {
            if let Some(user_id) =
                path.strip_prefix("/api/users/").and_then(|s| s.split('/').next())
            {
                return vec![
                    format!("api:GET:/api/users/{}:authenticated", user_id),
                    format!("api:GET:/api/users/{}:anonymous", user_id),
                    "api:GET:/api/users:authenticated".to_string(),
                ];
            }
        }
        vec![] // No cache busting by default
    })),
    // Global cache duration limit (std Duration has no stable from_hours)
    max_ttl: Some(Duration::from_secs(24 * 60 * 60)),
    // Enable cache status headers for debugging
    cache_status_headers: true,
    ..Default::default()
};
let cache = HttpCache {
    mode: CacheMode::Default, // Can be overridden by cache_mode_fn and response_cache_mode_fn
    manager,
    options,
};
```
Key Capabilities Summary
- Custom Cache Keys: The `cache_key` function runs for every request, allowing complete customization of cache keys based on any request property
- Request-Based Cache Mode Override: The `cache_mode_fn` allows overriding cache behavior based on request properties (headers, path, method, etc.)
- Response-Based Cache Mode Override: The `response_cache_mode_fn` allows overriding cache behavior based on both request and response data
- Cache Busting: The `cache_bust` function allows invalidating related cache entries
- Global Settings: Options like `max_ttl` and `cache_status_headers` provide global configuration
All of these functions are called on a per-request basis, giving you complete control over caching behavior for each individual request.
Rate Limiting
The http-cache library provides built-in cache-aware rate limiting functionality that only applies when making actual network requests (cache misses), not when serving responses from cache (cache hits).
This feature is available behind the `rate-limiting` feature flag and provides an elegant solution for scraping scenarios where you want to cache responses to avoid rate limits, but still need to respect rate limits for new requests.
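To enable it, turn the feature on in Cargo.toml (the version number here is illustrative; use the release you depend on):

```toml
[dependencies]
http-cache = { version = "1.0", features = ["rate-limiting"] }
```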
How It Works
The rate limiting follows this flow:
- Check cache first - The cache is checked for an existing response
- If cache hit - Return the cached response immediately (no rate limiting applied)
- If cache miss - Apply rate limiting before making the network request
- Make network request - Fetch from the remote server after rate limiting
- Cache and return - Store the response and return it
This ensures that:
- Cached responses are served instantly without any rate limiting delays
- Only actual network requests are rate limited
- Multiple cache hits can be served concurrently without waiting
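The flow can be pictured with a small self-contained toy (plain synchronous Rust; the real library does this internally with async traits and the rate limiters described below):

```rust
use std::collections::HashMap;

// Toy model of the flow above: hits bypass the limiter entirely,
// and only misses consume rate-limit budget.
fn fetch(url: &str, cache: &mut HashMap<String, String>, budget: &mut u32) -> String {
    if let Some(hit) = cache.get(url) {
        return hit.clone(); // steps 1-2: cache hit, no rate limiting
    }
    assert!(*budget > 0, "rate limited"); // step 3: rate limit the miss
    *budget -= 1;
    let response = format!("body for {url}"); // step 4: stand-in network fetch
    cache.insert(url.to_string(), response.clone()); // step 5: cache and return
    response
}

fn main() {
    let (mut cache, mut budget) = (HashMap::new(), 1);
    fetch("https://example.com/a", &mut cache, &mut budget); // miss: consumes budget
    fetch("https://example.com/a", &mut cache, &mut budget); // hit: budget untouched
    assert_eq!(budget, 0);
}
```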
Rate Limiting Strategies
DomainRateLimiter
Applies rate limiting per domain, allowing different rate limits for different hosts:
```rust
use http_cache::rate_limiting::{DomainRateLimiter, Quota};
use std::num::NonZeroU32;
use std::sync::Arc;

// Allow 10 requests per second per domain
let quota = Quota::per_second(NonZeroU32::new(10).unwrap());
let rate_limiter = Arc::new(DomainRateLimiter::new(quota));
```
DirectRateLimiter
Applies a global rate limit across all requests regardless of domain:
```rust
use http_cache::rate_limiting::{DirectRateLimiter, Quota};
use std::num::NonZeroU32;
use std::sync::Arc;

// Allow 5 requests per second globally
let quota = Quota::per_second(NonZeroU32::new(5).unwrap());
let rate_limiter = Arc::new(DirectRateLimiter::direct(quota));
```
Custom Rate Limiters
You can implement your own rate limiting strategy by implementing the `CacheAwareRateLimiter` trait:
```rust
use http_cache::rate_limiting::CacheAwareRateLimiter;
use async_trait::async_trait;

struct CustomRateLimiter {
    // Your custom rate limiting logic
}

#[async_trait]
impl CacheAwareRateLimiter for CustomRateLimiter {
    async fn until_key_ready(&self, key: &str) {
        // Implement your rate limiting logic here
        // This method should block until it's safe to make a request
    }

    fn check_key(&self, key: &str) -> bool {
        // Return true if a request can be made immediately
        // Return false if rate limiting would apply
        true
    }
}
```
Configuration
Rate limiting is configured through the `HttpCacheOptions` struct:
```rust
use http_cache::{HttpCache, HttpCacheOptions, CacheMode};
use http_cache::rate_limiting::{DomainRateLimiter, Quota};
use std::sync::Arc;

let quota = Quota::per_second(std::num::NonZeroU32::new(10).unwrap());
let rate_limiter = Arc::new(DomainRateLimiter::new(quota));

let cache = HttpCache {
    mode: CacheMode::Default,
    manager: your_cache_manager, // any CacheManager implementation
    options: HttpCacheOptions {
        rate_limiter: Some(rate_limiter),
        ..Default::default()
    },
};
```
Client-Specific Examples
reqwest
```rust
use http_cache_reqwest::{Cache, HttpCache, CACacheManager, CacheMode, HttpCacheOptions};
use http_cache_reqwest::{DomainRateLimiter, Quota};
use reqwest_middleware::ClientBuilder;
use std::sync::Arc;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let quota = Quota::per_second(std::num::NonZeroU32::new(5).unwrap());
    let rate_limiter = Arc::new(DomainRateLimiter::new(quota));

    let client = ClientBuilder::new(reqwest::Client::new())
        .with(Cache(HttpCache {
            mode: CacheMode::Default,
            manager: CACacheManager::new("./cache".into(), true),
            options: HttpCacheOptions {
                rate_limiter: Some(rate_limiter),
                ..Default::default()
            },
        }))
        .build();

    // First request - will be rate limited and cached
    let resp1 = client.get("https://httpbin.org/delay/1").send().await?;
    println!("First response: {}", resp1.status());

    // Second identical request - served from cache, no rate limiting
    let resp2 = client.get("https://httpbin.org/delay/1").send().await?;
    println!("Second response: {}", resp2.status());

    Ok(())
}
```
surf
```rust
use http_cache_surf::{Cache, HttpCache, CACacheManager, CacheMode, HttpCacheOptions};
use http_cache_surf::{DomainRateLimiter, Quota};
use surf::Client;
use std::sync::Arc;
use macro_rules_attribute::apply;
use smol_macros::main;

#[apply(main!)]
async fn main() -> surf::Result<()> {
    let quota = Quota::per_second(std::num::NonZeroU32::new(5).unwrap());
    let rate_limiter = Arc::new(DomainRateLimiter::new(quota));

    let client = Client::new().with(Cache(HttpCache {
        mode: CacheMode::Default,
        manager: CACacheManager::new("./cache".into(), true),
        options: HttpCacheOptions {
            rate_limiter: Some(rate_limiter),
            ..Default::default()
        },
    }));

    // Requests will be rate limited on cache misses only
    let mut resp1 = client.get("https://httpbin.org/delay/1").await?;
    println!("First response: {}", resp1.body_string().await?);

    let mut resp2 = client.get("https://httpbin.org/delay/1").await?;
    println!("Second response: {}", resp2.body_string().await?);

    Ok(())
}
```
tower
```rust
use http_cache_tower::{HttpCacheLayer, CACacheManager};
use http_cache::{CacheMode, HttpCache, HttpCacheOptions};
use http_cache_tower::{DomainRateLimiter, Quota};
use tower::ServiceBuilder;
use std::sync::Arc;

#[tokio::main]
async fn main() {
    let quota = Quota::per_second(std::num::NonZeroU32::new(5).unwrap());
    let rate_limiter = Arc::new(DomainRateLimiter::new(quota));

    let cache = HttpCache {
        mode: CacheMode::Default,
        manager: CACacheManager::new("./cache".into(), true),
        options: HttpCacheOptions {
            rate_limiter: Some(rate_limiter),
            ..Default::default()
        },
    };

    let service = ServiceBuilder::new()
        .layer(HttpCacheLayer::with_cache(cache))
        .service_fn(your_service_function);

    // Use the service - rate limiting will be applied on cache misses
}
```
ureq
```rust
use http_cache_ureq::{CachedAgent, CACacheManager, CacheMode, HttpCacheOptions};
use http_cache_ureq::{DomainRateLimiter, Quota};
use std::sync::Arc;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    smol::block_on(async {
        let quota = Quota::per_second(std::num::NonZeroU32::new(5).unwrap());
        let rate_limiter = Arc::new(DomainRateLimiter::new(quota));

        let agent = CachedAgent::builder()
            .cache_manager(CACacheManager::new("./cache".into(), true))
            .cache_mode(CacheMode::Default)
            .cache_options(HttpCacheOptions {
                rate_limiter: Some(rate_limiter),
                ..Default::default()
            })
            .build()?;

        // Rate limiting applies only on cache misses
        let response1 = agent.get("https://httpbin.org/delay/1").call().await?;
        println!("First response: {}", response1.status());

        let response2 = agent.get("https://httpbin.org/delay/1").call().await?;
        println!("Second response: {}", response2.status());

        Ok(())
    })
}
```
Use Cases
This cache-aware rate limiting is particularly useful for:
- Web scraping - Cache responses to avoid repeated requests while respecting rate limits for new content
- API clients - Improve performance with caching while staying within API rate limits
- Data collection - Efficiently gather data without overwhelming servers
- Development and testing - Reduce API calls during development while maintaining realistic rate limiting behavior
Streaming Support
Rate limiting works seamlessly with streaming cache operations. When using streaming managers or streaming middleware, rate limiting is applied in the same cache-aware manner:
Streaming Cache Examples
reqwest Streaming with Rate Limiting
```rust
use http_cache_reqwest::{StreamingCache, HttpCacheOptions};
use http_cache::{StreamingManager, CacheMode};
use http_cache_reqwest::{DomainRateLimiter, Quota};
use reqwest_middleware::ClientBuilder;
use std::sync::Arc;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let quota = Quota::per_second(std::num::NonZeroU32::new(2).unwrap());
    let rate_limiter = Arc::new(DomainRateLimiter::new(quota));
    let streaming_manager = StreamingManager::new("./streaming-cache".into());

    let client = ClientBuilder::new(reqwest::Client::new())
        .with(StreamingCache::with_options(
            streaming_manager,
            CacheMode::Default,
            HttpCacheOptions {
                rate_limiter: Some(rate_limiter),
                ..Default::default()
            },
        ))
        .build();

    // First request - rate limited and cached as streaming
    let resp1 = client.get("https://httpbin.org/stream-bytes/10000").send().await?;
    println!("First streaming response: {}", resp1.status());

    // Second request - served from streaming cache, no rate limiting
    let resp2 = client.get("https://httpbin.org/stream-bytes/10000").send().await?;
    println!("Second streaming response: {}", resp2.status());

    Ok(())
}
```
tower Streaming with Rate Limiting
```rust
use http_cache_tower::HttpCacheStreamingLayer;
use http_cache::{StreamingManager, CacheMode, HttpCacheOptions};
use http_cache_tower::{DomainRateLimiter, Quota};
use tower::ServiceBuilder;
use std::sync::Arc;

#[tokio::main]
async fn main() {
    let quota = Quota::per_second(std::num::NonZeroU32::new(3).unwrap());
    let rate_limiter = Arc::new(DomainRateLimiter::new(quota));
    let streaming_manager = StreamingManager::new("./streaming-cache".into());

    let layer = HttpCacheStreamingLayer::with_options(
        streaming_manager,
        HttpCacheOptions {
            rate_limiter: Some(rate_limiter),
            ..Default::default()
        },
    );

    let service = ServiceBuilder::new()
        .layer(layer)
        .service_fn(your_streaming_service_function);

    // Streaming responses will be rate limited on cache misses only
}
```
Streaming Rate Limiting Benefits
When using streaming with rate limiting:
- Memory efficiency - Large responses are streamed without full buffering
- Cache-aware rate limiting - Rate limits only apply to actual network requests, not streaming from cache
- Concurrent streaming - Multiple cached streams can be served simultaneously without rate limiting delays
- Efficient large file handling - Perfect for scenarios involving large files or media content
Performance Benefits
By only applying rate limiting on cache misses, you get:
- Instant cache hits - No rate limiting delays for cached responses
- Concurrent cache serving - Multiple cache hits can be served simultaneously
- Efficient scraping - Re-scraping cached content doesn't count against rate limits
- Better user experience - Faster response times for frequently accessed resources
- Streaming optimization - Large cached responses stream immediately without rate limiting overhead
Development
`http-cache` is meant to be extended to support multiple HTTP clients and backend cache managers. A `CacheManager` trait has been provided to help ease support for new backend cache managers. For memory-efficient handling of large responses, a `StreamingCacheManager` trait is also available. Similarly, a `Middleware` trait has been provided to help ease support for new HTTP clients.
Supporting a Backend Cache Manager
This section is intended for those looking to implement a custom backend cache manager, or to understand how the `CacheManager` and `StreamingCacheManager` traits work.
Supporting an HTTP Client
This section is intended for those looking to implement a custom HTTP client, or to understand how the `Middleware` trait works.
Supporting a Backend Cache Manager
This section is intended for those looking to implement a custom backend cache manager, or to understand how the `CacheManager` and `StreamingCacheManager` traits work.
The CacheManager trait
The `CacheManager` trait is the main trait that needs to be implemented to support a new backend cache manager. It requires three methods:
- `get`: retrieve a cached response given the provided cache key
- `put`: store a response and related policy object in the cache associated with the provided cache key
- `delete`: remove a cached response from the cache associated with the provided cache key
Because the methods are asynchronous, they currently require the `async_trait` attribute. This may change in the future.
The get method
The `get` method is used to retrieve a cached response given the provided cache key. It returns a `Result<Option<(HttpResponse, CachePolicy)>, BoxError>`, where `HttpResponse` is the cached response and `CachePolicy` is the associated cache policy object that provides helpful metadata. If the cache key does not exist in the cache, `Ok(None)` is returned.
The put method
The `put` method is used to store a response and related policy object in the cache associated with the provided cache key. It returns a `Result<HttpResponse, BoxError>`, where `HttpResponse` is the passed response.
The delete method
The `delete` method is used to remove a cached response from the cache associated with the provided cache key. It returns a `Result<(), BoxError>`.
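Putting the three signatures together, the trait shape looks roughly like this (a sketch inferred from the descriptions above; consult the http-cache source for the authoritative definition):

```rust
// Sketch only - inferred from the method descriptions above.
#[async_trait::async_trait]
pub trait CacheManager: Send + Sync + 'static {
    async fn get(
        &self,
        cache_key: &str,
    ) -> Result<Option<(HttpResponse, CachePolicy)>, BoxError>;

    async fn put(
        &self,
        cache_key: String,
        response: HttpResponse,
        policy: CachePolicy,
    ) -> Result<HttpResponse, BoxError>;

    async fn delete(&self, cache_key: &str) -> Result<(), BoxError>;
}
```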
The StreamingCacheManager trait
The `StreamingCacheManager` trait extends the traditional `CacheManager` to support streaming operations for memory-efficient handling of large responses. It includes all the methods from `CacheManager` plus additional streaming-specific methods:
- `get_stream`: retrieve a cached response as a stream given the provided cache key
- `put_stream`: store a streaming response in the cache associated with the provided cache key
- `stream_response`: create a streaming response body from cached data
The streaming approach is particularly useful for large responses where you don't want to buffer the entire response body in memory.
How to implement a custom backend cache manager
This guide shows examples of implementing both traditional and streaming cache managers. We'll use the `CACacheManager` as an example of implementing the `CacheManager` trait for traditional disk-based caching, and the `StreamingManager` as an example of implementing the `StreamingCacheManager` trait for streaming support, which stores response metadata and body content separately to enable memory-efficient handling of large responses. There are several ways to accomplish this, so feel free to experiment!
Part One: The base structs
We'll show the base structs for both traditional and streaming cache managers.
For traditional caching, we'll use a simple struct that stores the cache directory path:
```rust
/// Traditional cache manager using cacache for disk-based storage
#[derive(Debug, Clone)]
pub struct CACacheManager {
    /// Directory where the cache will be stored.
    pub path: PathBuf,
    /// Options for removing cache entries.
    pub remove_opts: cacache::RemoveOpts,
}
```
For streaming caching, we'll use a struct that stores the root path for the cache directory and organizes content separately:
```rust
/// File-based streaming cache manager
#[derive(Debug, Clone)]
pub struct StreamingManager {
    root_path: PathBuf,
    ref_counter: ContentRefCounter,
    config: StreamingCacheConfig,
}
```
The `StreamingManager` follows a "simple and reliable" design philosophy:
- Focused functionality: Core streaming operations without unnecessary complexity
- Simple configuration: Minimal options with sensible defaults
- Predictable behavior: Straightforward LRU eviction and error handling
- Easy maintenance: Clean code paths for debugging and troubleshooting
This approach prioritizes maintainability and reliability over feature completeness, making it easier to understand, debug, and extend.
For traditional caching, we use a simple `Store` struct that contains both the response and policy together:
```rust
/// Store struct for traditional caching
#[derive(Debug, Deserialize, Serialize)]
struct Store {
    response: HttpResponse,
    policy: CachePolicy,
}
```
For streaming caching, we create a metadata struct that stores response information separately from the content:
```rust
/// Metadata stored for each cached response
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct CacheMetadata {
    pub status: u16,
    pub version: u8,
    pub headers: HashMap<String, String>,
    pub content_digest: String,
    pub policy: CachePolicy,
    pub created_at: u64,
}
```
These structs derive serde's Deserialize and Serialize to ease serialization and deserialization: JSON is used for the streaming metadata, and bincode for the traditional Store struct.
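A minimal round-trip sketch of what those derives buy us (`Entry` is a hypothetical stand-in for the Store and CacheMetadata structs; it assumes the serde, serde_json, and bincode crates as dependencies):

```rust
use serde::{Deserialize, Serialize};

// Hypothetical stand-in for Store/CacheMetadata
#[derive(Debug, PartialEq, Serialize, Deserialize)]
struct Entry {
    status: u16,
    content_digest: String,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let entry = Entry { status: 200, content_digest: "abc123".into() };
    // JSON round trip (the streaming metadata path)
    let json = serde_json::to_vec(&entry)?;
    assert_eq!(entry, serde_json::from_slice::<Entry>(&json)?);
    // bincode round trip (the traditional Store path)
    let bin = bincode::serialize(&entry)?;
    assert_eq!(entry, bincode::deserialize::<Entry>(&bin)?);
    Ok(())
}
```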
Part Two: Implementing the traditional CacheManager trait
For traditional caching that stores entire response bodies, you implement just the `CacheManager` trait. Here's the `CACacheManager` implementation using the `cacache` library:
```rust
impl CACacheManager {
    /// Creates a new CACacheManager with the given path.
    pub fn new(path: PathBuf, remove_fully: bool) -> Self {
        Self {
            path,
            remove_opts: cacache::RemoveOpts::new().remove_fully(remove_fully),
        }
    }
}

#[async_trait::async_trait]
impl CacheManager for CACacheManager {
    async fn get(
        &self,
        cache_key: &str,
    ) -> Result<Option<(HttpResponse, CachePolicy)>> {
        let store: Store = match cacache::read(&self.path, cache_key).await {
            Ok(d) => bincode::deserialize(&d)?,
            Err(_e) => return Ok(None),
        };
        Ok(Some((store.response, store.policy)))
    }

    async fn put(
        &self,
        cache_key: String,
        response: HttpResponse,
        policy: CachePolicy,
    ) -> Result<HttpResponse> {
        let data = Store { response, policy };
        let bytes = bincode::serialize(&data)?;
        cacache::write(&self.path, cache_key, bytes).await?;
        Ok(data.response)
    }

    async fn delete(&self, cache_key: &str) -> Result<()> {
        self.remove_opts.clone().remove(&self.path, cache_key).await?;
        Ok(())
    }
}
```
Part Three: Implementing the StreamingCacheManager trait
For streaming caching that handles large responses without buffering them entirely in memory, you implement the `StreamingCacheManager` trait, which extends `CacheManager` with streaming-specific methods. We'll start with the implementation signature, but first we must make sure we apply the `async_trait` attribute.
```rust
#[async_trait::async_trait]
impl StreamingCacheManager for StreamingManager {
    type Body = StreamingBody<Empty<Bytes>>;
    // ...
}
```
Helper methods
First, let's implement some helper methods that our cache will need:
```rust
impl StreamingManager {
    /// Create a new streaming cache manager with default configuration
    pub fn new(root_path: PathBuf) -> Self {
        Self::new_with_config(root_path, StreamingCacheConfig::default())
    }

    /// Create a new streaming cache manager with custom configuration
    pub fn new_with_config(
        root_path: PathBuf,
        config: StreamingCacheConfig,
    ) -> Self {
        Self { root_path, ref_counter: ContentRefCounter::new(), config }
    }

    /// Get the path for storing metadata
    fn metadata_path(&self, key: &str) -> PathBuf {
        let encoded_key = hex::encode(key.as_bytes());
        self.root_path
            .join("cache-v2")
            .join("metadata")
            .join(format!("{encoded_key}.json"))
    }

    /// Get the path for storing content
    fn content_path(&self, digest: &str) -> PathBuf {
        self.root_path.join("cache-v2").join("content").join(digest)
    }

    /// Calculate SHA256 digest of content
    fn calculate_digest(content: &[u8]) -> String {
        let mut hasher = Sha256::new();
        hasher.update(content);
        hex::encode(hasher.finalize())
    }
}
```
The streaming get method
The `get` method accepts a `&str` as the cache key and returns a `Result<Option<(Response<Self::Body>, CachePolicy)>>`. This method reads the metadata file to get response information, then creates a streaming body that reads directly from the cached content file without loading it into memory.
```rust
async fn get(
    &self,
    cache_key: &str,
) -> Result<Option<(Response<Self::Body>, CachePolicy)>> {
    let metadata_path = self.metadata_path(cache_key);

    // Check if metadata file exists
    if !metadata_path.exists() {
        return Ok(None);
    }

    // Read and parse metadata
    let metadata_content = tokio::fs::read(&metadata_path).await?;
    let metadata: CacheMetadata = serde_json::from_slice(&metadata_content)?;

    // Check if content file exists
    let content_path = self.content_path(&metadata.content_digest);
    if !content_path.exists() {
        return Ok(None);
    }

    // Open content file for streaming
    let file = tokio::fs::File::open(&content_path).await?;

    // Build response with streaming body
    let mut response_builder = Response::builder()
        .status(metadata.status)
        .version(/* convert from metadata.version */);

    // Add headers
    for (name, value) in &metadata.headers {
        if let (Ok(header_name), Ok(header_value)) = (
            name.parse::<http::HeaderName>(),
            value.parse::<http::HeaderValue>(),
        ) {
            response_builder = response_builder.header(header_name, header_value);
        }
    }

    // Create streaming body from file
    let body = StreamingBody::from_file(file);
    let response = response_builder.body(body)?;

    Ok(Some((response, metadata.policy)))
}
```
The streaming put method
The `put` method accepts a `String` as the cache key, a streaming `Response<B>`, a `CachePolicy`, and a request URL. It stores the response body content in a file and the metadata separately, enabling efficient retrieval without loading the entire response into memory.
```rust
async fn put<B>(
    &self,
    cache_key: String,
    response: Response<B>,
    policy: CachePolicy,
    _request_url: Url,
) -> Result<Response<Self::Body>>
where
    B: http_body::Body + Send + 'static,
    B::Data: Send,
    B::Error: Into<StreamingError>,
{
    let (parts, body) = response.into_parts();

    // Collect body content
    let collected = body.collect().await?;
    let body_bytes = collected.to_bytes();

    // Calculate content digest for deduplication
    let content_digest = Self::calculate_digest(&body_bytes);
    let content_path = self.content_path(&content_digest);

    // Ensure content directory exists and write content if not already present
    if !content_path.exists() {
        if let Some(parent) = content_path.parent() {
            tokio::fs::create_dir_all(parent).await?;
        }
        tokio::fs::write(&content_path, &body_bytes).await?;
    }

    // Create metadata
    let metadata = CacheMetadata {
        status: parts.status.as_u16(),
        version: match parts.version {
            Version::HTTP_11 => 11,
            Version::HTTP_2 => 2,
            // ... other versions
            _ => 11,
        },
        headers: parts
            .headers
            .iter()
            .map(|(name, value)| {
                (name.to_string(), value.to_str().unwrap_or("").to_string())
            })
            .collect(),
        content_digest: content_digest.clone(),
        policy,
        created_at: std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)
            .unwrap_or_default()
            .as_secs(),
    };

    // Write metadata
    let metadata_path = self.metadata_path(&cache_key);
    if let Some(parent) = metadata_path.parent() {
        tokio::fs::create_dir_all(parent).await?;
    }
    let metadata_json = serde_json::to_vec(&metadata)?;
    tokio::fs::write(&metadata_path, &metadata_json).await?;

    // Return response with buffered body for immediate use
    let response = Response::from_parts(parts, StreamingBody::buffered(body_bytes));
    Ok(response)
}
```
The streaming delete method
The `delete` method accepts a `&str` as the cache key. It removes both the metadata file and the associated content file from the cache directory.
```rust
async fn delete(&self, cache_key: &str) -> Result<()> {
    let metadata_path = self.metadata_path(cache_key);

    // Read metadata to get content digest
    if let Ok(metadata_content) = tokio::fs::read(&metadata_path).await {
        if let Ok(metadata) =
            serde_json::from_slice::<CacheMetadata>(&metadata_content)
        {
            let content_path = self.content_path(&metadata.content_digest);
            // Remove content file
            tokio::fs::remove_file(&content_path).await.ok();
        }
    }

    // Remove metadata file
    tokio::fs::remove_file(&metadata_path).await.ok();
    Ok(())
}
```
Our `StreamingManager` struct now meets the requirements of both the `CacheManager` and `StreamingCacheManager` traits and provides streaming support without buffering large response bodies in memory!
Supporting an HTTP Client
This section is intended for those who wish to add support for a new HTTP client to `http-cache`, or to understand how the `Middleware` trait works. If you are looking to use `http-cache` with an HTTP client that is already supported, please see the Client Implementations section.
The ecosystem supports both traditional caching (where entire response bodies are buffered) and streaming caching (for memory-efficient handling of large responses). The Tower implementation provides the most comprehensive streaming support.
The Middleware trait
The `Middleware` trait is the main trait that needs to be implemented to add support for a new HTTP client. It requires nine methods:
- `is_method_get_head`: returns `true` if the method of the request is GET or HEAD, `false` otherwise
- `policy`: returns a `CachePolicy` with default options for the given `HttpResponse`
- `policy_with_options`: returns a `CachePolicy` with the provided `CacheOptions` for the given `HttpResponse`
- `update_headers`: updates the request headers with the provided `http::request::Parts`
- `force_no_cache`: overrides the Cache-Control header with the 'no-cache' directive
- `parts`: returns the `http::request::Parts` from the request
- `url`: returns the requested `Url`
- `method`: returns the method of the request as a `String`
- `remote_fetch`: performs the request and returns the `HttpResponse`
Because the `remote_fetch` method is asynchronous, it currently requires the `async_trait` attribute. This may change in the future.
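Collected into one place, the trait shape looks roughly like this (a sketch inferred from the method list above; the surf implementation later in this chapter shows the signatures in practice):

```rust
// Sketch only - inferred from the method descriptions above.
#[async_trait::async_trait]
pub(crate) trait Middleware: Send {
    fn is_method_get_head(&self) -> bool;
    fn policy(&self, response: &HttpResponse) -> Result<CachePolicy>;
    fn policy_with_options(
        &self,
        response: &HttpResponse,
        options: CacheOptions,
    ) -> Result<CachePolicy>;
    fn update_headers(&mut self, parts: &http::request::Parts) -> Result<()>;
    fn force_no_cache(&mut self) -> Result<()>;
    fn parts(&self) -> Result<http::request::Parts>;
    fn url(&self) -> Result<Url>;
    fn method(&self) -> Result<String>;
    async fn remote_fetch(&mut self) -> Result<HttpResponse>;
}
```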
The is_method_get_head method
The `is_method_get_head` method is used to determine whether the method of the request is GET or HEAD. It returns a `bool`: `true` indicates the method is GET or HEAD, `false` otherwise.
The policy and policy_with_options methods
The `policy` method is used to generate the cache policy for the given `HttpResponse`. It returns a `CachePolicy` with default options.
The `policy_with_options` method is used to generate the cache policy for the given `HttpResponse` with the provided `CacheOptions`. It returns a `CachePolicy`.
The update_headers method
The `update_headers` method is used to update the request headers with the provided `http::request::Parts`.
The force_no_cache method
The `force_no_cache` method is used to override the Cache-Control header with the 'no-cache' directive. This allows caching but forces revalidation before reuse.
The parts method
The `parts` method is used to return the `http::request::Parts` from the request, which eases working with the `http_cache_semantics` crate.
The url method
The `url` method is used to return the requested `Url` in a standard format.
The method method
The `method` method is used to return the HTTP method of the request as a `String` to standardize the format.
The remote_fetch method
The `remote_fetch` method is used to perform the request and return the `HttpResponse`. The goal here is to abstract away the HTTP client implementation and return a more generic response type.
How to implement a custom HTTP client
This guide will use the `surf` HTTP client as an example. The full source can be found here. There are several ways to accomplish this, so feel free to experiment!
Part One: The base structs
First we will create a wrapper for the `HttpCache` struct. This is required because we cannot implement a trait for a type declared in another crate (see the Rust docs on the orphan rule for more info). We will call it `Cache` in this case.
```rust
#[derive(Debug)]
pub struct Cache<T: CacheManager>(pub HttpCache<T>);
```
Next we will create a struct to store the request and anything else we will need for our `surf::Middleware` implementation (more on that later). This struct will also implement the http-cache `Middleware` trait. We'll call it `SurfMiddleware` in this case.
```rust
pub(crate) struct SurfMiddleware<'a> {
    pub req: Request,
    pub client: Client,
    pub next: Next<'a>,
}
```
Part Two: Implementing the Middleware trait
Now that we have our base structs, we can implement the `Middleware` trait for our `SurfMiddleware` struct. We'll start with the `is_method_get_head` method, but first we must make sure we apply the `async_trait` attribute.
```rust
#[async_trait::async_trait]
impl Middleware for SurfMiddleware<'_> {
    // ...
}
```
The `is_method_get_head` method will check the request stored in our `SurfMiddleware` struct and return `true` if the method is GET or HEAD, `false` otherwise.
```rust
fn is_method_get_head(&self) -> bool {
    self.req.method() == Method::Get || self.req.method() == Method::Head
}
```
Next we'll implement the `policy` method. This method accepts a reference to the `HttpResponse` and returns a `CachePolicy` with default options. We'll use the `http_cache_semantics::CachePolicy::new` method to generate the policy.
```rust
fn policy(&self, response: &HttpResponse) -> Result<CachePolicy> {
    Ok(CachePolicy::new(&self.parts()?, &response.parts()?))
}
```
The `policy_with_options` method is similar to the `policy` method, but accepts a `CacheOptions` struct to override the default options. We'll use the `http_cache_semantics::CachePolicy::new_options` method to generate the policy.
```rust
fn policy_with_options(
    &self,
    response: &HttpResponse,
    options: CacheOptions,
) -> Result<CachePolicy> {
    Ok(CachePolicy::new_options(
        &self.parts()?,
        &response.parts()?,
        SystemTime::now(),
        options,
    ))
}
```
Next we'll implement the `update_headers` method. This method accepts a reference to the `http::request::Parts` and updates the request headers. We iterate over the part headers, attempt to convert each header value to a `HeaderValue`, and set the header on the request. If the conversion fails, we return an error.
```rust
fn update_headers(&mut self, parts: &Parts) -> Result<()> {
    for header in parts.headers.iter() {
        let value = match HeaderValue::from_str(header.1.to_str()?) {
            Ok(v) => v,
            Err(_e) => return Err(Box::new(BadHeader)),
        };
        self.req.set_header(header.0.as_str(), value);
    }
    Ok(())
}
```
The `force_no_cache` method is used to override the Cache-Control header in the request with the 'no-cache' directive. This allows caching but forces revalidation before reuse.
```rust
fn force_no_cache(&mut self) -> Result<()> {
    self.req.insert_header(CACHE_CONTROL.as_str(), "no-cache");
    Ok(())
}
```
The `parts` method is used to return the `http::request::Parts` from the request, which eases working with the `http_cache_semantics` crate.
```rust
fn parts(&self) -> Result<Parts> {
    let mut converted = request::Builder::new()
        .method(self.req.method().as_ref())
        .uri(self.req.url().as_str())
        .body(())?;
    {
        let headers = converted.headers_mut();
        for header in self.req.iter() {
            headers.insert(
                http::header::HeaderName::from_str(header.0.as_str())?,
                http::HeaderValue::from_str(header.1.as_str())?,
            );
        }
    }
    Ok(converted.into_parts().0)
}
```
The `url` method is used to return the requested `Url` in a standard format.
```rust
fn url(&self) -> Result<Url> {
    Ok(self.req.url().clone())
}
```
The `method` method is used to return the HTTP method of the request as a `String` to standardize the format.
```rust
fn method(&self) -> Result<String> {
    Ok(self.req.method().as_ref().to_string())
}
```
Finally, the `remote_fetch` method is used to perform the request and return the `HttpResponse`.
```rust
async fn remote_fetch(&mut self) -> Result<HttpResponse> {
    let url = self.req.url().clone();
    let mut res = self.next.run(self.req.clone(), self.client.clone()).await?;
    let mut headers = HashMap::new();
    for header in res.iter() {
        headers.insert(
            header.0.as_str().to_owned(),
            header.1.as_str().to_owned(),
        );
    }
    let status = res.status().into();
    let version = res.version().unwrap_or(Version::Http1_1);
    let body: Vec<u8> = res.body_bytes().await?;
    Ok(HttpResponse {
        body,
        headers,
        status,
        url,
        version: version.try_into()?,
    })
}
```
Our `SurfMiddleware` struct now meets the requirements of the `Middleware` trait. We can now implement the `surf::middleware::Middleware` trait for our `Cache` struct.
Part Three: Implementing the surf::middleware::Middleware trait
We have our `Cache` struct that wraps our `HttpCache` struct, but we need to implement the `surf::middleware::Middleware` trait for it. This is required to use our `Cache` struct as middleware with `surf`. This part may differ depending on the HTTP client you are supporting.
```rust
#[surf::utils::async_trait]
impl<T: CacheManager> surf::middleware::Middleware for Cache<T> {
    async fn handle(
        &self,
        req: Request,
        client: Client,
        next: Next<'_>,
    ) -> std::result::Result<surf::Response, http_types::Error> {
        let middleware = SurfMiddleware { req, client, next };
        let res = self.0.run(middleware).await.map_err(to_http_types_error)?;
        let mut converted = Response::new(StatusCode::Ok);
        for header in &res.headers {
            let val = HeaderValue::from_bytes(header.1.as_bytes().to_vec())?;
            converted.insert_header(header.0.as_str(), val);
        }
        converted.set_status(res.status.try_into()?);
        converted.set_version(Some(res.version.try_into()?));
        converted.set_body(res.body);
        Ok(surf::Response::from(converted))
    }
}
```
First we create a `SurfMiddleware` struct with the provided `req`, `client`, and `next` arguments. Then we call the `run` method on our `HttpCache` struct with our `SurfMiddleware` struct as the argument. This performs the request and returns the `HttpResponse`. We then convert the `HttpResponse` to a `surf::Response` and return it.
Client Implementations
The following client implementations are provided by this crate:
reqwest
The `http-cache-reqwest` crate provides a `Middleware` implementation for the `reqwest` HTTP client.
surf
The `http-cache-surf` crate provides a `Middleware` implementation for the `surf` HTTP client.
ureq
The `http-cache-ureq` crate provides a caching wrapper for the `ureq` HTTP client. Since ureq is a synchronous HTTP client, this wrapper uses the smol async runtime to integrate with the async http-cache system.
tower
The `http-cache-tower` crate provides Tower Layer and Service implementations for caching HTTP requests and responses. It supports both regular and streaming cache operations for memory-efficient handling of large responses.
reqwest
The `http-cache-reqwest` crate provides a `Middleware` implementation for the `reqwest` HTTP client. It accomplishes this by utilizing `reqwest_middleware`.
Getting Started
cargo add http-cache-reqwest
Features
- `manager-cacache`: (default) Enables the `CACacheManager` backend cache manager.
- `manager-moka`: Enables the `MokaManager` backend cache manager.
- `streaming`: Enables streaming cache support for memory-efficient handling of large response bodies.
Usage
In the following example we will construct our client using the builder provided by `reqwest_middleware` with our cache struct from `http-cache-reqwest`. This example will use the default mode, default cacache manager, and default http cache options.
After constructing our client, we will make a request to the MDN Caching Docs which should result in an object stored in cache on disk.
```rust
use reqwest::Client;
use reqwest_middleware::{ClientBuilder, Result};
use http_cache_reqwest::{Cache, CacheMode, CACacheManager, HttpCache, HttpCacheOptions};

#[tokio::main]
async fn main() -> Result<()> {
    let client = ClientBuilder::new(Client::new())
        .with(Cache(HttpCache {
            mode: CacheMode::Default,
            manager: CACacheManager::default(),
            options: HttpCacheOptions::default(),
        }))
        .build();
    client
        .get("https://developer.mozilla.org/en-US/docs/Web/HTTP/Caching")
        .send()
        .await?;
    Ok(())
}
```
Streaming Cache Support
For memory-efficient caching of large response bodies, you can use the streaming cache feature. This is particularly useful for handling large files, media content, or API responses without loading the entire response into memory.
To enable streaming cache support, add the `streaming` feature to your Cargo.toml:
```toml
[dependencies]
http-cache-reqwest = { version = "1.0", features = ["streaming"] }
```
Basic Streaming Example
```rust
use http_cache::StreamingManager;
use http_cache_reqwest::StreamingCache;
use reqwest::Client;
use reqwest_middleware::ClientBuilder;
use std::path::PathBuf;
use futures_util::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    // Create streaming cache manager (takes the cache root path,
    // matching the constructor shown in the cache manager guide)
    let cache_manager = StreamingManager::new(PathBuf::from("./cache"));
    let streaming_cache = StreamingCache::new(cache_manager);

    // Build client with streaming cache
    let client = ClientBuilder::new(Client::new())
        .with(streaming_cache)
        .build();

    // Make request to large content
    let response = client
        .get("https://example.com/large-file.zip")
        .send()
        .await?;

    // Stream the response body
    let mut stream = response.bytes_stream();
    let mut total_bytes = 0;
    while let Some(chunk) = stream.next().await {
        let chunk = chunk?;
        total_bytes += chunk.len();
        // Process chunk without loading entire response into memory
    }
    println!("Downloaded {total_bytes} bytes");
    Ok(())
}
```
Key Benefits of Streaming Cache
- Memory Efficiency: Large responses are streamed directly to/from disk cache without buffering in memory
- Performance: Cached responses can be streamed immediately without waiting for complete download
- Scalability: Handle responses of any size without memory constraints
Non-Cloneable Request Handling
The reqwest middleware gracefully handles requests with non-cloneable bodies (such as multipart forms, streaming uploads, and custom body types). When a request cannot be cloned for caching operations, the middleware automatically:
- Bypasses the cache gracefully: The request proceeds normally without caching
- Performs cache maintenance: Still handles cache deletion and busting operations where possible
- Avoids errors: No "Request object is not cloneable" errors are thrown
This ensures that your application continues to work seamlessly even when using complex request body types.
Example with Multipart Forms
```rust
use reqwest::Client;
use reqwest_middleware::ClientBuilder;
use http_cache_reqwest::{Cache, CacheMode, CACacheManager, HttpCache, HttpCacheOptions};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let client = ClientBuilder::new(Client::new())
        .with(Cache(HttpCache {
            mode: CacheMode::Default,
            manager: CACacheManager::default(),
            options: HttpCacheOptions::default(),
        }))
        .build();

    // Multipart forms are handled gracefully - no caching errors
    let form = reqwest::multipart::Form::new()
        .text("field1", "value1")
        .file("upload", "/path/to/file.txt")
        .await?;

    let response = client
        .post("https://httpbin.org/post")
        .multipart(form)
        .send()
        .await?;
    println!("Status: {}", response.status());
    Ok(())
}
```
Example with Streaming Bodies
```rust
use reqwest::Client;
use reqwest_middleware::ClientBuilder;
use http_cache_reqwest::{Cache, CacheMode, CACacheManager, HttpCache, HttpCacheOptions};
use futures_util::{stream, StreamExt}; // StreamExt is needed for .map on the stream
use bytes::Bytes;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let client = ClientBuilder::new(Client::new())
        .with(Cache(HttpCache {
            mode: CacheMode::Default,
            manager: CACacheManager::default(),
            options: HttpCacheOptions::default(),
        }))
        .build();

    // Create a streaming body
    let stream_data = vec!["chunk1", "chunk2", "chunk3"];
    let stream = stream::iter(stream_data)
        .map(|s| Ok::<_, reqwest::Error>(Bytes::from(s)));
    let body = reqwest::Body::wrap_stream(stream);

    // Streaming bodies are handled gracefully - no caching errors
    let response = client
        .put("https://httpbin.org/put")
        .body(body)
        .send()
        .await?;
    println!("Status: {}", response.status());
    Ok(())
}
```
surf
The `http-cache-surf` crate provides a `Middleware` implementation for the `surf` HTTP client.
Getting Started
cargo add http-cache-surf
Features
- `manager-cacache`: (default) Enables the `CACacheManager` backend cache manager.
- `manager-moka`: Enables the `MokaManager` backend cache manager.
Usage
In the following example we will construct our client with our cache struct from `http-cache-surf`. This example will use the default mode, default cacache manager, and default http cache options.
After constructing our client, we will make a request to the MDN Caching Docs which should result in an object stored in cache on disk.
```rust
use http_cache_surf::{Cache, CacheMode, CACacheManager, HttpCache, HttpCacheOptions};
use surf::Client;
use macro_rules_attribute::apply;
use smol_macros::main;

#[apply(main!)]
async fn main() -> surf::Result<()> {
    let client = Client::new().with(Cache(HttpCache {
        mode: CacheMode::Default,
        manager: CACacheManager::default(),
        options: HttpCacheOptions::default(),
    }));
    client
        .get("https://developer.mozilla.org/en-US/docs/Web/HTTP/Caching")
        .await?;
    Ok(())
}
```
ureq
The `http-cache-ureq` crate provides HTTP caching for the `ureq` HTTP client.
Since ureq is a synchronous HTTP client, this implementation uses the smol async runtime to integrate with the async http-cache system. The caching wrapper preserves ureq's synchronous interface while providing async caching capabilities internally.
Features
- `json`: Enables JSON request/response support via `send_json()` and `into_json()` methods (requires `serde_json`)
- `manager-cacache`: (default) Enables the cacache cache manager
- `manager-moka`: Enables the moka cache manager
Basic Usage
Add the dependency to your Cargo.toml:
```toml
[dependencies]
http-cache-ureq = "1.0.0-alpha.1"
```
Use the `CachedAgent` builder to create a cached HTTP client:
```rust
use http_cache_ureq::{CachedAgent, CACacheManager, CacheMode};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    smol::block_on(async {
        let agent = CachedAgent::builder()
            .cache_manager(CACacheManager::new("./cache".into(), true))
            .cache_mode(CacheMode::Default)
            .build()?;

        // This request will be cached according to response headers
        let response = agent.get("https://httpbin.org/cache/60").call().await?;
        println!("Status: {}", response.status());
        println!("Cached: {}", response.is_cached());
        println!("Response: {}", response.into_string()?);

        // Subsequent identical requests may be served from cache
        let cached_response = agent.get("https://httpbin.org/cache/60").call().await?;
        println!("Cached status: {}", cached_response.status());
        println!("Is cached: {}", cached_response.is_cached());
        println!("Cached response: {}", cached_response.into_string()?);

        Ok(())
    })
}
```
JSON Support
Enable the `json` feature to send and parse JSON data:
```toml
[dependencies]
http-cache-ureq = { version = "1.0.0-alpha.1", features = ["json"] }
```
```rust
use http_cache_ureq::{CachedAgent, CACacheManager, CacheMode};
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    smol::block_on(async {
        let agent = CachedAgent::builder()
            .cache_manager(CACacheManager::new("./cache".into(), true))
            .cache_mode(CacheMode::Default)
            .build()?;

        // Send JSON data
        let response = agent
            .post("https://httpbin.org/post")
            .send_json(json!({"key": "value"}))
            .await?;

        // Parse JSON response
        let json: serde_json::Value = response.into_json()?;
        println!("Response: {}", json);
        Ok(())
    })
}
```
Cache Modes
Control caching behavior with different modes:
```rust
use http_cache_ureq::{CachedAgent, CACacheManager, CacheMode};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    smol::block_on(async {
        let agent = CachedAgent::builder()
            .cache_manager(CACacheManager::new("./cache".into(), true))
            .cache_mode(CacheMode::ForceCache) // Cache everything, ignore headers
            .build()?;

        // This will be cached even if headers say not to cache
        let response = agent.get("https://httpbin.org/uuid").call().await?;
        println!("Response: {}", response.into_string()?);
        Ok(())
    })
}
```
Custom ureq Configuration
Preserve your ureq agent configuration while adding caching:
use http_cache_ureq::{CachedAgent, CACacheManager, CacheMode};
use std::time::Duration;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    smol::block_on(async {
        // Create a custom ureq configuration
        let config = ureq::config::Config::builder()
            .timeout_global(Some(Duration::from_secs(30)))
            .user_agent("MyApp/1.0")
            .build();

        let agent = CachedAgent::builder()
            .agent_config(config)
            .cache_manager(CACacheManager::new("./cache".into(), true))
            .cache_mode(CacheMode::Default)
            .build()?;

        let response = agent.get("https://httpbin.org/cache/60").call().await?;
        println!("Response: {}", response.into_string()?);

        Ok(())
    })
}
In-Memory Caching
Use the Moka in-memory cache:
[dependencies]
http-cache-ureq = { version = "1.0.0-alpha.1", features = ["manager-moka"] }
use http_cache_ureq::{CachedAgent, MokaManager, MokaCache, CacheMode};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    smol::block_on(async {
        let agent = CachedAgent::builder()
            .cache_manager(MokaManager::new(MokaCache::new(1000))) // Max 1000 entries
            .cache_mode(CacheMode::Default)
            .build()?;

        let response = agent.get("https://httpbin.org/cache/60").call().await?;
        println!("Response: {}", response.into_string()?);

        Ok(())
    })
}
Maximum TTL Control
Control cache expiration times, particularly useful with IgnoreRules mode:
use http_cache_ureq::{CachedAgent, CACacheManager, CacheMode, HttpCacheOptions};
use std::time::Duration;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    smol::block_on(async {
        let agent = CachedAgent::builder()
            .cache_manager(CACacheManager::new("./cache".into(), true))
            .cache_mode(CacheMode::IgnoreRules) // Ignore server cache headers
            .cache_options(HttpCacheOptions {
                max_ttl: Some(Duration::from_secs(300)), // Limit cache entries to 5 minutes
                ..Default::default()
            })
            .build()?;

        // Cached for at most 5 minutes, even if the server allows longer
        let response = agent.get("https://httpbin.org/cache/3600").call().await?;
        println!("Response: {}", response.into_string()?);

        Ok(())
    })
}
Implementation Notes
- The wrapper preserves ureq's synchronous interface while using async caching internally
- The http_status_as_error setting is automatically disabled to ensure proper cache operation
- All HTTP methods are supported (GET, POST, PUT, DELETE, HEAD, etc.)
- Cache invalidation occurs for non-GET/HEAD requests to the same resource (see the sketch after this list)
- Only GET and HEAD requests are cached by default
- max_ttl provides expiration control when using CacheMode::IgnoreRules
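As referenced in the list above, a write to a resource evicts its cached entry. The following is a minimal sketch of that flow, not a definitive test: the URL is a placeholder, and the json feature is assumed to be enabled for send_json.

use http_cache_ureq::{CachedAgent, CACacheManager, CacheMode};
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    smol::block_on(async {
        let agent = CachedAgent::builder()
            .cache_manager(CACacheManager::new("./cache".into(), true))
            .cache_mode(CacheMode::Default)
            .build()?;

        // First GET goes to the network and (if cacheable) stores an entry
        agent.get("https://example.com/resource").call().await?;

        // A POST to the same resource invalidates the cached entry
        agent
            .post("https://example.com/resource")
            .send_json(json!({"update": true}))
            .await?;

        // This GET should go back to the network rather than the cache
        let response = agent.get("https://example.com/resource").call().await?;
        println!("Served from cache: {}", response.is_cached());

        Ok(())
    })
}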
tower
The http-cache-tower crate provides Tower Layer and Service implementations that add HTTP caching capabilities to your HTTP clients and services. It supports both regular and fully streaming cache operations for memory-efficient handling of large responses.
Getting Started
cargo add http-cache-tower
Features
- manager-cacache: (default) Enables the CACacheManager backend cache manager.
- manager-moka: Enables the MokaManager backend cache manager.
- streaming: Enables streaming cache support for memory-efficient handling of large response bodies.
Basic Usage
Here's a basic example using the regular HTTP cache layer:
use http_cache_tower::HttpCacheLayer;
use http_cache::CACacheManager;
use tower::{ServiceBuilder, ServiceExt};
use http::{Request, Response};
use http_body_util::Full;
use bytes::Bytes;
use std::path::PathBuf;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a cache manager
    let cache_manager = CACacheManager::new(PathBuf::from("./cache"), false);

    // Create the cache layer
    let cache_layer = HttpCacheLayer::new(cache_manager);

    // Build your service stack
    let service = ServiceBuilder::new()
        .layer(cache_layer)
        .service_fn(|_req: Request<Full<Bytes>>| async {
            Ok::<_, std::convert::Infallible>(Response::new(Full::new(Bytes::from(
                "Hello, world!",
            ))))
        });

    // Use the service
    let request = Request::builder()
        .uri("https://httpbin.org/cache/300")
        .body(Full::new(Bytes::new()))?;

    let response = service.oneshot(request).await?;
    println!("Status: {}", response.status());

    Ok(())
}
Streaming Usage
For large responses or when memory efficiency is important, use the streaming cache layer with the streaming feature:
[dependencies]
http-cache-tower = { version = "1.0", features = ["streaming"] }
use http_cache_tower::HttpCacheStreamingLayer;
use http_cache::StreamingManager;
use tower::{ServiceBuilder, ServiceExt};
use http::{Request, Response};
use http_body_util::Full;
use bytes::Bytes;
use std::path::PathBuf;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a streaming cache manager
    let streaming_manager = StreamingManager::new(PathBuf::from("./cache"));

    // Create the streaming cache layer
    let cache_layer = HttpCacheStreamingLayer::new(streaming_manager);

    // Build your service stack
    let service = ServiceBuilder::new()
        .layer(cache_layer)
        .service_fn(|_req: Request<Full<Bytes>>| async {
            Ok::<_, std::convert::Infallible>(Response::new(Full::new(Bytes::from(
                "Large response data...",
            ))))
        });

    // Use the service - responses are streamed without buffering the entire body
    let request = Request::builder()
        .uri("https://example.com/large-file")
        .body(Full::new(Bytes::new()))?;

    let response = service.oneshot(request).await?;
    println!("Status: {}", response.status());

    Ok(())
}
Integration with Hyper Client
The tower layers can be easily integrated with Hyper clients:
use http_cache_tower::HttpCacheLayer;
use http_cache::CACacheManager;
use hyper_util::client::legacy::Client;
use hyper_util::rt::TokioExecutor;
use tower::{ServiceBuilder, ServiceExt};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let cache_manager = CACacheManager::default();
    let cache_layer = HttpCacheLayer::new(cache_manager);

    let client = Client::builder(TokioExecutor::new()).build_http();

    let cached_client = ServiceBuilder::new()
        .layer(cache_layer)
        .service(client);

    // Now use cached_client for HTTP requests (see the sketch below)
    Ok(())
}
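As the final comment above indicates, the cached stack is driven like any other Tower service. Here is a minimal sketch continuing inside main; it assumes Full<Bytes> request bodies (one body type the legacy hyper client accepts) and uses a placeholder URL:

use http::Request;
use http_body_util::Full;
use bytes::Bytes;

// Continuing inside main from the example above. `oneshot` comes from
// tower::ServiceExt, which is already imported.
let request = Request::builder()
    .uri("http://httpbin.org/cache/60")
    .body(Full::new(Bytes::new()))?;

let response = cached_client.oneshot(request).await?;
println!("Status: {}", response.status());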
Backend Cache Manager Implementations
The following backend cache manager implementations are available:
cacache
cacache is a high-performance, concurrent, content-addressable disk cache, optimized for async APIs. Provides traditional buffered caching.
moka
moka is a fast, concurrent cache library inspired by the Caffeine library for Java. Provides in-memory caching with traditional buffering.
quick_cache
quick_cache is a lightweight, high-performance concurrent cache optimized for low cache overhead. Provides traditional buffered caching operations.
streaming_cache
StreamingManager is a file-based streaming cache manager that does not buffer response bodies in memory. Suitable for handling large responses efficiently.
cacache
cacache is a high-performance, concurrent, content-addressable disk cache, optimized for async APIs. It provides traditional buffered caching of complete responses.
Getting Started
The cacache backend cache manager is provided by the http-cache crate and is enabled by default. The http-cache-reqwest, http-cache-surf, and http-cache-tower crates all re-export the relevant types, so there is no need to depend on http-cache directly unless you are implementing your own client integration; a minimal import sketch follows the install commands below.
reqwest
cargo add http-cache-reqwest
surf
cargo add http-cache-surf
tower
cargo add http-cache-tower
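Once a client crate is added, the manager type can be imported from it directly. A minimal sketch, shown for http-cache-reqwest on the assumption that the default manager-cacache feature is enabled; the surf and tower crates follow the same pattern:

// CACacheManager is re-exported by the client crate, so no direct
// dependency on http-cache is needed.
use http_cache_reqwest::CACacheManager;

let manager = CACacheManager::default();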
Working with the manager directly
First construct your manager instance. This example will use the default cache directory.
let manager = CACacheManager::default();
You can also specify the cache directory and whether cache entries should be removed fully from disk.
let manager = CACacheManager::new("./my-cache".into(), true);
You can attempt to retrieve a record from the cache using the get method. This method accepts a &str as the cache key and returns a Result<Option<(HttpResponse, CachePolicy)>, BoxError>.
let response = manager.get("my-cache-key").await?;
You can store a record in the cache using the put method. This method accepts a String as the cache key, an HttpResponse as the response, and a CachePolicy as the policy object. It returns a Result<HttpResponse, BoxError>. The example below constructs the response and policy manually; normally this would be handled by the middleware.
let url = Url::parse("http://example.com")?;
let response = HttpResponse {
    body: TEST_BODY.to_vec(),
    headers: Default::default(),
    status: 200,
    url: url.clone(),
    version: HttpVersion::Http11,
};
let req = http::Request::get("http://example.com").body(())?;
let res = http::Response::builder()
    .status(200)
    .body(TEST_BODY.to_vec())?;
let policy = CachePolicy::new(&req, &res);
let response = manager.put("my-cache-key".into(), response, policy).await?;
You can remove a record from the cache using the delete method. This method accepts a &str as the cache key and returns a Result<(), BoxError>.
manager.delete("my-cache-key").await?;
You can also clear the entire cache using the clear method. This method accepts no arguments and returns a Result<(), BoxError>.
manager.clear().await?;
moka
moka is a fast, concurrent cache library inspired by the Caffeine library for Java. The moka manager provides traditional buffered caching operations for fast in-memory access.
Getting Started
The moka backend cache manager is provided by the http-cache crate but is not enabled by default. The http-cache-reqwest, http-cache-surf, and http-cache-tower crates all re-export the relevant types, so there is no need to depend on http-cache directly unless you are implementing your own client integration.
reqwest
cargo add http-cache-reqwest --no-default-features -F manager-moka
surf
cargo add http-cache-surf --no-default-features -F manager-moka
tower
cargo add http-cache-tower --no-default-features -F manager-moka
Working with the manager directly
First construct your manager instance. This example will use the default cache configuration (a maximum capacity of 42 entries).
let manager = Arc::new(MokaManager::default());
You can also specify other configuration options. This uses the new methods on both MokaManager and moka::future::Cache to construct a cache with a maximum capacity of 100 items.
let manager = Arc::new(MokaManager::new(moka::future::Cache::new(100)));
You can attempt to retrieve a record from the cache using the get method. This method accepts a &str as the cache key and returns a Result<Option<(HttpResponse, CachePolicy)>, BoxError>.
let response = manager.get("my-cache-key").await?;
You can store a record in the cache using the put method. This method accepts a String as the cache key, an HttpResponse as the response, and a CachePolicy as the policy object. It returns a Result<HttpResponse, BoxError>. The example below constructs the response and policy manually; normally this would be handled by the middleware.
let url = Url::parse("http://example.com")?;
let response = HttpResponse {
    body: TEST_BODY.to_vec(),
    headers: Default::default(),
    status: 200,
    url: url.clone(),
    version: HttpVersion::Http11,
};
let req = http::Request::get("http://example.com").body(())?;
let res = http::Response::builder()
    .status(200)
    .body(TEST_BODY.to_vec())?;
let policy = CachePolicy::new(&req, &res);
let response = manager.put("my-cache-key".into(), response, policy).await?;
You can remove a record from the cache using the delete method. This method accepts a &str as the cache key and returns a Result<(), BoxError>.
manager.delete("my-cache-key").await?;
You can also clear the entire cache using the clear method. This method accepts no arguments and returns a Result<(), BoxError>.
manager.clear().await?;
quick_cache
quick_cache is a lightweight, high-performance concurrent cache optimized for low cache overhead. The http-cache-quickcache implementation provides traditional buffered caching capabilities.
Getting Started
The quick_cache backend cache manager is provided by the http-cache-quickcache crate.
cargo add http-cache-quickcache
Basic Usage with Tower
The quickcache manager works well with Tower services:
use tower::Service;
use http::{Request, Response, StatusCode};
use http_body_util::Full;
use bytes::Bytes;
use http_cache_quickcache::QuickManager;

// Example Tower service that uses QuickManager for caching
#[derive(Clone)]
struct CachingService {
    cache_manager: QuickManager,
}

impl Service<Request<Full<Bytes>>> for CachingService {
    type Response = Response<Full<Bytes>>;
    type Error = Box<dyn std::error::Error + Send + Sync>;
    type Future = std::pin::Pin<
        Box<dyn std::future::Future<Output = Result<Self::Response, Self::Error>> + Send>,
    >;

    fn poll_ready(
        &mut self,
        _cx: &mut std::task::Context<'_>,
    ) -> std::task::Poll<Result<(), Self::Error>> {
        std::task::Poll::Ready(Ok(()))
    }

    fn call(&mut self, _req: Request<Full<Bytes>>) -> Self::Future {
        let _manager = self.cache_manager.clone();
        Box::pin(async move {
            // Cache lookup/store logic using the manager would go here
            let response = Response::builder()
                .status(StatusCode::OK)
                .body(Full::new(Bytes::from("Hello from cached service!")))?;
            Ok(response)
        })
    }
}
Working with the manager directly
First construct your manager instance. This example will use the default cache configuration.
let manager = Arc::new(QuickManager::default());
You can also specify other configuration options. This uses the new methods on both QuickManager and quick_cache::sync::Cache to construct a cache with a maximum capacity of 100 items.
let manager = Arc::new(QuickManager::new(quick_cache::sync::Cache::new(100)));
Traditional Cache Operations
You can attempt to retrieve a record from the cache using the get method. This method accepts a &str as the cache key and returns a Result<Option<(HttpResponse, CachePolicy)>, BoxError>.
let response = manager.get("my-cache-key").await?;
You can store a record in the cache using the put method. This method accepts a String as the cache key, an HttpResponse as the response, and a CachePolicy as the policy object. It returns a Result<HttpResponse, BoxError>. The example below constructs the response and policy manually; normally this would be handled by the middleware.
let url = Url::parse("http://example.com")?;
let response = HttpResponse {
    body: TEST_BODY.to_vec(),
    headers: Default::default(),
    status: 200,
    url: url.clone(),
    version: HttpVersion::Http11,
};
let req = http::Request::get("http://example.com").body(())?;
let res = http::Response::builder()
    .status(200)
    .body(TEST_BODY.to_vec())?;
let policy = CachePolicy::new(&req, &res);
let response = manager.put("my-cache-key".into(), response, policy).await?;
You can remove a record from the cache using the delete method. This method accepts a &str as the cache key and returns a Result<(), BoxError>.
manager.delete("my-cache-key").await?;
StreamingManager (Streaming Cache)
StreamingManager is a file-based streaming cache manager that does not buffer response bodies in memory. This implementation stores response metadata and body content separately, enabling memory-efficient handling of large responses.
Getting Started
The StreamingManager is built into the core http-cache crate and is available when the streaming feature is enabled.
[dependencies]
http-cache = { version = "1.0", features = ["streaming", "streaming-tokio"] }
Or for smol runtime:
[dependencies]
http-cache = { version = "1.0", features = ["streaming", "streaming-smol"] }
Basic Usage
use http_cache::{StreamingManager, HttpStreamingCache};
use std::path::PathBuf;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a file-based streaming cache manager
    let cache_dir = PathBuf::from("./streaming-cache");
    let manager = StreamingManager::new(cache_dir);

    // Use with the streaming cache
    let _cache = HttpStreamingCache::new(manager);

    Ok(())
}
Usage with Tower
The streaming cache manager works with Tower's HttpCacheStreamingLayer:
use http_cache::{StreamingManager, HttpCacheStreamingLayer};
use tower::{Layer, ServiceExt};
use http::{Request, Response, StatusCode};
use http_body_util::Full;
use bytes::Bytes;
use std::path::PathBuf;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create the streaming cache manager
    let cache_dir = PathBuf::from("./cache");
    let manager = StreamingManager::new(cache_dir);

    // Create the streaming cache layer
    let cache_layer = HttpCacheStreamingLayer::new(manager);

    // Your base service
    let service = tower::service_fn(|_req: Request<Full<Bytes>>| async {
        Ok::<_, std::convert::Infallible>(
            Response::builder()
                .status(StatusCode::OK)
                .header("cache-control", "max-age=3600")
                .body(Full::new(Bytes::from("Large response data...")))
                .expect("valid response"),
        )
    });

    // Wrap with caching
    let cached_service = cache_layer.layer(service);

    // Make requests
    let request = Request::builder()
        .uri("https://example.com/large-file")
        .body(Full::new(Bytes::new()))?;

    let response = cached_service.oneshot(request).await?;
    println!("Response status: {}", response.status());

    Ok(())
}
Working with the manager directly
Creating a manager
use http_cache::StreamingManager;
use std::path::PathBuf;

// Create with a custom cache directory
let cache_dir = PathBuf::from("./my-streaming-cache");
let manager = StreamingManager::new(cache_dir);
Streaming Cache Operations
Caching a streaming response
use http_cache::StreamingManager;
use http::{Request, Response, StatusCode};
use http_body_util::Full;
use bytes::Bytes;
use http_cache_semantics::CachePolicy;
use std::path::PathBuf;
use url::Url;

let manager = StreamingManager::new(PathBuf::from("./cache"));

// Create a large response to cache
let large_data = vec![b'X'; 10_000_000]; // 10MB response
let response = Response::builder()
    .status(StatusCode::OK)
    .header("cache-control", "max-age=3600, public")
    .header("content-type", "application/octet-stream")
    .body(Full::new(Bytes::from(large_data)))?;

// Create the cache policy
let request = Request::builder()
    .method("GET")
    .uri("https://example.com/large-file")
    .body(())?;
let policy = CachePolicy::new(
    &request,
    &Response::builder()
        .status(200)
        .header("cache-control", "max-age=3600, public")
        .body(vec![])?,
);

// Cache the response (content is streamed to disk, metadata stored separately)
let url = Url::parse("https://example.com/large-file")?;
let _cached_response = manager.put(
    "GET:https://example.com/large-file".to_string(),
    response,
    policy,
    url,
).await?;

println!("Cached the response without loading it into memory!");
Retrieving a streaming response
use http_body_util::BodyExt;

// Retrieve from the cache - returns a streaming body
let cached = manager.get("GET:https://example.com/large-file").await?;

if let Some((response, _policy)) = cached {
    println!("Cache hit! Status: {}", response.status());

    // The response body streams directly from disk
    let body = response.into_body();

    // Process the streaming body without loading it all into memory
    let mut body_stream = std::pin::pin!(body);
    while let Some(frame_result) = body_stream.frame().await {
        let frame = frame_result?;
        if let Some(chunk) = frame.data_ref() {
            // Handle each chunk without accumulating it in memory
            println!("Processing chunk of {} bytes", chunk.len());
        }
    }
} else {
    println!("Cache miss");
}
Deleting cached entries
// Remove the entry from the cache (deletes both metadata and content files)
manager.delete("GET:https://example.com/large-file").await?;
Storage Structure
The StreamingManager organizes cache files as follows:
cache-directory/
├── cache-v2/
│ ├── metadata/
│ │ ├── 1a2b3c4d....json # Response metadata (headers, status, policy)
│ │ └── 5e6f7g8h....json
│ └── content/
│ ├── blake3_hash1 # Raw response body content
│ └── blake3_hash2
- Metadata files: JSON files containing response status, headers, cache policy, and content digest
- Content files: Raw binary content files identified by Blake3 hash for deduplication
- Content-addressable: Identical content is stored only once regardless of URL (see the sketch below)
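To make the content-addressable point concrete, here is a minimal sketch that reproduces the digest step with the blake3 crate directly; using the crate this way is an illustration, not an API of this library:

// Two identical bodies hash to the same Blake3 digest, so the content
// file under content/ is stored only once and shared by both entries.
let body_a = b"large response data".to_vec();
let body_b = b"large response data".to_vec();

let digest_a = blake3::hash(&body_a);
let digest_b = blake3::hash(&body_b);

assert_eq!(digest_a, digest_b);
println!("content file name: {}", digest_a.to_hex());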
Configuration
The StreamingManager supports basic configuration through StreamingCacheConfig:
use http_cache::{StreamingManager, StreamingCacheConfig};
use std::path::PathBuf;

// Create with the default configuration
let manager = StreamingManager::new(PathBuf::from("./cache"));

// Or create with a custom configuration
let config = StreamingCacheConfig {
    max_cache_size: Some(1024 * 1024 * 1024), // 1GB limit
    max_entries: Some(10000),                 // Maximum 10k cached entries
    streaming_buffer_size: 16384,             // 16KB streaming buffer
};
// config is cloned so it can be reused below (assumes StreamingCacheConfig: Clone)
let manager = StreamingManager::new_with_config(PathBuf::from("./cache"), config.clone());

// For existing cache directories, use this to rebuild reference counts
let manager = StreamingManager::new_with_existing_cache_and_config(
    PathBuf::from("./cache"),
    config,
).await?;
Configuration Options
- max_cache_size: Optional maximum cache size in bytes. When exceeded, least recently used entries are evicted.
- max_entries: Optional maximum number of cached entries. When exceeded, LRU eviction occurs.
- streaming_buffer_size: Buffer size in bytes for streaming operations (default: 8192).