tower-server
The http-cache-tower-server crate provides Tower Layer and Service implementations for server-side HTTP response caching. Unlike client-side caching, this middleware caches the responses your own application generates, reducing database queries and computation and improving response times.
Key Differences from Client-Side Caching
Client-Side (http-cache-tower): Caches responses from external APIs you're calling
Server-Side (http-cache-tower-server): Caches responses your application generates
Critical: Server-side cache middleware must be placed AFTER routing in your middleware stack to preserve request extensions like path parameters (see Issue #121).
Getting Started
```sh
cargo add http-cache-tower-server
```
Features
- `manager-cacache` (default): Enables the `CACacheManager` backend cache manager.
- `manager-moka`: Enables the `MokaManager` backend cache manager.
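For example, to enable the in-memory Moka backend (and drop the default cacache backend if you don't need it):

```sh
cargo add http-cache-tower-server --no-default-features --features manager-moka
```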
Basic Usage with Axum
```rust
use axum::{routing::get, Router, extract::Path};
use http_cache_tower_server::ServerCacheLayer;
use http_cache::CACacheManager;
use std::path::PathBuf;

#[tokio::main]
async fn main() {
    // Create cache manager
    let cache_manager = CACacheManager::new(PathBuf::from("./cache"), false);

    // Create the server cache layer
    let cache_layer = ServerCacheLayer::new(cache_manager);

    // Build your Axum app
    let app = Router::new()
        .route("/users/:id", get(get_user))
        .route("/posts/:id", get(get_post))
        // IMPORTANT: Place cache layer AFTER routing
        .layer(cache_layer);

    // Run the server
    let listener = tokio::net::TcpListener::bind("127.0.0.1:3000")
        .await
        .unwrap();
    axum::serve(listener, app).await.unwrap();
}

async fn get_user(Path(id): Path<u32>) -> String {
    // Expensive database query or computation
    format!("User {}", id)
}

async fn get_post(Path(id): Path<u32>) -> String {
    format!("Post {}", id)
}
```
Cache Control with Response Headers
The middleware respects standard HTTP Cache-Control headers from your handlers:
```rust
use axum::{
    response::{IntoResponse, Response},
    http::header,
};

async fn cacheable_handler() -> Response {
    (
        [(header::CACHE_CONTROL, "max-age=300")], // Cache for 5 minutes
        "This response will be cached",
    )
        .into_response()
}

async fn no_cache_handler() -> Response {
    (
        [(header::CACHE_CONTROL, "no-store")], // Don't cache
        "This response will NOT be cached",
    )
        .into_response()
}

async fn private_handler() -> Response {
    (
        [(header::CACHE_CONTROL, "private")], // User-specific data
        "This response will NOT be cached (shared cache)",
    )
        .into_response()
}
```
RFC 7234 Compliance
This implementation acts as a shared cache per RFC 7234:
Automatically Rejects
- `no-store` directive
- `no-cache` directive (requires revalidation, which is not supported)
- `private` directive (shared caches cannot store private responses)
- Non-2xx status codes
Supports
- `max-age`: Cache lifetime in seconds
- `s-maxage`: Shared-cache-specific lifetime (takes precedence over `max-age`)
- `public`: Marks the response as cacheable
- `Expires`: Fallback header when `Cache-Control` is absent
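Because `s-maxage` applies to shared caches like this middleware, a handler can let browsers cache a response briefly while the server cache holds it longer. A minimal sketch in the same response style as the handlers above:

```rust
use axum::{
    http::header,
    response::{IntoResponse, Response},
};

async fn report_handler() -> Response {
    (
        // Browsers honor max-age=60; this shared cache uses s-maxage=600
        [(header::CACHE_CONTROL, "max-age=60, s-maxage=600")],
        "Expensive report",
    )
        .into_response()
}
```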
Cache Key Strategies
DefaultKeyer (Default)
Caches based on HTTP method and path:
```rust
use http_cache_tower_server::{ServerCacheLayer, DefaultKeyer};

let cache_layer = ServerCacheLayer::new(cache_manager);
// GET /users/123 and GET /users/456 are cached separately
```
QueryKeyer
Includes query parameters in the cache key:
```rust
use http_cache_tower_server::{ServerCacheLayer, QueryKeyer};

let cache_layer = ServerCacheLayer::with_keyer(cache_manager, QueryKeyer);
// GET /search?q=rust and GET /search?q=python are cached separately
```
CustomKeyer
For advanced use cases like content negotiation or user-specific caching:
```rust
use http_cache_tower_server::{ServerCacheLayer, CustomKeyer};

// Example: Include Accept-Language header in cache key
let keyer = CustomKeyer::new(|req: &http::Request<()>| {
    let lang = req
        .headers()
        .get("accept-language")
        .and_then(|v| v.to_str().ok())
        .unwrap_or("en");
    format!("{} {} lang:{}", req.method(), req.uri().path(), lang)
});

let cache_layer = ServerCacheLayer::with_keyer(cache_manager, keyer);
```
Configuration Options
```rust
use http_cache_tower_server::{ServerCacheLayer, ServerCacheOptions};
use std::time::Duration;

let options = ServerCacheOptions {
    // Default TTL when no Cache-Control header present
    default_ttl: Some(Duration::from_secs(60)),
    // Maximum TTL (even if response specifies longer)
    max_ttl: Some(Duration::from_secs(3600)),
    // Minimum TTL (even if response specifies shorter)
    min_ttl: Some(Duration::from_secs(10)),
    // Add X-Cache: HIT/MISS headers for debugging
    cache_status_headers: true,
    // Maximum body size to cache (bytes)
    max_body_size: 128 * 1024 * 1024, // 128 MB
    // Cache responses without Cache-Control header
    cache_by_default: false,
    ..Default::default()
};

let cache_layer = ServerCacheLayer::new(cache_manager)
    .with_options(options);
```
Security Warnings
Shared Cache Behavior
This is a shared cache - cached responses are served to ALL users. Improper configuration can leak user-specific data.
Do NOT Cache
- Authenticated endpoints (unless using an appropriate CustomKeyer)
- User-specific data (unless keyed by user/session ID)
- Responses with sensitive information
Safe Approaches
Option 1: Use Cache-Control: private
```rust
use axum::{
    http::header,
    response::{IntoResponse, Response},
};

async fn user_specific_handler() -> Response {
    (
        [(header::CACHE_CONTROL, "private")],
        "User-specific data - won't be cached",
    )
        .into_response()
}
```
Option 2: Include user ID in cache key
```rust
use http_cache_tower_server::CustomKeyer;

let keyer = CustomKeyer::new(|req: &http::Request<()>| {
    let user_id = req
        .headers()
        .get("x-user-id")
        .and_then(|v| v.to_str().ok())
        .unwrap_or("anonymous");
    format!("{} {} user:{}", req.method(), req.uri().path(), user_id)
});
```
Option 3: Don't cache at all
```rust
use axum::{
    http::header,
    response::{IntoResponse, Response},
};

async fn sensitive_handler() -> Response {
    (
        [(header::CACHE_CONTROL, "no-store")],
        "Sensitive data - never cached",
    )
        .into_response()
}
```
Content Negotiation
The middleware extracts Vary headers but does not automatically enforce them. For content negotiation, use a CustomKeyer:
```rust
use http_cache_tower_server::CustomKeyer;

// Example: Cache different responses based on Accept-Language
let keyer = CustomKeyer::new(|req: &http::Request<()>| {
    let lang = req
        .headers()
        .get("accept-language")
        .and_then(|v| v.to_str().ok())
        .and_then(|s| s.split(',').next())
        .unwrap_or("en");
    format!("{} {} lang:{}", req.method(), req.uri().path(), lang)
});
```
Cache Inspection
Responses include `X-Cache` headers when `cache_status_headers` is enabled:
- `X-Cache: HIT` - Response served from cache
- `X-Cache: MISS` - Response generated by the handler and cached (if cacheable)
- No header - Response not cacheable (or headers disabled)
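As a rough sketch of checking these headers in an integration test (assuming axum 0.7 with `tower::ServiceExt::oneshot`; the route, cache directory, and test name are illustrative):

```rust
use axum::{body::Body, http::{header, Request}, routing::get, Router};
use http_cache::CACacheManager;
use http_cache_tower_server::{ServerCacheLayer, ServerCacheOptions};
use std::{path::PathBuf, time::Duration};
use tower::ServiceExt; // for `oneshot`

#[tokio::test]
async fn reports_cache_status() {
    let manager = CACacheManager::new(PathBuf::from("./test-cache"), false);
    let options = ServerCacheOptions {
        cache_status_headers: true, // make sure X-Cache headers are emitted
        ..Default::default()
    };
    let app = Router::new()
        .route(
            "/greet",
            get(|| async { ([(header::CACHE_CONTROL, "max-age=300")], "hello") }),
        )
        .layer(ServerCacheLayer::new(manager).with_options(options));

    // First request misses the cache and stores the response.
    let first = app
        .clone()
        .oneshot(Request::get("/greet").body(Body::empty()).unwrap())
        .await
        .unwrap();
    assert_eq!(first.headers().get("x-cache").unwrap(), "MISS");

    // Cache writes are fire-and-forget, so give the write a moment to land.
    tokio::time::sleep(Duration::from_millis(100)).await;

    // An identical second request is served from the cache.
    let second = app
        .oneshot(Request::get("/greet").body(Body::empty()).unwrap())
        .await
        .unwrap();
    assert_eq!(second.headers().get("x-cache").unwrap(), "HIT");
}
```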
Complete Example
```rust
use axum::{
    routing::get,
    Router,
    extract::Path,
    response::{IntoResponse, Response},
    http::header,
};
use http_cache_tower_server::{ServerCacheLayer, ServerCacheOptions, QueryKeyer};
use http_cache::CACacheManager;
use std::time::Duration;
use std::path::PathBuf;

#[tokio::main]
async fn main() {
    // Configure cache manager
    let cache_manager = CACacheManager::new(PathBuf::from("./cache"), false);

    // Configure cache options
    let options = ServerCacheOptions {
        default_ttl: Some(Duration::from_secs(60)),
        max_ttl: Some(Duration::from_secs(3600)),
        cache_status_headers: true,
        ..Default::default()
    };

    // Create cache layer with query parameter support
    let cache_layer = ServerCacheLayer::with_keyer(cache_manager, QueryKeyer)
        .with_options(options);

    // Build app
    let app = Router::new()
        .route("/users/:id", get(get_user))
        .route("/search", get(search))
        .route("/admin/stats", get(admin_stats))
        .layer(cache_layer); // AFTER routing

    let listener = tokio::net::TcpListener::bind("127.0.0.1:3000")
        .await
        .unwrap();
    axum::serve(listener, app).await.unwrap();
}

// Cacheable for 5 minutes
async fn get_user(Path(id): Path<u32>) -> Response {
    (
        [(header::CACHE_CONTROL, "max-age=300")],
        format!("User {}", id),
    )
        .into_response()
}

// Cacheable with query parameters
async fn search(
    query: axum::extract::Query<std::collections::HashMap<String, String>>,
) -> Response {
    (
        [(header::CACHE_CONTROL, "max-age=60")],
        format!("Search results: {:?}", query),
    )
        .into_response()
}

// Never cached (admin data)
async fn admin_stats() -> Response {
    (
        [(header::CACHE_CONTROL, "no-store")],
        "Admin statistics - not cached",
    )
        .into_response()
}
```
Best Practices
- Place middleware after routing to preserve request extensions
- Set appropriate Cache-Control headers in your handlers
- Use the `private` directive for user-specific responses
- Monitor cache hit rates using `X-Cache` headers
- Set reasonable TTL limits to prevent stale data
- Use CustomKeyer for content negotiation or user-specific caching
- Don't cache authenticated endpoints without proper keying
Troubleshooting
Path parameters not working
Problem: Axum path extractors fail with cached responses
Solution: Ensure cache layer is placed AFTER routing:
```rust
// ❌ Wrong - cache layer before routing
let app = Router::new()
    .layer(cache_layer) // Too early!
    .route("/users/:id", get(handler));

// ✅ Correct - cache layer after routing
let app = Router::new()
    .route("/users/:id", get(handler))
    .layer(cache_layer); // After routing
```
Responses not being cached
Possible causes:
- Response has a `no-store`, `no-cache`, or `private` directive
- Response status code is not 2xx
- Response body exceeds `max_body_size`
- `cache_by_default` is false and no Cache-Control header is present
Solution: Add appropriate Cache-Control headers:
```rust
use axum::{
    http::header,
    response::{IntoResponse, Response},
};

async fn handler() -> Response {
    (
        [(header::CACHE_CONTROL, "max-age=300")],
        "Response body",
    )
        .into_response()
}
```
User data leaking between requests
Problem: Cached user-specific responses served to other users
Solution: Use CustomKeyer with user identifier:
```rust
use http_cache_tower_server::CustomKeyer;

let keyer = CustomKeyer::new(|req: &http::Request<()>| {
    let user = req
        .headers()
        .get("x-user-id")
        .and_then(|v| v.to_str().ok())
        .unwrap_or("anonymous");
    format!("{} {} user:{}", req.method(), req.uri().path(), user)
});
```
Or use Cache-Control: private to prevent caching entirely.
Performance Considerations
- Cache writes are fire-and-forget (non-blocking)
- Cache lookups are async but fast (especially with in-memory managers)
- Body buffering is required (responses are fully buffered before caching)
- Consider using moka manager for frequently accessed data
- Use cacache manager for larger datasets with disk persistence
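A minimal sketch of swapping in the in-memory Moka backend (requires the `manager-moka` feature; the `MokaManager::default()` constructor is an assumption, so check the http-cache docs for your version and size the cache explicitly for your workload):

```rust
use http_cache::MokaManager;
use http_cache_tower_server::ServerCacheLayer;

// In-memory Moka backend (constructor assumed; see http-cache docs for sizing).
let cache_manager = MokaManager::default();
let cache_layer = ServerCacheLayer::new(cache_manager);
```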
Comparison with Other Frameworks
| Feature | http-cache-tower-server | Django Cache | NGINX FastCGI |
|---|---|---|---|
| Middleware-based | ✅ | ✅ | ❌ |
| RFC 7234 compliant | ✅ | ⚠️ Partial | ⚠️ Partial |
| Pluggable backends | ✅ | ✅ | ❌ |
| Custom cache keys | ✅ | ✅ | ✅ |
| Type-safe | ✅ | ❌ | ❌ |
| Async-first | ✅ | ❌ | ✅ |