Automatic HTTP routing for Docker containers: a Traefik-based proxy that gives your containers clean domain names like `myapp.local` instead of dealing with `localhost:8080` port chaos. Simply add `VIRTUAL_HOST=myapp.local` to any container, or use native Traefik labels, and your applications become accessible over both HTTP and HTTPS automatically. No port management, no `/etc/hosts` editing, no hunting for the right port number. Only explicitly configured containers are exposed, keeping your development environment secure by default.
- **Automatic Container Discovery** - Zero-configuration HTTP routing for containers with `VIRTUAL_HOST` environment variables or Traefik labels
- **Built-in DNS Server** - Resolves custom domains (`.loc`, `.dev`, etc.) to localhost, eliminating manual `/etc/hosts` editing
- **Dynamic Network Management** - Automatically joins Docker networks containing manageable containers for seamless routing
- **Automatic HTTPS Support** - Provides both HTTP and HTTPS routes with auto-generated certificates and mkcert integration for trusted local certificates
- **Monitoring Ready** - Optional Prometheus metrics and Grafana dashboards for traffic monitoring and performance analysis
Note: We thank the codekitchen/dinghy-http-proxy project for the inspiration and for serving us well over the years. Spark HTTP Proxy includes a compatibility layer that supports the `VIRTUAL_HOST` and `VIRTUAL_PORT` environment variables from the original project, while providing enhanced functionality for broader use cases and improved maintainability.
```sh
# Install Spark HTTP Proxy
mkdir -p ${HOME}/.local/spark/http-proxy
git clone [email protected]:sparkfabrik/http-proxy.git ${HOME}/.local/spark/http-proxy/src
sudo ln -s ${HOME}/.local/spark/http-proxy/src/bin/spark-http-proxy /usr/local/bin/spark-http-proxy
sudo chmod +x /usr/local/bin/spark-http-proxy
spark-http-proxy install-completion

# Or alternatively, if you like to live on the edge:
bash <(curl -fsSL https://github.com/raw/sparkfabrik/http-proxy/main/bin/install.sh)
```
```sh
# Start the HTTP proxy
spark-http-proxy start

# Generate trusted SSL certificates for your domains
spark-http-proxy generate-mkcert "*.spark.loc"

# Test with any container
docker run -d -e VIRTUAL_HOST=test.spark.loc nginx

# Access your app with HTTPS
curl https://test.spark.loc
```
That's it! Your container is now accessible at `https://test.spark.loc` with a trusted certificate.
```sh
# View status and dashboard
spark-http-proxy status

# Start with monitoring (Prometheus + Grafana)
spark-http-proxy start-with-metrics

# Configure system DNS (eliminates need for manual /etc/hosts editing)
spark-http-proxy configure-dns
```
For more examples and advanced configurations, check the `examples/` directory.
Important: Only containers with explicit configuration are automatically managed by the proxy. Containers without `VIRTUAL_HOST` environment variables or `traefik.*` labels are ignored, to ensure security and prevent unintended exposure.
Add these environment variables to any container you want to be automatically routed:
```yaml
# docker-compose.yml
services:
  myapp:
    image: nginx:alpine
    environment:
      - VIRTUAL_HOST=myapp.local # Required: your custom domain
      - VIRTUAL_PORT=8080        # Optional: defaults to exposed port or 80
    expose:
      - "8080"
```
- Single domain: `VIRTUAL_HOST=myapp.local`
- Multiple domains: `VIRTUAL_HOST=app.local,api.local`
- Wildcards: `VIRTUAL_HOST=*.myapp.local`
- Regex patterns: `VIRTUAL_HOST=~^api\\..*\\.local$`
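As a sketch of the multi-domain case, a comma-separated `VIRTUAL_HOST` value can be split into one domain per routing rule (illustrative only; the proxy performs this expansion internally):

```bash
# Illustrative sketch only: how a comma-separated VIRTUAL_HOST value
# splits into individual domains (the proxy does this internally).
virtual_host="app.local,api.local"
IFS=',' read -ra domains <<< "$virtual_host"
for d in "${domains[@]}"; do
  echo "Host(\`$d\`)"
done
# Prints:
# Host(`app.local`)
# Host(`api.local`)
```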
The proxy uses opt-in container discovery (`exposedByDefault: false`). Only containers with explicit configuration are managed:

- Dinghy: Containers with a `VIRTUAL_HOST=domain.local` environment variable
- Traefik: Containers with labels starting with `traefik.*`

Unmanaged containers are ignored and never exposed.
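A minimal sketch of that opt-in predicate (a hypothetical helper, not the proxy's actual source):

```bash
# Hypothetical sketch of the opt-in rule: a container is managed only
# if it sets VIRTUAL_HOST or carries at least one traefik.* label.
is_managed() {
  local env_vars="$1" labels="$2"   # newline-separated lists
  echo "$env_vars" | grep -q '^VIRTUAL_HOST=' && return 0
  echo "$labels" | grep -q '^traefik\.' && return 0
  return 1
}

is_managed "VIRTUAL_HOST=app.loc" "" && echo "managed"   # managed
is_managed "PATH=/usr/bin" "" || echo "ignored"          # ignored
```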
The proxy automatically joins Docker networks that contain manageable containers, enabling seamless routing without manual network configuration. This process is handled by the `join-networks` service.

Detailed Network Joining Flow Documentation - complete technical documentation with flow diagrams explaining how automatic network discovery and joining works.
The HTTP proxy includes a built-in DNS server that automatically resolves configured domains to localhost, eliminating the need to manually edit `/etc/hosts` or configure system DNS.
The DNS server supports both Top-Level Domains (TLDs) and specific domains:
```yaml
# docker-compose.yml
services:
  dns:
    environment:
      # Configure which domains to handle (comma-separated); pick one:
      - HTTP_PROXY_DNS_TLDS=loc                 # Handle any *.loc domains (default)
      # - HTTP_PROXY_DNS_TLDS=loc,dev           # Handle any *.loc and *.dev domains
      # - HTTP_PROXY_DNS_TLDS=spark.loc,api.dev # Handle only specific domains
      # Where to resolve domains (default: 127.0.0.1)
      - HTTP_PROXY_DNS_TARGET_IP=127.0.0.1
      # DNS server port (default: 19322)
      - HTTP_PROXY_DNS_PORT=19322
```
Configure TLDs to handle any subdomain automatically:

```
# Environment: HTTP_PROXY_DNS_TLDS=loc
✅ myapp.loc    → 127.0.0.1
✅ api.loc      → 127.0.0.1
✅ anything.loc → 127.0.0.1
❌ myapp.dev    → Not handled
```
Support multiple development TLDs:

```
# Environment: HTTP_PROXY_DNS_TLDS=loc,dev,docker
✅ myapp.loc      → 127.0.0.1
✅ api.dev        → 127.0.0.1
✅ service.docker → 127.0.0.1
```
Handle only specific domains for precise control:

```
# Environment: HTTP_PROXY_DNS_TLDS=spark.loc,api.dev
✅ spark.loc     → 127.0.0.1
✅ api.dev       → 127.0.0.1
❌ other.loc     → Not handled
❌ different.dev → Not handled
```
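The matching behavior above can be sketched as a small function (illustrative only, not the DNS server's actual implementation): an entry in `HTTP_PROXY_DNS_TLDS` matches the name itself or any subdomain of it.

```bash
# Illustrative sketch: decide whether the DNS server handles a name,
# given a comma-separated HTTP_PROXY_DNS_TLDS value.
dns_handles() {
  local tlds="$1" name="$2" entry
  IFS=',' read -ra entries <<< "$tlds"
  for entry in "${entries[@]}"; do
    [ "$name" = "$entry" ] && return 0            # exact match (spark.loc)
    case "$name" in *".$entry") return 0 ;; esac  # subdomain match (*.loc)
  done
  return 1
}

dns_handles "loc,dev" "myapp.loc" && echo "handled"       # handled
dns_handles "spark.loc" "other.loc" || echo "not handled" # not handled
```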
When certificates are generated or updated, restart the proxy to load the new certificates:

```sh
docker compose restart
```
While `VIRTUAL_HOST` environment variables provide simple automatic routing, you can also use Traefik labels for more advanced configuration. Both methods work together seamlessly.
```yaml
services:
  myapp:
    image: nginx:alpine
    labels:
      # Define the routing rule - which domain/path routes to this service
      - "traefik.http.routers.myapp.rule=Host(`myapp.docker`)"
      # Specify which entrypoint to use (http = port 80)
      - "traefik.http.routers.myapp.entrypoints=http"
      # Set the target port for load balancing
      - "traefik.http.services.myapp.loadbalancer.server.port=80"
```
Note: `traefik.enable=true` is not required, since auto-discovery is always enabled in this proxy.
| Label | Purpose | Example |
| --- | --- | --- |
| Router Rule | Defines which requests route to this service | ``traefik.http.routers.myapp.rule=Host(`myapp.docker`)`` |
| Entrypoints | Which proxy port to listen on | `traefik.http.routers.myapp.entrypoints=http` |
| Service Port | Target port on the container | `traefik.http.services.myapp.loadbalancer.server.port=8080` |
To effectively use Traefik labels, it helps to understand the key concepts:
An entrypoint is where Traefik listens for incoming traffic. Think of it as the "front door" of your proxy.
```yaml
# In our Traefik configuration:
entrypoints:
  http:              # ← This is just a custom name! You can call it anything
    address: ":80"   # Listen on port 80 for HTTP traffic
  https:             # ← Another custom name
    address: ":443"  # Listen on port 443 for HTTPS traffic (if configured)
  api:               # ← You could even call it "api" or "frontend"
    address: ":8080" # Listen on port 8080
```
Important: `http` is just a custom name that we chose. You could name your entrypoints anything: `http`, `https`, `frontend`, `api`, `public` - whatever makes sense to you!
When you specify `traefik.http.routers.myapp.entrypoints=http`, you're telling Traefik:

"Route requests that come through the entrypoint named 'http' (which happens to be port 80) to my application."
The entrypoint name must match between:

- the Traefik configuration (where you define `http: address: ":80"`)
- the container labels (where you reference `entrypoints=http`)
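To make the matching concrete, here is a minimal illustrative pair (names chosen for this example):

```yaml
# Static Traefik configuration: defines the entrypoint name
entrypoints:
  http:
    address: ":80"

# Container label: must reference exactly the same name
# - "traefik.http.routers.myapp.entrypoints=http"
```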
The load balancer determines how traffic gets distributed to your actual application containers.
```yaml
# This label creates a load balancer configuration:
- "traefik.http.services.myapp.loadbalancer.server.port=8080"
```
This tells Traefik:
"When routing to this service, send traffic to port 8080 on the container"
Here's how a request flows through Traefik:

```
1. [Browser]        → http://myapp.docker
         ↓
2. [Entrypoint :80] → the "http" entrypoint receives the request
         ↓
3. [Router]         → Checks rule: Host(`myapp.docker`) → Match!
         ↓
4. [Service]        → Routes to the configured service
         ↓
5. [Load Balancer]  → Forwards to container port 8080
         ↓
6. [Container]      → Your app receives the request
```
While we typically use simple port mapping, Traefik's load balancer supports much more:
```yaml
services:
  # Multiple container instances (automatic load balancing)
  web-app:
    image: nginx:alpine
    deploy:
      replicas: 3 # 3 instances of the same app
    labels:
      - "traefik.http.routers.webapp.rule=Host(`webapp.docker`)"
      - "traefik.http.routers.webapp.entrypoints=http"
      # Traefik automatically balances between all 3 instances!

  # Health check configuration
  api-service:
    image: myapi:latest
    labels:
      - "traefik.http.routers.api.rule=Host(`api.docker`)"
      - "traefik.http.routers.api.entrypoints=http"
      - "traefik.http.services.api.loadbalancer.server.port=3000"
      # Configure health checks
      - "traefik.http.services.api.loadbalancer.healthcheck.path=/health"
      - "traefik.http.services.api.loadbalancer.healthcheck.interval=30s"
```
This separation of concerns provides powerful flexibility:
- Entrypoints: Control where Traefik listens (ports, protocols)
- Routers: Control which requests go where (domains, paths, headers)
- Services: Control how traffic reaches your apps (ports, health checks, load balancing)
Example of advanced routing:
```yaml
services:
  # Same app, different routing based on subdomain
  app-v1:
    image: myapp:v1
    labels:
      - "traefik.http.routers.app-v1.rule=Host(`v1.myapp.docker`)"
      - "traefik.http.routers.app-v1.entrypoints=http"
      - "traefik.http.services.app-v1.loadbalancer.server.port=8080"

  app-v2:
    image: myapp:v2
    labels:
      - "traefik.http.routers.app-v2.rule=Host(`v2.myapp.docker`)"
      - "traefik.http.routers.app-v2.entrypoints=http"
      - "traefik.http.services.app-v2.loadbalancer.server.port=8080"

  # Route 90% traffic to v1, 10% to v2 (canary deployment)
  app-main:
    image: myapp:v1
    labels:
      - "traefik.http.routers.app-main.rule=Host(`myapp.docker`)"
      - "traefik.http.routers.app-main.entrypoints=http"
      - "traefik.http.services.app-main.loadbalancer.server.port=8080"
      # Weight-based routing (advanced feature)
      - "traefik.http.services.app-main.loadbalancer.server.weight=90"
```
The proxy automatically exposes both HTTP and HTTPS for all applications configured with `VIRTUAL_HOST`. Both protocols are available without any additional configuration.
When you set `VIRTUAL_HOST=myapp.local`, you automatically get:

- HTTP: `http://myapp.local` (port 80)
- HTTPS: `https://myapp.local` (port 443)
```yaml
services:
  myapp:
    image: nginx:alpine
    environment:
      - VIRTUAL_HOST=myapp.local # Creates both HTTP and HTTPS routes automatically
```
Traefik automatically generates self-signed certificates for HTTPS routes. For trusted certificates in development, you can use mkcert to generate wildcard certificates.
For browser-trusted certificates without warnings, generate wildcard certificates using mkcert (install with `brew install mkcert` on macOS):
```sh
# Install the local CA
mkcert -install

# Create the certificates directory
mkdir -p ~/.local/spark/http-proxy/certs

# Generate a wildcard certificate for .loc domains
mkcert -cert-file ~/.local/spark/http-proxy/certs/wildcard.loc.pem \
  -key-file ~/.local/spark/http-proxy/certs/wildcard.loc-key.pem \
  "*.loc"

# For complex multi-level domains, you can generate additional certificates:
# mkcert -cert-file ~/.local/spark/http-proxy/certs/sparkfabrik.loc.pem \
#   -key-file ~/.local/spark/http-proxy/certs/sparkfabrik.loc-key.pem \
#   "*.sparkfabrik.loc"
```
The certificates will be automatically detected and loaded when you start the proxy:

```sh
docker compose up -d
```
The Traefik container's entrypoint script scans `~/.local/spark/http-proxy/certs/` for certificate files and automatically generates the TLS configuration in `/traefik/dynamic/auto-tls.yml`. You don't need to manually edit any configuration files!
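As a rough sketch of that behavior (hypothetical code; the real entrypoint script is more thorough, e.g. it inspects certificate SANs), cert/key pairs found in the directory can be mapped to Traefik TLS entries like this:

```bash
# Hypothetical sketch: emit a minimal Traefik dynamic TLS config from
# cert/key pairs named <name>.pem / <name>-key.pem in a directory.
generate_tls_config() {
  local certs_dir="$1" cert key
  echo "tls:"
  echo "  certificates:"
  for cert in "$certs_dir"/*.pem; do
    case "$cert" in *-key.pem) continue ;; esac  # skip key files
    key="${cert%.pem}-key.pem"
    [ -f "$key" ] || continue                    # only complete pairs
    echo "    - certFile: $cert"
    echo "      keyFile: $key"
  done
}
```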
Now your `.loc` domains will use trusted certificates!

```
✅ https://myapp.loc   - Trusted
✅ https://api.loc     - Trusted
✅ https://project.loc - Trusted
```
Note: The `*.loc` certificate covers single-level subdomains. For multi-level domains like `app.project.sparkfabrik.loc`, generate additional certificates as shown in the commented example above.
Traefik automatically matches certificates to incoming HTTPS requests using SNI (Server Name Indication):
1. Certificate Detection: The entrypoint script scans `/traefik/certs` and extracts domain information from each certificate's Subject Alternative Names (SAN)
2. Automatic Matching: When a browser requests `https://myapp.loc`, Traefik:
   - Receives the domain name via SNI
   - Looks through the available certificates for one that matches `myapp.loc`
   - Finds the `*.loc` wildcard certificate and uses it
   - Serves the HTTPS response with the trusted certificate
3. Wildcard Coverage:
   - `*.loc` covers: `myapp.loc`, `api.loc`, `database.loc`
   - `*.loc` does NOT cover: `sub.myapp.loc`, `api.project.loc`
   - For multi-level domains, generate specific certificates like `*.project.loc`
4. Fallback: If no matching certificate is found, Traefik generates a self-signed certificate for that domain
You can see which domains each certificate covers in the container logs when it starts up.
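The single-level wildcard rule can be sketched as follows (illustrative, not the proxy's code): `*.loc` matches exactly one additional label.

```bash
# Illustrative sketch of single-level wildcard matching (as in TLS):
# "*.loc" covers myapp.loc but NOT sub.myapp.loc or "loc" itself.
wildcard_matches() {
  local pattern="$1" name="$2"
  local suffix="${pattern#\*.}"            # "*.loc" -> "loc"
  local prefix="${name%.$suffix}"          # "myapp.loc" -> "myapp"
  [ "$prefix" != "$name" ] || return 1     # name does not end in ".loc"
  [ -n "$prefix" ] || return 1
  case "$prefix" in *.*) return 1 ;; esac  # more than one extra label
  return 0
}

wildcard_matches "*.loc" "myapp.loc" && echo "covered"         # covered
wildcard_matches "*.loc" "sub.myapp.loc" || echo "not covered" # not covered
```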
If you prefer to use Traefik labels instead of `VIRTUAL_HOST`, you can achieve the same HTTP and HTTPS routes manually:
```yaml
services:
  myapp:
    image: nginx:alpine
    labels:
      # HTTP router
      - "traefik.http.routers.myapp.rule=Host(`myapp.local`)"
      - "traefik.http.routers.myapp.entrypoints=http"
      - "traefik.http.routers.myapp.service=myapp"
      # HTTPS router
      - "traefik.http.routers.myapp-tls.rule=Host(`myapp.local`)"
      - "traefik.http.routers.myapp-tls.entrypoints=https"
      - "traefik.http.routers.myapp-tls.tls=true"
      - "traefik.http.routers.myapp-tls.service=myapp"
      # Service configuration
      - "traefik.http.services.myapp.loadbalancer.server.port=80"
```
This manual approach gives you the same result as `VIRTUAL_HOST=myapp.local`, but with more control over the configuration.
This HTTP proxy provides compatibility with the original dinghy-http-proxy environment variables:
| Variable | Support | Description |
| --- | --- | --- |
| `VIRTUAL_HOST` | ✅ Full | Automatic HTTP and HTTPS routing |
| `VIRTUAL_PORT` | ✅ Full | Backend port configuration |
- Security: `exposedByDefault: false` ensures only containers with `VIRTUAL_HOST` or `traefik.*` labels are managed
- HTTPS: Unlike the original dinghy-http-proxy, HTTPS is automatically enabled for all `VIRTUAL_HOST` entries
- Multiple domains: Comma-separated domains in `VIRTUAL_HOST` work the same way
- Container selection: Unmanaged containers are completely ignored, preventing accidental exposure
To use the built-in DNS server, configure your system to use it for domain resolution:
```sh
# Configure systemd-resolved to use the http-proxy DNS for .loc domains
sudo mkdir -p /etc/systemd/resolved.conf.d
sudo tee /etc/systemd/resolved.conf.d/http-proxy.conf > /dev/null <<EOF
[Resolve]
DNS=172.17.0.1:19322
Domains=~loc
EOF

# Restart systemd-resolved to apply changes
sudo systemctl restart systemd-resolved

# Verify configuration
systemd-resolve --status
```
Note: systemd-resolved may also send queries for other domains to this DNS server, which shows up as `REFUSED` responses in the logs. This doesn't affect functionality: external domains resolve through fallback mechanisms. Solutions:

- Accept the current behavior (recommended): the `REFUSED` responses are correct and harmless
- See the systemd-resolved limitations documentation for details
```sh
# Configure specific domains via /etc/resolver (macOS, recommended)
sudo mkdir -p /etc/resolver
echo "nameserver 127.0.0.1" | sudo tee /etc/resolver/loc
echo "port 19322" | sudo tee -a /etc/resolver/loc
```
You can test DNS resolution manually without any system configuration:

```sh
# Test with dig
dig @127.0.0.1 -p 19322 myapp.loc

# Test with nslookup
nslookup -port=19322 myapp.loc 127.0.0.1

# Test with curl (requires curl built with c-ares DNS support)
curl --dns-servers 127.0.0.1:19322 http://myapp.loc
```
Monitor your HTTP proxy traffic with built-in Prometheus metrics and Grafana dashboards:
```sh
# Start with monitoring stack (Prometheus + Grafana)
spark-http-proxy start-with-metrics
```
Access the pre-configured Grafana dashboard at `http://localhost:3000` (admin/admin):
The dashboard provides insights into:
- Request rates and response times
- HTTP status codes distribution
- Active connections and bandwidth usage
- Container routing statistics
Monitor routing rules and service health at `http://localhost:8080`:
The Traefik dashboard shows:
- Active routes and services
- Real-time traffic flow
- Health check status
- Load balancer configuration
Both dashboards are automatically configured and ready to use with no additional setup required.