Process telemetry where it originates. Store logs locally, extract metrics at the edge, and stream only the anomalies.
Eliminate 90% of your data ingestion costs without sacrificing visibility.
Get running in under 5 minutes
# Clone and start with Docker Compose
$ git clone https://github.com/sadhiappan/logfleet.git
$ cd logfleet && docker compose up -d
# Loki API: localhost:3100 | Vector metrics: localhost:9598
# Logs stored locally, metrics extracted at edge
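Once the containers are up, a quick sanity check against the two endpoints listed above:
# Loki answers "ready" once it has finished starting up
$ curl -s localhost:3100/ready
# Vector exposes its internal metrics in Prometheus text format
$ curl -s localhost:9598/metrics | head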
For years, the industry has preached a single mantra: ship all logs to the cloud. But for distributed systems, this model creates more problems than it solves.
Streaming gigabytes of raw logs from every location is financially unsustainable. Most of that data is noise that never gets queried.
When the network goes down, your central observability platform becomes useless precisely when you need it most.
Centralized architectures create centralized vulnerabilities. One cloud outage and visibility across your entire fleet goes dark.
LogFleet inverts the model.
Process at the edge. Store locally. Ship only what matters. Your logs stay where they're generated—ready for instant access when you need them.
Everything you need to observe distributed locations without breaking the bank on bandwidth.
Store 7-30 days of logs at each location. Ring buffer automatically rotates old data.
Convert logs to metrics at the edge. Ship summaries, not raw data. 100x bandwidth savings.
Full functionality during internet outages. Buffer and forward when connection returns.
Cloud users can stream full logs with one click. Open source users access logs via SSH or API.
Tailscale mesh VPN for secure remote access. Query any location without exposed ports (example below).
Collect via syslog, HTTP, files, or Kubernetes. Built on Vector for roughly 10x the throughput of traditional log shippers.
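As a quick sketch of that remote-access model: once an edge box has joined your tailnet, its local Loki API is reachable from anywhere on the mesh. The device name store-042 and the job label below are illustrative, not shipped defaults.
# On the edge node: join the tailnet
$ sudo tailscale up
# From your workstation on the same tailnet: query that site's logs in place
$ curl -G -s "http://store-042:3100/loki/api/v1/query_range" \
    --data-urlencode 'query={job="logfleet"} |= "error"' \
    --data-urlencode 'limit=20'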
A simple, proven architecture for edge-first observability.
Vector collects logs from all sources. Loki stores them locally with 7-30 day retention.
Transform logs into metrics before shipping. Send summaries, not raw data.
Metrics go to your existing dashboards. Full logs stream on-demand when needed.
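A minimal sketch of that pipeline in Vector's TOML. The component names, file path, label value, and metric name are illustrative, not LogFleet's shipped configuration:
$ cat > vector.toml <<'EOF'
# Collect: tail application log files
[sources.app_logs]
type    = "file"
include = ["/var/log/app/*.log"]

# Transform: count events at the edge instead of shipping them
[transforms.log_events]
type   = "log_to_metric"
inputs = ["app_logs"]

  [[transforms.log_events.metrics]]
  type  = "counter"
  field = "message"
  name  = "app_log_events_total"

# Store: keep full logs in the local Loki instance
[sinks.local_loki]
type           = "loki"
inputs         = ["app_logs"]
endpoint       = "http://localhost:3100"
labels         = { job = "app" }
encoding.codec = "json"

# Ship: expose only the summary metrics for upstream scraping
[sinks.edge_metrics]
type    = "prometheus_exporter"
inputs  = ["log_events"]
address = "0.0.0.0:9598"
EOF
$ vector validate vector.toml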
Everything you need is free. Self-host or use our managed cloud.
Enterprise add-ons available for AI analytics and security compliance.
Open Source: Free. Forever, no limits.
Enterprise: Custom pricing. AI & security add-ons.
Everything you need is open source—log streaming, metrics, fleet management, alerting.
Enterprise adds AI-powered insights and security compliance for regulated industries.
Everything you need to know about LogFleet. Can't find what you're looking for? Reach out to our team.
OpenTelemetry is a fantastic collection standard, and we use it! But OTEL was designed for cloud-native environments with reliable connectivity.
LogFleet adds the missing edge layer: local log storage, metric extraction at the source, and disk buffering for links that drop.
Think of LogFleet as the "edge-first" complement to your OTEL pipeline, not a replacement.
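For instance, an existing OTel Collector or SDK can keep emitting OTLP while the edge agent receives it locally. A sketch assuming a recent Vector build with the opentelemetry source; addresses and component names are illustrative:
$ cat >> vector.toml <<'EOF'
# Receive OTLP logs from an existing OpenTelemetry Collector or SDK
[sources.otlp_in]
type         = "opentelemetry"
grpc.address = "0.0.0.0:4317"
http.address = "0.0.0.0:4318"

# Route the received logs into the same local Loki store
[sinks.otlp_to_loki]
type           = "loki"
inputs         = ["otlp_in.logs"]
endpoint       = "http://localhost:3100"
labels         = { job = "otel" }
encoding.codec = "json"
EOF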
This is exactly what LogFleet was built for. During a network outage, logs keep landing in local storage, metrics are buffered on disk, and everything is forwarded automatically once the connection returns.
Your edge locations become autonomous observability nodes, not just dumb collectors.
No vendor lock-in, by design: everything runs on hardware you control, and your data never has to leave the site.
The cloud tier adds convenience (one-click streaming, fleet UI), but the core agent is identical and fully open source.
LogFleet is designed for a minimal footprint.
For high-volume sites (>100,000 events/sec), we recommend 2+ cores and SSD storage.
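To see what the stack actually uses on a given box, snapshot the containers from the quick-start Compose deployment:
# One-off snapshot of CPU and memory for the LogFleet containers
$ docker stats --no-stream $(docker compose ps -q)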
Built on Vector, LogFleet supports virtually any log source:
Protocol-based: syslog (TCP or UDP), HTTP, and raw sockets.
File-based: tail any log file, including Docker container log files.
Application-specific: Kubernetes pod logs, journald, and the other application sources Vector ships with.
Missing a source? Vector has 50+ built-in sources, and custom VRL lets you reshape whatever you collect.
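For example, adding a TCP syslog listener is a few lines of standard Vector TOML; the config path here is an assumption, not necessarily where the shipped agent reads its config:
$ cat >> /etc/vector/vector.toml <<'EOF'
# Accept syslog from network gear and appliances on this site
[sources.edge_syslog]
type    = "syslog"
mode    = "tcp"
address = "0.0.0.0:5514"
EOF
# Then add "edge_syslog" to the inputs of whichever sink should store it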
Security is layered throughout:
Edge Security: no inbound ports are exposed; remote access goes over the Tailscale mesh or an SSH tunnel (example below).
Cloud Security: the optional cloud tier receives only what you explicitly forward, metric summaries by default and full logs on demand.
Data Security: raw logs stay on hardware you control unless you choose to stream them.
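As a concrete example of the no-exposed-ports model for self-hosters: tunnel a site's local Loki port over SSH and query it as if it were local (the hostname is illustrative):
# Forward the edge node's Loki port to your workstation
$ ssh -N -L 3100:localhost:3100 admin@store-042.example.net
# In another terminal: list the labels Loki has indexed at that site
$ curl -G -s "http://localhost:3100/loki/api/v1/labels"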
Open Source (Self-hosted): free forever, with no limits.
Cloud Free: no credit card required; upgrade when you need more nodes or features.
Simple, predictable pricing based on active nodes.
What's included per node: local log storage, edge metric extraction, remote access, fleet management, and alerting.
No per-GB egress fees. No query-based pricing. No surprises.
Absolutely. LogFleet is designed as a complement, not a replacement:
Metrics export to: any Prometheus-compatible backend, via scrape or remote write, so your current dashboards stay in place.
Alerting integrations: alerting is part of the open-source tier, so existing alert workflows keep working.
Log forwarding (when needed): stream full logs on demand to the cloud tier, or forward them to a central Loki you already run.
Your existing dashboards keep working. LogFleet feeds them edge-extracted metrics instead of requiring full log ingestion.
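For example, if Prometheus already runs centrally, pulling an edge node's summary metrics is one extra scrape job (the hostname is illustrative; the port matches the quick-start default):
# Append under the scrape_configs: section of your existing prometheus.yml
$ cat >> /etc/prometheus/prometheus.yml <<'EOF'
  - job_name: "logfleet-edge"
    static_configs:
      - targets: ["store-042:9598"]
EOF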
Absolutely. LogFleet supports both edge K8s clusters and traditional deployments:
For edge K8s (K3s, MicroK8s): deploy the agent inside the cluster; a node-level DaemonSet is the usual pattern for collecting pod logs (sketch below).
For non-K8s edge: the Docker Compose setup from the quick start runs on any host with Docker.
The same agent works in both environments, just with different deployment methods.
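Since the agent is built on Vector, one way to run it on an edge K3s or MicroK8s cluster is Vector's upstream Helm chart. This is a sketch using Vector's chart, not a LogFleet-specific one; the release and namespace names are illustrative:
# Run the collector as a node-level agent (DaemonSet) in the cluster
$ helm repo add vector https://helm.vector.dev
$ helm repo update
$ helm install logfleet-agent vector/vector \
    --namespace logfleet --create-namespace \
    --set role=Agent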
Still have questions?
Self-host with full power, or try the managed cloud experience.
Prefer managed hosting? Get notified when LogFleet Cloud launches.