file.d

Overview


file.d is a blazing fast tool for building data pipelines: read, process, and output events. It was primarily developed to read from files, but it also supports numerous input/action/output plugins.

Although we use it in production, it hasn't reached v1.0.0 yet. Please test your pipelines carefully in dev/stage environments.

Contributing

file.d is an open-source project and contributions are very welcome! Please make sure to read our contributing guide before creating an issue or opening a PR!

Motivation

Well, we already have several similar tools: vector, filebeat, logstash, fluentd, fluent-bit, etc.

Performance tests show that the best of them achieve a throughput of roughly 100MB/s. Guys, it's 2023 now. HDDs and NICs can handle a few GB/s, and CPUs process dozens of GB/s. Are you sure 100MB/s is what we deserve? Are you sure it is fast?

Main features

  • Fast: more than 10x faster compared to similar tools
  • Predictable: it uses pooling, so memory consumption is limited
  • Reliable: doesn't lose data thanks to its commit mechanism
  • Container / cloud / Kubernetes native
  • Simple YAML configuration
  • Prometheus-friendly: transform your events into metrics at any pipeline stage
  • Vault-friendly: store sensitive info and retrieve it for any pipeline parameter
  • Well-tested and used in production to collect logs from Kubernetes clusters with 3000+ total CPU cores
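
As an illustration of the YAML configuration style, a minimal pipeline could look like the sketch below. The option names (`watching_dir`, `field`) are assumptions for illustration; check the plugin documentation for the authoritative schema.

```yaml
# Hypothetical minimal pipeline: read files, decode JSON, print to stdout.
# Option names are illustrative; consult the plugin docs for exact keys.
pipelines:
  example_pipeline:
    input:
      type: file                  # "file" input plugin
      watching_dir: /var/log/app  # directory to watch (assumed option name)
    actions:
      - type: json_decode         # parse the JSON payload
        field: log                # field holding the JSON string (assumed)
    output:
      type: stdout                # print processed events
```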

Performance

On a MacBook Pro 2017 with two physical cores, file.d can achieve the following throughput:

  • 1.7GB/s in files > devnull case
  • 1.0GB/s in files > json decode > devnull case

TBD: throughput on production servers.

Plugins

Input: dmesg, fake, file, http, journalctl, k8s, kafka

Action: add_file_name, add_host, convert_date, convert_log_level, debug, discard, flatten, join, join_template, json_decode, json_encode, keep_fields, mask, modify, parse_es, parse_re2, remove_fields, rename, set_time, throttle

Output: devnull, elasticsearch, file, gelf, kafka, postgres, s3, splunk, stdout
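
Plugins are composed per pipeline: one input, any number of actions, one output. A hedged sketch chaining several of the action plugins listed above (the option names, such as `fields`, are assumptions, not the documented schema):

```yaml
# Hypothetical pipeline combining plugins; option names are assumptions.
pipelines:
  k8s_logs:
    input:
      type: k8s                 # collect container logs from Kubernetes
    actions:
      - type: discard           # drop unwanted events (match rules omitted)
      - type: mask              # redact sensitive substrings
      - type: keep_fields       # keep only the listed fields (assumed option)
        fields: [time, level, message]
    output:
      type: elasticsearch       # ship events to Elasticsearch
```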

What's next

Join our community on Telegram: https://t.me/file_d_community
Generated using insane-doc