A high-performance NetFlow/IPFIX/sFlow collector written in Rust, designed to replace GoFlow2 with better performance and memory safety.
- NetFlow Support: v5, v9, and IPFIX protocols
- sFlow Support: v5 with expanded flow and counter samples
- Enterprise Fields: Support for vendor-specific fields (Cisco, Silver Peak, etc.)
- Async Architecture: Built on Tokio for high-performance I/O
- Modular Design: Pluggable decoders, formatters, producers, and transporters
- HTTP Metrics Server: Health checks and metrics endpoints
- Configuration: YAML-based configuration with sensible defaults
- Logging: Structured logging with configurable levels
- Error Handling: Comprehensive error handling with anyhow/thiserror
- Performance benchmarks and optimization
- Kafka transport implementation
- Additional enterprise field vendors
- Comprehensive test coverage
- Rust 1.70+ and Cargo
- Network devices configured to send NetFlow/sFlow to your collector
- Clone the repository:

  ```bash
  git clone https://github.com/modev2301/rustflow.git
  cd rustflow
  ```

- Build the project:

  ```bash
  cargo build --release
  ```

- Run with default configuration:

  ```bash
  ./target/release/rustflow
  ```
Create a config.yaml file:

```yaml
# HTTP metrics server address
metrics_addr: "127.0.0.1:8080"

# Collector configurations
collectors:
  netflow: "0.0.0.0:2055"
  sflow: "0.0.0.0:6343"

# Logging configuration
logging:
  level: "info"       # debug, info, warn, error
  structured: true

# Output configuration
output:
  format: "json"      # json, binary, text
  producer: "raw"     # raw, proto
  transport: "file"   # file, kafka
  file_path: "flows.json"
  kafka:
    brokers:
      - "localhost:9092"
    topic: "netflow"
    key: null

# Performance configuration
performance:
  buffer_size: 9000
  worker_threads: 4
  batch_size: 1000
```

```bash
# Run with custom config file
rustflow --config my-config.yaml

# Enable debug logging
rustflow --debug

# Add additional collectors via command line
rustflow --listen netflow:0.0.0.0:2056 --listen sflow:0.0.0.0:6344

# Show help
rustflow --help
```
- Decoders: Parse NetFlow/sFlow packets into structured data
  - NetFlowDecoder: Handles NetFlow v5, v9, and IPFIX
  - SFlowDecoder: Handles sFlow v5 with various sample types
- Formatters: Convert flow records to different output formats
  - JsonFormatter: JSON output with metadata
  - BinaryFormatter: Protobuf binary format
  - TextFormatter: Human-readable text format
- Producers: Process and transform flow records
  - RawProducer: Raw flow data
  - ProtoProducer: Protobuf-encoded data
- Transporters: Send data to various destinations
  - FileTransporter: Write to files
  - KafkaTransporter: Send to Kafka topics
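The pluggable stages above can be modeled as traits so implementations are swappable via configuration. Here is a minimal sketch of that design; the trait names, `FlowRecord` fields, and the in-memory transporter are illustrative, not RustFlow's actual API:

```rust
// Illustrative sketch of a pluggable formatter/transporter pipeline.
// Names and signatures are hypothetical, not RustFlow's real types.
#[derive(Debug, Clone)]
pub struct FlowRecord {
    pub src_addr: String,
    pub dst_addr: String,
    pub bytes: u64,
}

// Each stage is a trait, so decoders/formatters/transporters can be swapped.
pub trait Formatter {
    fn format(&self, record: &FlowRecord) -> String;
}

pub trait Transporter {
    fn send(&mut self, payload: String);
}

pub struct TextFormatter;

impl Formatter for TextFormatter {
    fn format(&self, r: &FlowRecord) -> String {
        format!("{} -> {} ({} bytes)", r.src_addr, r.dst_addr, r.bytes)
    }
}

// Buffers output in memory: a stand-in for the file or Kafka transporters.
pub struct MemoryTransporter {
    pub sent: Vec<String>,
}

impl Transporter for MemoryTransporter {
    fn send(&mut self, payload: String) {
        self.sent.push(payload);
    }
}

fn main() {
    let record = FlowRecord {
        src_addr: "192.168.1.1".into(),
        dst_addr: "192.168.1.2".into(),
        bytes: 1500,
    };
    let formatter = TextFormatter;
    let mut transport = MemoryTransporter { sent: Vec::new() };
    transport.send(formatter.format(&record));
    println!("{}", transport.sent[0]);
}
```

Because each stage only depends on the trait, adding a new output format or destination means implementing one trait rather than touching the rest of the pipeline.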
RustFlow supports vendor-specific enterprise fields through NetFlow v9 and IPFIX:
- Cisco (PEN: 9): MPLS label fields, QoS information
- Silver Peak (PEN: 23867): WAN optimization metrics
Enterprise fields are included in all output formats:
JSON Format:

```json
{
  "type": "NETFLOW_V9",
  "src_addr": "192.168.1.1",
  "dst_addr": "192.168.1.2",
  "enterprise_fields": {
    "enterprise_9_1": "deadbeef",
    "enterprise_23867_1": "12345678"
  }
}
```

Text Format:

```text
Flow Record:
  Type: NETFLOW_V9
  Source: 192.168.1.1:80
  Destination: 192.168.1.2:443
  Enterprise Fields:
    Cisco (PEN=9, Type=1): deadbeef
    Silver Peak (PEN=23867, Type=1): 12345678
```

Binary Format: Enterprise fields are included in the protobuf message structure.
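The JSON keys above follow an `enterprise_{PEN}_{type}` naming pattern with hex-encoded values. A small sketch of how such key/value pairs could be produced (the helper names are hypothetical; the real formatter code may differ):

```rust
// Sketch: render an enterprise field (PEN, field type, raw bytes) as the
// JSON key/value shown in the sample output. Helper names are illustrative.
fn enterprise_key(pen: u32, field_type: u16) -> String {
    format!("enterprise_{}_{}", pen, field_type)
}

// Hex-encode the raw field payload, two lowercase digits per byte.
fn hex_value(data: &[u8]) -> String {
    data.iter().map(|b| format!("{:02x}", b)).collect()
}

fn main() {
    let key = enterprise_key(9, 1);
    let value = hex_value(&[0xde, 0xad, 0xbe, 0xef]);
    println!("\"{}\": \"{}\"", key, value);
}
```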
Network Device → UDP Socket → Decoder → Formatter → Producer → Transporter → Output
Run the performance benchmarks:

```bash
cargo bench
```

- Memory Usage: Significantly lower than GoFlow2, with no garbage-collection overhead
- CPU Usage: Optimized for high-throughput scenarios
- Latency: Sub-millisecond packet processing
- Throughput: Designed to handle 100k+ flows/second
Check the collector's health:

```bash
curl http://localhost:8080/health
```

Response:

```json
{
  "status": "healthy",
  "timestamp": "2024-01-01T12:00:00Z"
}
```

Fetch runtime metrics:

```bash
curl http://localhost:8080/metrics
```

Response:

```json
{
  "rustflow_packets_received_total": 12345,
  "rustflow_flows_processed_total": 67890,
  "rustflow_errors_total": 0,
  "rustflow_uptime_seconds": 3600
}
```

Query basic server information:

```bash
curl http://localhost:8080/
```

Response:

```json
{
  "name": "rustflow",
  "version": "0.1.0",
  "description": "A NetFlow/IPFIX/sFlow collector in Rust"
}
```

```bash
# Debug build
cargo build

# Release build
cargo build --release

# Run tests
cargo test

# Run benchmarks
cargo bench
```

To add support for a new vendor's enterprise fields:
- Add the PEN (Private Enterprise Number) to the vendor mapping in src/decoders/netflow.rs:

  ```rust
  match enterprise_id {
      9 => self.handle_cisco_enterprise_fields(record, field_type, data)?,
      23867 => self.handle_silverpeak_enterprise_fields(record, field_type, data)?,
      12345 => self.handle_new_vendor_enterprise_fields(record, field_type, data)?, // Add here
      _ => {
          debug!("Unknown enterprise: PEN={}, field_type={}", enterprise_id, field_type);
      }
  }
  ```

- Implement the handler function:

  ```rust
  fn handle_new_vendor_enterprise_fields(
      &self,
      record: &mut FlowRecord,
      field_type: u16,
      data: &[u8],
  ) -> Result<()> {
      match field_type {
          1 => {
              // Handle field type 1
              let value = self.read_u32(data)?;
              record.enterprise_fields.insert((12345, 1), data.to_vec());
              debug!("New Vendor Field 1: {}", value);
          }
          _ => {
              debug!("Unknown New Vendor enterprise field: {}", field_type);
          }
      }
      Ok(())
  }
  ```

- Update the text formatter to include the vendor name in src/format/text.rs:

  ```rust
  let vendor_name = match pen {
      9 => "Cisco",
      23867 => "Silver Peak",
      12345 => "New Vendor", // Add here
      _ => "Unknown",
  };
  ```
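The handler above calls a `read_u32` helper that is not shown here. NetFlow/IPFIX field payloads are big-endian on the wire, so a minimal standalone version might look like this (the signature is an assumption; RustFlow's actual helper lives in its decoder utilities and may differ):

```rust
// Hypothetical sketch of the read_u32 helper referenced in the handler:
// read a big-endian u32 from the start of a field's raw payload.
fn read_u32(data: &[u8]) -> Result<u32, String> {
    let bytes: [u8; 4] = data
        .get(..4) // reject payloads shorter than 4 bytes
        .ok_or_else(|| "need at least 4 bytes".to_string())?
        .try_into()
        .map_err(|_| "slice conversion failed".to_string())?;
    Ok(u32::from_be_bytes(bytes))
}

fn main() {
    // 0xdeadbeef encoded as big-endian bytes.
    let value = read_u32(&[0xde, 0xad, 0xbe, 0xef]).unwrap();
    println!("{:#010x}", value);
}
```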
```text
rustflow/
├── src/
│   ├── main.rs          # Application entry point
│   ├── lib.rs           # Library exports
│   ├── config.rs        # Configuration management
│   ├── decoders/        # Protocol decoders
│   │   ├── mod.rs
│   │   ├── netflow.rs   # NetFlow/IPFIX decoder
│   │   ├── sflow.rs     # sFlow decoder
│   │   └── utils.rs     # Decoder utilities
│   ├── format/          # Output formatters
│   │   ├── mod.rs
│   │   ├── json.rs      # JSON formatter
│   │   ├── binary.rs    # Binary formatter
│   │   └── text.rs      # Text formatter
│   ├── producer/        # Data producers
│   │   ├── mod.rs
│   │   ├── raw.rs       # Raw producer
│   │   └── proto.rs     # Protobuf producer
│   └── transport/       # Data transporters
│       ├── mod.rs
│       ├── file.rs      # File transport
│       └── kafka.rs     # Kafka transport
├── benches/             # Performance benchmarks
├── proto/               # Protocol buffer definitions
├── config.yaml          # Sample configuration
└── Cargo.toml           # Dependencies and metadata
```
| Feature | GoFlow2 | RustFlow |
|---|---|---|
| Language | Go | Rust |
| Memory Safety | GC | Zero-cost abstractions |
| Performance | Good | Excellent |
| Memory Usage | Higher | Lower |
| Concurrency | Goroutines | Async/await |
| Type Safety | Good | Excellent |
| Compile Time | Fast | Slower |
| Runtime | GC | No GC |
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
- Follow Rust coding standards
- Add tests for new features
- Update documentation
- Run benchmarks for performance-critical changes
This project is licensed under the MIT License - see the LICENSE file for details.
- Complete IPFIX template handling
- Kafka transport implementation
- Docker containerization
- Kubernetes deployment examples
- Prometheus metrics integration
- Advanced filtering and aggregation
- Plugin system for custom decoders
- Web UI for monitoring and configuration