| Crates.io | huginn-net-http |
| lib.rs | huginn-net-http |
| version | 1.7.2 |
| created_at | 2025-10-04 10:08:39.362643+00 |
| updated_at | 2026-01-10 15:45:38.557834+00 |
| description | HTTP fingerprinting (p0f-style) analysis for huginn-net |
| homepage | https://github.com/biandratti/huginn-net |
| repository | https://github.com/biandratti/huginn-net |
| max_upload_size | |
| id | 1867759 |
| size | 426,001 |
This crate provides HTTP-based passive fingerprinting capabilities. It analyzes HTTP/1.x and HTTP/2 headers to identify browsers, web servers, and detect preferred languages.
It builds signatures from observable traffic data (ObservableHttpRequest, ObservableHttpResponse) without being limited to predefined p0f signatures. This crate also includes an Akamai HTTP/2 fingerprint parser that extracts fingerprints from HTTP/2 connection frames (SETTINGS, WINDOW_UPDATE, PRIORITY, HEADERS) following the Blackhat EU 2017 specification.
Important Design Consideration:
Unlike p0f HTTP fingerprinting (which normalizes header order), Akamai fingerprinting requires preserving the original header order from the HTTP/2 frames, because the order of pseudo-headers (:method, :path, :authority, :scheme) is a critical component of the Akamai fingerprint.
Why it's not integrated into the main processing pipeline:
Due to this requirement for preserving original header order, the Akamai fingerprint extractor is provided as a standalone utility (Http2FingerprintExtractor) rather than being integrated into the main HTTP processing pipeline. The main pipeline normalizes and processes headers for p0f-style fingerprinting, which would corrupt the original ordering needed for Akamai fingerprints.
Usage:
```rust
use huginn_net_http::http2_fingerprint_extractor::Http2FingerprintExtractor;

let mut extractor = Http2FingerprintExtractor::new();

// Add HTTP/2 data incrementally (handles the connection preface automatically);
// `http2_data` is the raw bytes read from the connection
extractor.add_bytes(&http2_data)?;

if let Some(fingerprint) = extractor.get_fingerprint() {
    println!("Akamai fingerprint: {}", fingerprint.fingerprint);
    println!("Fingerprint hash: {}", fingerprint.hash);
}
```
This design allows you to extract Akamai fingerprints before TLS termination or in scenarios where you need to preserve the exact original frame structure, while still using the main pipeline for standard HTTP/1.x and HTTP/2 analysis with p0f-style fingerprinting.
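For reference, the fingerprint string defined in the Blackhat EU 2017 paper has the shape S|WU|P|PS: the SETTINGS id:value pairs, the connection-level WINDOW_UPDATE increment, the PRIORITY frames, and the pseudo-header order. The following is a minimal, self-contained sketch of assembling that string from already-parsed frame data; the types and function here are illustrative, not the crate's API:

```rust
// Illustrative only (not the crate's API): assemble an Akamai-style
// HTTP/2 fingerprint string S|WU|P|PS from already-parsed frame data.

struct Priority {
    stream_id: u32,
    exclusive: bool,
    depends_on: u32,
    weight: u8,
}

fn akamai_fingerprint(
    settings: &[(u16, u32)],      // SETTINGS (id, value) pairs, in original order
    window_update: Option<u32>,   // connection-level WINDOW_UPDATE increment, if any
    priorities: &[Priority],      // PRIORITY frames, in original order
    pseudo_header_order: &[char], // e.g. ['m', 'p', 'a', 's'] for :method,:path,:authority,:scheme
) -> String {
    let s = settings
        .iter()
        .map(|(id, val)| format!("{id}:{val}"))
        .collect::<Vec<_>>()
        .join(";");
    // The paper renders a missing WINDOW_UPDATE as "00"
    let wu = window_update.map_or_else(|| "00".to_string(), |w| w.to_string());
    let p = if priorities.is_empty() {
        "0".to_string()
    } else {
        priorities
            .iter()
            .map(|pr| {
                format!("{}:{}:{}:{}", pr.stream_id, pr.exclusive as u8, pr.depends_on, pr.weight)
            })
            .collect::<Vec<_>>()
            .join(",")
    };
    let ps = pseudo_header_order
        .iter()
        .map(|c| c.to_string())
        .collect::<Vec<_>>()
        .join(",");
    format!("{s}|{wu}|{p}|{ps}")
}
```

Note that the pseudo-header component is exactly why original frame order matters: reordering `['m', 'p', 'a', 's']` into a normalized order would produce a different fingerprint.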
Note: Live packet capture requires `libpcap` (usually pre-installed on Linux/macOS).
Add this to your Cargo.toml:
```toml
[dependencies]
huginn-net-http = "1.7.2"
huginn-net-db = "1.7.2"
```
```rust
use huginn_net_db::Database;
use huginn_net_http::{FilterConfig, HuginnNetHttp, HuginnNetHttpError, IpFilter, PortFilter, HttpAnalysisResult};
use std::sync::{Arc, mpsc};

fn main() -> Result<(), HuginnNetHttpError> {
    // Load database for browser/server fingerprinting
    let db = match Database::load_default() {
        Ok(db) => Arc::new(db),
        Err(e) => {
            eprintln!("Failed to load database: {e}");
            return Err(HuginnNetHttpError::Parse(format!("Database error: {e}")));
        }
    };

    // Create analyzer
    let mut analyzer = match HuginnNetHttp::new(Some(db), 1000) {
        Ok(analyzer) => analyzer,
        Err(e) => {
            eprintln!("Failed to create analyzer: {e}");
            return Err(e);
        }
    };

    // Optional: Configure filters (can be combined)
    if let Ok(ip_filter) = IpFilter::new().allow("192.168.1.0/24") {
        let filter = FilterConfig::new()
            .with_port_filter(PortFilter::new().destination(80))
            .with_ip_filter(ip_filter);
        analyzer = analyzer.with_filter(filter);
    }

    let (sender, receiver) = mpsc::channel::<HttpAnalysisResult>();

    // Live capture (use parallel mode for high throughput)
    std::thread::spawn(move || {
        if let Err(e) = analyzer.analyze_network("eth0", sender, None) {
            eprintln!("Analysis error: {e}");
        }
    });

    // Or PCAP analysis (always use sequential mode)
    // std::thread::spawn(move || {
    //     if let Err(e) = analyzer.analyze_pcap("capture.pcap", sender, None) {
    //         eprintln!("Analysis error: {e}");
    //     }
    // });

    for result in receiver {
        if let Some(http_request) = result.http_request { println!("{http_request}"); }
        if let Some(http_response) = result.http_response { println!("{http_response}"); }
    }

    Ok(())
}
```
For a complete working example with signal handling, error management, and CLI options, see examples/capture-http.rs.
The library supports packet filtering to reduce processing overhead and focus on specific traffic. Filters can be combined using AND logic (all conditions must match):
Filter types include IP filters (IpFilter) and port filters (PortFilter), as shown in the example above.
All filters support both Allow (allowlist) and Deny (denylist) modes. See the filter documentation for complete details.
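To make the AND-combination semantics concrete, here is a self-contained sketch (deliberately independent of the crate's actual filter types, whose Deny-mode API is not shown above) of combining an allow/deny IP prefix check with a destination-port check:

```rust
use std::net::Ipv4Addr;

// Illustrative packet-filter combination (not the crate's API):
// a packet passes only if EVERY configured filter matches (AND logic).

enum Mode {
    Allow, // match packets inside the prefix
    Deny,  // match packets outside the prefix
}

struct IpPrefixFilter {
    mode: Mode,
    network: Ipv4Addr,
    prefix_len: u8,
}

impl IpPrefixFilter {
    fn matches(&self, ip: Ipv4Addr) -> bool {
        let mask = if self.prefix_len == 0 {
            0
        } else {
            u32::MAX << (32 - u32::from(self.prefix_len))
        };
        let in_net = (u32::from(ip) & mask) == (u32::from(self.network) & mask);
        match self.mode {
            Mode::Allow => in_net,
            Mode::Deny => !in_net,
        }
    }
}

fn passes(src_ip: Ipv4Addr, dst_port: u16, ip_filter: &IpPrefixFilter, allowed_dst_port: u16) -> bool {
    // AND logic: all conditions must hold for the packet to be processed
    ip_filter.matches(src_ip) && dst_port == allowed_dst_port
}
```

Because the conditions are AND-ed, narrowing traffic to one subnet and one port drops everything else before the (comparatively expensive) HTTP parsing runs.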
```text
[HTTP Request] 1.2.3.4:1524 → 4.3.2.1:80
Browser: Firefox:10.x or newer
Lang: English
Params: none
Sig: 1:Host,User-Agent,Accept=[,*/*;q=],?Accept-Language=[;q=],Accept-Encoding=[gzip, deflate],?DNT=[1],Connection=[keep-alive],?Referer:Accept-Charset,Keep-Alive:Firefox/

[HTTP Response] 192.168.1.22:58494 → 91.189.91.21:80
Server: nginx/1.14.0 (Ubuntu)
Params: anonymous
Sig: server=[nginx/1.14.0 (Ubuntu)],date=[Tue, 17 Dec 2024 13:54:16 GMT],x-cache-status=[from content-cache-1ss/0],connection=[close]:Server,Date,X-Cache-Status,Connection:
```
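The Sig lines above follow p0f's HTTP signature layout of colon-separated sections (version, observed headers, absent headers, expected software), where a `:` inside a `[...]` bracketed value, such as the timestamp in `date=[Tue, 17 Dec 2024 13:54:16 GMT]`, must not split sections. A sketch of a bracket-aware splitter (illustrative, not the crate's parser):

```rust
// Split a p0f-style HTTP signature into its top-level sections,
// ignoring ':' characters that appear inside [...] bracketed values.
fn split_signature(sig: &str) -> Vec<String> {
    let mut parts = Vec::new();
    let mut current = String::new();
    let mut depth = 0usize;
    for c in sig.chars() {
        match c {
            '[' => {
                depth += 1;
                current.push(c);
            }
            ']' => {
                depth = depth.saturating_sub(1);
                current.push(c);
            }
            // Only a ':' outside brackets separates sections
            ':' if depth == 0 => parts.push(std::mem::take(&mut current)),
            _ => current.push(c),
        }
    }
    parts.push(current);
    parts
}
```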
This crate is part of the Huginn Net ecosystem. For multi-protocol analysis, see huginn-net; its sibling crates cover the other individual protocols.
For complete documentation, examples, and integration guides, see the main huginn-net README.
Dual-licensed under MIT or Apache 2.0.