tetraf/hardware-query

Hardware Query

A cross-platform Rust library for querying detailed system hardware information with advanced monitoring and power management capabilities.

🚀 Features

Core Hardware Detection

  • ✅ Cross-platform hardware detection (Windows, Linux, macOS)
  • ✅ Detailed CPU information (cores, threads, cache, features)
  • ✅ GPU detection and capabilities (CUDA, ROCm, DirectML support)
  • ✅ Memory configuration and status
  • ✅ Storage device enumeration and properties
  • ✅ Network interface detection and capabilities
  • ✅ Hardware acceleration support detection (NPU, TPU, FPGA)
  • ✅ PCI/USB device enumeration
  • ✅ ARM-specific hardware detection (Raspberry Pi, Jetson, etc.)

🔄 Real-time Monitoring (NEW!)

  • ✅ Continuous hardware metrics monitoring
  • ✅ Configurable update intervals and thresholds
  • ✅ Event-driven notifications for thermal/power alerts
  • ✅ Background monitoring with async support
  • ✅ Comprehensive monitoring statistics

⚡ Power Management & Efficiency

  • ✅ Real-time power consumption tracking
  • ✅ Battery life estimation and health monitoring
  • ✅ Power efficiency scoring and optimization
  • ✅ Thermal throttling risk assessment
  • ✅ Power optimization recommendations

🌑️ Enhanced Thermal Management

  • ✅ Advanced temperature monitoring with history
  • ✅ Thermal throttling prediction algorithms
  • ✅ Cooling optimization recommendations
  • ✅ Sustained performance capability analysis
  • ✅ Fan curve analysis and optimization

🐳 Virtualization & Container Detection

  • ✅ Comprehensive virtualization environment detection
  • ✅ Container runtime identification (Docker, Kubernetes, etc.)
  • ✅ Resource limits and restrictions analysis
  • ✅ GPU passthrough capability detection
  • ✅ Performance impact assessment
  • ✅ Security feature analysis

🎯 AI/ML Optimization

  • ✅ Hardware suitability assessment for AI workloads
  • ✅ Performance scoring for different AI tasks
  • ✅ Memory and compute requirement analysis
  • ✅ Acceleration framework compatibility
  • ✅ Optimization recommendations

Quick Start

Add this to your Cargo.toml:

[dependencies]
hardware-query = "0.2.0"

Or, to enable the real-time monitoring features:

[dependencies]
hardware-query = { version = "0.2.0", features = ["monitoring"] }

🔧 Feature Flags

When to Use the monitoring Feature

Enable monitoring if you need:

  • ✅ Continuous hardware monitoring - Track temperatures, power, and performance over time
  • ✅ Real-time alerts - Get notified when hardware exceeds thresholds
  • ✅ Background monitoring - Async monitoring that doesn't block your application
  • ✅ Event-driven architecture - React to hardware changes as they happen
  • ✅ Server/system monitoring - Long-running applications that need to track hardware health

Skip monitoring if you only need:

  • ❌ One-time hardware queries (use SystemOverview::quick())
  • ❌ Static hardware information for configuration
  • ❌ Simple compatibility checks
  • ❌ Lightweight applications where async dependencies are unwanted

Pick whichever of the two dependency lines matches your needs:

# Lightweight - no async dependencies
hardware-query = "0.2.0"

# Full featured - includes real-time monitoring
hardware-query = { version = "0.2.0", features = ["monitoring"] }

🚀 Simplified API (Recommended for New Users)

For most use cases, start with the simplified API:

use hardware_query::SystemOverview;

// Get essential system info in one line - no hardware expertise needed!
let overview = SystemOverview::quick()?;
println!("CPU: {} ({} cores)", overview.cpu.name, overview.cpu.cores);
println!("Memory: {:.1} GB", overview.memory_gb);
println!("Performance Score: {}/100", overview.performance_score);
println!("AI Ready: {}", overview.is_ai_ready());

For domain-specific needs:

use hardware_query::HardwarePresets;

// AI/ML assessment with built-in expertise
let ai_assessment = HardwarePresets::ai_assessment()?;
println!("AI Score: {}/100", ai_assessment.ai_score);

// Gaming performance recommendations
let gaming = HardwarePresets::gaming_assessment()?;
println!("Recommended: {} at {:?}", 
    gaming.recommended_settings.resolution,
    gaming.recommended_settings.quality_preset);

Advanced Usage

use hardware_query::HardwareInfo;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Get complete system information
    let hw_info = HardwareInfo::query()?;
    
    // Access CPU information
    let cpu = hw_info.cpu();
    println!("CPU: {} {} with {} cores, {} threads",
        cpu.vendor(),
        cpu.model_name(),
        cpu.physical_cores(),
        cpu.logical_cores()
    );
    
    // Check virtualization environment
    let virt = hw_info.virtualization();
    if virt.is_virtualized() {
        println!("Running in: {} (Performance impact: {:.1}%)", 
            virt.environment_type,
            (1.0 - virt.get_performance_factor()) * 100.0
        );
    }
    
    // Power management
    if let Some(power) = hw_info.power_profile() {
        println!("Power State: {}", power.power_state);
        if let Some(power_draw) = power.total_power_draw {
            println!("Current Power Draw: {:.1}W", power_draw);
        }
        
        // Get optimization recommendations
        let optimizations = power.suggest_power_optimizations();
        for opt in optimizations {
            println!("💡 {}", opt.recommendation);
        }
    }
    
    // Thermal analysis
    let thermal = hw_info.thermal();
    if let Some(max_temp) = thermal.max_temperature() {
        println!("Max Temperature: {:.1}°C (Status: {})", 
            max_temp, thermal.thermal_status());
        
        // Predict thermal throttling
        let prediction = thermal.predict_thermal_throttling(1.0);
        if prediction.will_throttle {
            println!("⚠️  Thermal throttling predicted: {}", prediction.severity);
        }
        
        // Get cooling recommendations
        let cooling_recs = thermal.suggest_cooling_optimizations();
        for rec in cooling_recs.iter().take(3) {
            println!("🌑️ {}", rec.description);
        }
    }
    
    Ok(())
}

Advanced Real-time Monitoring

use hardware_query::{HardwareMonitor, MonitoringConfig, MonitoringEvent};
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Configure monitoring
    let mut config = MonitoringConfig::default();
    config.update_interval = Duration::from_secs(2);
    config.thermal_threshold = 75.0;
    
    let monitor = HardwareMonitor::with_config(config);
    
    // Add event callbacks
    monitor.on_event(|event| {
        match event {
            MonitoringEvent::ThermalAlert { sensor_name, temperature, .. } => {
                println!("🚨 Thermal Alert: {} at {:.1}°C", sensor_name, temperature);
            }
            MonitoringEvent::PowerAlert { current_power, .. } => {
                println!("⚡ Power Alert: {:.1}W", current_power);
            }
            _ => {}
        }
    }).await;
    
    // Start monitoring
    monitor.start_monitoring().await?;
    
    // Monitor for 60 seconds
    tokio::time::sleep(Duration::from_secs(60)).await;
    
    // Stop and get statistics
    monitor.stop_monitoring().await;
    let stats = monitor.get_stats().await;
    println!("Generated {} events ({} thermal alerts)", 
        stats.total_events, stats.thermal_alerts);
    
    Ok(())
}
Detailed Component Queries

The snippets below continue the Advanced Usage example above, reusing the same hw_info value obtained from HardwareInfo::query():

// Get GPU information
for (i, gpu) in hw_info.gpus().iter().enumerate() {
    println!("GPU {}: {} {} with {} GB VRAM",
        i + 1,
        gpu.vendor(),
        gpu.model_name(),
        gpu.memory_gb()
    );
    
    // Check GPU capabilities
    println!("  CUDA support: {}", gpu.supports_cuda());
    println!("  ROCm support: {}", gpu.supports_rocm());
    println!("  DirectML support: {}", gpu.supports_directml());
}

// Memory information
let mem = hw_info.memory();
println!("System memory: {:.1} GB total, {:.1} GB available",
    mem.total_gb(),
    mem.available_gb()
);

// Storage information
for (i, disk) in hw_info.storage_devices().iter().enumerate() {
    println!("Disk {}: {} {:.1} GB ({:?} type)",
        i + 1,
        disk.model,
        disk.capacity_gb,
        disk.storage_type
    );
}

// Network interfaces
for interface in hw_info.network_interfaces() {
    println!("Network: {} - {}", 
        interface.name,
        interface.mac_address
    );
}

// Check specialized hardware
for npu in hw_info.npus() {
    println!("NPU detected: {} {}", npu.vendor(), npu.model_name());
}

for tpu in hw_info.tpus() {
    println!("TPU detected: {} {}", tpu.vendor(), tpu.model_name());
}

// Check ARM-specific hardware (Raspberry Pi, Jetson, etc.)
if let Some(arm) = hw_info.arm_hardware() {
    println!("ARM System: {}", arm.system_type);
}

// Check FPGA hardware
for fpga in hw_info.fpgas() {
    println!("FPGA: {} {} with {} logic elements",
        fpga.vendor,
        fpga.family,
        fpga.logic_elements.unwrap_or(0)
    );
}

// Serialize hardware information to JSON
let hw_json = hw_info.to_json()?;
println!("Hardware JSON: {}", hw_json);



Specialized Hardware Support

The library provides comprehensive detection for AI/ML-oriented hardware:

  • NPUs: Intel Movidius, GNA, XDNA; Apple Neural Engine; Qualcomm Hexagon
  • TPUs: Google Cloud TPU and Edge TPU; Intel Habana
  • ARM systems: Raspberry Pi, NVIDIA Jetson, Apple Silicon with power management
  • FPGAs: Intel/Altera and Xilinx families with AI optimization scoring

use hardware_query::HardwareInfo;

let hw_info = HardwareInfo::query()?;

// Check for specialized AI hardware
for npu in hw_info.npus() {
    println!("NPU: {} {}", 
        npu.vendor(), 
        npu.model_name()
    );
}

for tpu in hw_info.tpus() {
    println!("TPU: {} {}", 
        tpu.vendor(),
        tpu.model_name()
    );
}

Platform Support

| Platform | CPU | GPU | Memory | Storage | Network | Battery | Thermal | PCI | USB |
|----------|-----|-----|--------|---------|---------|---------|---------|-----|-----|
| Windows  | ✅  | ✅  | ✅     | ✅      | ✅      | ✅      | ✅      | ✅  | ✅  |
| Linux    | ✅  | ✅  | ✅     | ✅      | ✅      | ✅      | ✅      | ✅  | ✅  |
| macOS    | ✅  | ✅  | ✅     | ✅      | ✅      | ✅      | ✅      | ✅  | ✅  |

Optional Features

Enable additional GPU support with feature flags:

[dependencies]
hardware-query = { version = "0.2.0", features = ["nvidia", "amd", "intel"] }

  • nvidia: NVIDIA GPU support via NVML
  • amd: AMD GPU support via ROCm
  • intel: Intel GPU support

Examples

See the examples directory for complete usage examples.

Building

# Build with default features
cargo build

# Build with GPU support
cargo build --features="nvidia,amd,intel"

# Run tests
cargo test

# Try the simplified API first (recommended for new users!)
cargo run --example 01_simplified_api

# Or run the basic hardware example
cargo run --example 02_basic_hardware

Dependencies

  • sysinfo - Cross-platform system information
  • serde - Serialization framework
  • thiserror - Error handling
  • num_cpus - CPU core detection

Optional Dependencies

  • nvml-wrapper - NVIDIA GPU support (feature: nvidia)
  • wmi - Windows Management Instrumentation (Windows only)
  • libc - Linux system calls (Linux only)
  • core-foundation - macOS system APIs (macOS only)

Performance

The library is designed for performance with:

  • Lazy evaluation of hardware information
  • Minimal system calls
  • Efficient data structures
  • Optional caching for repeated queries

Typical query times:

  • Complete hardware scan: 10-50ms
  • CPU information: 1-5ms
  • GPU information: 5-20ms
  • Memory information: 1-3ms

Error Handling

The library uses comprehensive error handling:

use hardware_query::{HardwareInfo, HardwareQueryError};

match HardwareInfo::query() {
    Ok(hw_info) => {
        // Use hardware information
    }
    Err(HardwareQueryError::SystemInfoUnavailable(msg)) => {
        eprintln!("System info unavailable: {}", msg);
    }
    Err(HardwareQueryError::PermissionDenied(msg)) => {
        eprintln!("Permission denied: {}", msg);
    }
    Err(e) => {
        eprintln!("Hardware query error: {}", e);
    }
}

Contributing

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

License

This project is licensed under the MIT OR Apache-2.0 License - see the LICENSE-MIT and LICENSE-APACHE files for details.

Acknowledgments

  • Built on top of the excellent sysinfo crate
  • Inspired by the need for better hardware detection in AI workload placement
  • Thanks to all contributors who helped improve cross-platform compatibility
