A cross-platform Rust library for querying detailed system hardware information with advanced monitoring and power management capabilities.
- Cross-platform hardware detection (Windows, Linux, macOS)
- Detailed CPU information (cores, threads, cache, features)
- GPU detection and capabilities (CUDA, ROCm, DirectML support)
- Memory configuration and status
- Storage device enumeration and properties
- Network interface detection and capabilities
- Hardware acceleration support detection (NPU, TPU, FPGA)
- PCI/USB device enumeration
- ARM-specific hardware detection (Raspberry Pi, Jetson, etc.)
- Continuous hardware metrics monitoring
- Configurable update intervals and thresholds
- Event-driven notifications for thermal/power alerts
- Background monitoring with async support
- Comprehensive monitoring statistics
- Real-time power consumption tracking
- Battery life estimation and health monitoring
- Power efficiency scoring and optimization
- Thermal throttling risk assessment
- Power optimization recommendations
- Advanced temperature monitoring with history
- Thermal throttling prediction algorithms
- Cooling optimization recommendations
- Sustained performance capability analysis
- Fan curve analysis and optimization
- Comprehensive virtualization environment detection
- Container runtime identification (Docker, Kubernetes, etc.)
- Resource limits and restrictions analysis
- GPU passthrough capability detection
- Performance impact assessment
- Security feature analysis
- Hardware suitability assessment for AI workloads
- Performance scoring for different AI tasks
- Memory and compute requirement analysis
- Acceleration framework compatibility
- Optimization recommendations
Add this to your `Cargo.toml`:

```toml
[dependencies]
hardware-query = "0.2.0"

# For real-time monitoring features
hardware-query = { version = "0.2.0", features = ["monitoring"] }
```

Enable monitoring if you need:
- Continuous hardware monitoring - Track temperatures, power, and performance over time
- Real-time alerts - Get notified when hardware exceeds thresholds
- Background monitoring - Async monitoring that doesn't block your application
- Event-driven architecture - React to hardware changes as they happen
- Server/system monitoring - Long-running applications that need to track hardware health
Skip monitoring if you only need:
- One-time hardware queries (use `SystemOverview::quick()`)
- Static hardware information for configuration
- Simple compatibility checks
- Lightweight applications where async dependencies are unwanted
```toml
# Lightweight - no async dependencies
hardware-query = "0.2.0"

# Full featured - includes real-time monitoring
hardware-query = { version = "0.2.0", features = ["monitoring"] }
```

For most use cases, start with the simplified API:
```rust
use hardware_query::SystemOverview;

// Get essential system info in one line - no hardware expertise needed!
let overview = SystemOverview::quick()?;
println!("CPU: {} ({} cores)", overview.cpu.name, overview.cpu.cores);
println!("Memory: {:.1} GB", overview.memory_gb);
println!("Performance Score: {}/100", overview.performance_score);
println!("AI Ready: {}", overview.is_ai_ready());
```

For domain-specific needs:
```rust
use hardware_query::HardwarePresets;

// AI/ML assessment with built-in expertise
let ai_assessment = HardwarePresets::ai_assessment()?;
println!("AI Score: {}/100", ai_assessment.ai_score);

// Gaming performance recommendations
let gaming = HardwarePresets::gaming_assessment()?;
println!("Recommended: {} at {:?}",
    gaming.recommended_settings.resolution,
    gaming.recommended_settings.quality_preset);
```

For complete, detailed hardware information, use the full `HardwareInfo` API:

```rust
use hardware_query::HardwareInfo;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Get complete system information
    let hw_info = HardwareInfo::query()?;

    // Access CPU information
    let cpu = hw_info.cpu();
    println!("CPU: {} {} with {} cores, {} threads",
        cpu.vendor(),
        cpu.model_name(),
        cpu.physical_cores(),
        cpu.logical_cores()
    );

    // Check virtualization environment
    let virt = hw_info.virtualization();
    if virt.is_virtualized() {
        println!("Running in: {} (Performance impact: {:.1}%)",
            virt.environment_type,
            (1.0 - virt.get_performance_factor()) * 100.0
        );
    }

    // Power management
    if let Some(power) = hw_info.power_profile() {
        println!("Power State: {}", power.power_state);
        if let Some(power_draw) = power.total_power_draw {
            println!("Current Power Draw: {:.1}W", power_draw);
        }

        // Get optimization recommendations
        let optimizations = power.suggest_power_optimizations();
        for opt in optimizations {
            println!("💡 {}", opt.recommendation);
        }
    }

    // Thermal analysis
    let thermal = hw_info.thermal();
    if let Some(max_temp) = thermal.max_temperature() {
        println!("Max Temperature: {:.1}°C (Status: {})",
            max_temp, thermal.thermal_status());

        // Predict thermal throttling
        let prediction = thermal.predict_thermal_throttling(1.0);
        if prediction.will_throttle {
            println!("⚠️ Thermal throttling predicted: {}", prediction.severity);
        }

        // Get cooling recommendations
        let cooling_recs = thermal.suggest_cooling_optimizations();
        for rec in cooling_recs.iter().take(3) {
            println!("🌡️ {}", rec.description);
        }
    }

    Ok(())
}
```

Real-time monitoring (requires the `monitoring` feature) runs in the background and raises alerts when thresholds are exceeded:

```rust
use hardware_query::{HardwareMonitor, MonitoringConfig, MonitoringEvent};
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Configure monitoring
    let mut config = MonitoringConfig::default();
    config.update_interval = Duration::from_secs(2);
    config.thermal_threshold = 75.0;

    let monitor = HardwareMonitor::with_config(config);

    // Add event callbacks
    monitor.on_event(|event| {
        match event {
            MonitoringEvent::ThermalAlert { sensor_name, temperature, .. } => {
                println!("🚨 Thermal Alert: {} at {:.1}°C", sensor_name, temperature);
            }
            MonitoringEvent::PowerAlert { current_power, .. } => {
                println!("⚡ Power Alert: {:.1}W", current_power);
            }
            _ => {}
        }
    }).await;

    // Start monitoring
    monitor.start_monitoring().await?;

    // Monitor for 60 seconds
    tokio::time::sleep(Duration::from_secs(60)).await;

    // Stop and get statistics
    monitor.stop_monitoring().await;
    let stats = monitor.get_stats().await;
    println!("Generated {} events ({} thermal alerts)",
        stats.total_events, stats.thermal_alerts);

    Ok(())
}
```

Individual components can be queried from the same `HardwareInfo` snapshot:

```rust
use hardware_query::HardwareInfo;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Query once, then inspect individual components
    let hw_info = HardwareInfo::query()?;

    // Get GPU information
    for (i, gpu) in hw_info.gpus().iter().enumerate() {
        println!("GPU {}: {} {} with {} GB VRAM",
            i + 1,
            gpu.vendor(),
            gpu.model_name(),
            gpu.memory_gb()
        );

        // Check GPU capabilities
        println!("  CUDA support: {}", gpu.supports_cuda());
        println!("  ROCm support: {}", gpu.supports_rocm());
        println!("  DirectML support: {}", gpu.supports_directml());
    }

    // Memory information
    let mem = hw_info.memory();
    println!("System memory: {:.1} GB total, {:.1} GB available",
        mem.total_gb(),
        mem.available_gb()
    );

    // Storage information
    for (i, disk) in hw_info.storage_devices().iter().enumerate() {
        println!("Disk {}: {} {:.1} GB ({:?} type)",
            i + 1,
            disk.model,
            disk.capacity_gb,
            disk.storage_type
        );
    }

    // Network interfaces
    for interface in hw_info.network_interfaces() {
        println!("Network: {} - {}",
            interface.name,
            interface.mac_address
        );
    }

    // Check specialized hardware
    for npu in hw_info.npus() {
        println!("NPU detected: {} {}", npu.vendor(), npu.model_name());
    }
    for tpu in hw_info.tpus() {
        println!("TPU detected: {} {}", tpu.vendor(), tpu.model_name());
    }

    // Check ARM-specific hardware (Raspberry Pi, Jetson, etc.)
    if let Some(arm) = hw_info.arm_hardware() {
        println!("ARM System: {}", arm.system_type);
    }

    // Check FPGA hardware
    for fpga in hw_info.fpgas() {
        println!("FPGA: {} {} with {} logic elements",
            fpga.vendor,
            fpga.family,
            fpga.logic_elements.unwrap_or(0)
        );
    }

    // Serialize hardware information to JSON
    let hw_json = hw_info.to_json()?;
    println!("Hardware JSON: {}", hw_json);

    Ok(())
}
```
## Specialized Hardware Support
The library provides comprehensive detection for AI/ML-oriented hardware:
- **NPUs**: Intel Movidius, GNA, XDNA; Apple Neural Engine; Qualcomm Hexagon
- **TPUs**: Google Cloud TPU and Edge TPU; Intel Habana
- **ARM Systems**: Raspberry Pi, NVIDIA Jetson, Apple Silicon with power management
- **FPGAs**: Intel/Altera and Xilinx families with AI optimization scoring
```rust
use hardware_query::HardwareInfo;

let hw_info = HardwareInfo::query()?;

// Check for specialized AI hardware
for npu in hw_info.npus() {
    println!("NPU: {} {}",
        npu.vendor(),
        npu.model_name()
    );
}

for tpu in hw_info.tpus() {
    println!("TPU: {} {}",
        tpu.vendor(),
        tpu.model_name()
    );
}
```
Platform support by component:

| Platform | CPU | GPU | Memory | Storage | Network | Battery | Thermal | PCI | USB |
|---|---|---|---|---|---|---|---|---|---|
| Windows | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Linux | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| macOS | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
Enable additional GPU support with feature flags:
```toml
[dependencies]
hardware-query = { version = "0.2.0", features = ["nvidia", "amd", "intel"] }
```

- `nvidia`: NVIDIA GPU support via NVML
- `amd`: AMD GPU support via ROCm
- `intel`: Intel GPU support
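Once the relevant features are enabled, the capability flags shown in the GPU example above can drive backend selection at runtime. A minimal sketch; the backend labels and fallback strategy are illustrative, not part of the library:

```rust
use hardware_query::HardwareInfo;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let hw_info = HardwareInfo::query()?;

    // Pick an acceleration backend from the first GPU's reported capabilities.
    // The `supports_*` accessors are the ones used in the GPU example above.
    let backend = match hw_info.gpus().first() {
        Some(gpu) if gpu.supports_cuda() => "CUDA",
        Some(gpu) if gpu.supports_rocm() => "ROCm",
        Some(gpu) if gpu.supports_directml() => "DirectML",
        Some(_) => "CPU fallback (no supported GPU runtime)",
        None => "CPU fallback (no GPU detected)",
    };
    println!("Selected backend: {}", backend);
    Ok(())
}
```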
See the examples directory for complete usage examples:
- Simplified API (Start Here!) - Easy-to-use API for common scenarios
- Basic Hardware Detection - Fundamental hardware querying
- Enhanced Hardware - Advanced hardware analysis
- Comprehensive Hardware - Complete system analysis
- Advanced Usage - Expert-level features
- Comprehensive AI Hardware - AI/ML-specific analysis
- Enhanced Monitoring Demo - Real-time monitoring (requires the `monitoring` feature)
```bash
# Build with default features
cargo build

# Build with GPU support
cargo build --features="nvidia,amd,intel"

# Run tests
cargo test

# Try the simplified API first (recommended for new users!)
cargo run --example 01_simplified_api

# Or run the basic hardware example
cargo run --example 02_basic_hardware
```

Core dependencies:

- `sysinfo` - Cross-platform system information
- `serde` - Serialization framework
- `thiserror` - Error handling
- `num_cpus` - CPU core detection
Platform-specific and optional dependencies:

- `nvml-wrapper` - NVIDIA GPU support (feature: `nvidia`)
- `wmi` - Windows Management Instrumentation (Windows only)
- `libc` - Linux system calls (Linux only)
- `core-foundation` - macOS system APIs (macOS only)
The library is designed for performance with:
- Lazy evaluation of hardware information
- Minimal system calls
- Efficient data structures
- Optional caching for repeated queries (see the sketch below)
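As one example of the caching pattern, an application can query once and reuse the result for all later lookups. This is a downstream sketch using `std::sync::OnceLock`, not an API provided by the crate, and it assumes `HardwareInfo` is `Send + Sync`:

```rust
use std::sync::OnceLock;
use hardware_query::HardwareInfo;

// Application-level cache: the system is scanned only on first access.
static HW_INFO: OnceLock<HardwareInfo> = OnceLock::new();

fn hardware() -> &'static HardwareInfo {
    HW_INFO.get_or_init(|| {
        HardwareInfo::query().expect("hardware query failed")
    })
}

fn main() {
    // Both calls hit the same cached snapshot; only the first one scans the system.
    println!("Logical cores: {}", hardware().cpu().logical_cores());
    println!("Total memory: {:.1} GB", hardware().memory().total_gb());
}
```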
Typical query times:
- Complete hardware scan: 10-50ms
- CPU information: 1-5ms
- GPU information: 5-20ms
- Memory information: 1-3ms
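These numbers vary by platform, hardware, and enabled features; you can measure them on your own machine with `std::time::Instant`. A minimal sketch using only `HardwareInfo::query()` and the `gpus()` accessor from the examples above:

```rust
use std::time::Instant;
use hardware_query::HardwareInfo;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Time a complete hardware scan on this machine.
    let start = Instant::now();
    let hw_info = HardwareInfo::query()?;
    println!("Complete hardware scan took {:?}", start.elapsed());
    println!("Detected {} GPU(s)", hw_info.gpus().len());
    Ok(())
}
```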
The library uses comprehensive error handling:
```rust
use hardware_query::{HardwareInfo, HardwareQueryError};

match HardwareInfo::query() {
    Ok(hw_info) => {
        // Use hardware information
    }
    Err(HardwareQueryError::SystemInfoUnavailable(msg)) => {
        eprintln!("System info unavailable: {}", msg);
    }
    Err(HardwareQueryError::PermissionDenied(msg)) => {
        eprintln!("Permission denied: {}", msg);
    }
    Err(e) => {
        eprintln!("Hardware query error: {}", e);
    }
}
```

To contribute:

- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is dual-licensed under MIT OR Apache-2.0; see the LICENSE-MIT and LICENSE-APACHE files for details.
- Built on top of the excellent `sysinfo` crate
- Inspired by the need for better hardware detection in AI workload placement
- Thanks to all contributors who helped improve cross-platform compatibility