
Conversation

@sunilmut
Member

In multi-VF scenarios, the adapter index resets to 0 for each VF, so the generated adapter indexes are not unique across VFs and will collide when multiple VFs are present.

Serialize adapter index generation across VFs.

@sunilmut sunilmut requested a review from a team as a code owner January 17, 2026 00:01
Copilot AI review requested due to automatic review settings January 17, 2026 00:01
Contributor

Copilot AI left a comment


Pull request overview

This pull request addresses an issue where network adapter indexes were not unique across multiple VF (Virtual Function) managers in multi-VF scenarios. Previously, each VF manager independently generated adapter indexes starting from 0, causing collisions when multiple VFs were present.
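
To make the collision concrete, here is a minimal, self-contained sketch of the approach (illustrative only, not the actual OpenHCL code): a single atomic counter shared by all VF managers, in place of each manager numbering its vports independently with enumerate().

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicU32, Ordering};

// Hypothetical stand-in for the shared allocator described above: one
// counter for all VF managers instead of a per-VF enumerate() that
// restarts at 0 for every VF.
struct SharedIndex(AtomicU32);

impl SharedIndex {
    // fetch_add returns the previous value, so each call reserves a
    // unique index atomically.
    fn next(&self) -> u32 {
        self.0.fetch_add(1, Ordering::Relaxed)
    }
}

fn main() {
    // In the real code each VF manager would hold a clone of this Arc.
    let allocator = Arc::new(SharedIndex(AtomicU32::new(1)));
    // Two VF managers with two vports each now see indexes 1..=4,
    // where the old per-VF scheme produced 0 and 1 twice.
    for vf in 0..2 {
        for vport in 0..2 {
            println!("VF{vf} vport{vport} -> adapter index {}", allocator.next());
        }
    }
}
```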

Changes:

  • Introduces a shared NetworkAdapterIndex struct using atomic operations to serialize adapter index generation across all VF managers
  • Persists the adapter index counter in servicing state to maintain consistency across VM restarts (see the sketch after this list)
  • Updates all VF creation code paths to use the centralized index allocator
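
As a rough illustration of the persistence bullet above, here is a minimal sketch of the save/restore round trip. SavedState, save, and restore are hypothetical stand-ins for the real servicing types; the counter type mirrors the NetworkAdapterIndex struct quoted in the review comment below.

```rust
use std::sync::atomic::{AtomicU32, Ordering};

// Mirrors the NetworkAdapterIndex type from the diff quoted below.
pub struct NetworkAdapterIndex {
    index: AtomicU32,
}

impl NetworkAdapterIndex {
    pub fn new(initial_value: Option<u32>) -> Self {
        // Start at 1 because some guest code treats serial number 0 as invalid.
        Self { index: AtomicU32::new(initial_value.unwrap_or(1)) }
    }

    pub fn get(&self) -> u32 {
        self.index.load(Ordering::Relaxed)
    }
}

// Hypothetical servicing round trip: persist the counter on save and
// seed the allocator from it on restore, so post-restore allocations
// continue where the previous VM instance left off.
struct SavedState {
    network_adapter_index: Option<u32>,
}

fn save(index: &NetworkAdapterIndex) -> SavedState {
    SavedState { network_adapter_index: Some(index.get()) }
}

fn restore(state: &SavedState) -> NetworkAdapterIndex {
    NetworkAdapterIndex::new(state.network_adapter_index)
}
```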

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 3 comments.

| File | Description |
| --- | --- |
| openhcl/underhill_core/src/emuplat/netvsp.rs | Adds the NetworkAdapterIndex struct with an atomic counter; replaces per-VF enumerate() with shared next() calls |
| openhcl/underhill_core/src/servicing.rs | Adds a network_adapter_index field to servicing state for persistence across VM lifecycles |
| openhcl/underhill_core/src/dispatch/mod.rs | Threads the NetworkAdapterIndex parameter through trait methods and adds it to the LoadedVm struct |
| openhcl/underhill_core/src/worker.rs | Creates the NetworkAdapterIndex instance and passes it to all network initialization functions |

Comment on lines 1111 to 1132
pub struct NetworkAdapterIndex {
    index: AtomicU32,
}

impl NetworkAdapterIndex {
    pub fn new(initial_value: Option<u32>) -> Self {
        Self {
            // The adapter index is used to generate the serial number for
            // the guest, and various guest code treats a serial number of
            // '0' as invalid. Start at 1 to avoid that.
            index: AtomicU32::new(initial_value.unwrap_or(1)),
        }
    }

    // fetch_add returns the previous value, so each call reserves the
    // current index and atomically advances the counter.
    pub fn next(&self) -> u32 {
        self.index.fetch_add(1, Ordering::Relaxed)
    }

    pub fn get(&self) -> u32 {
        self.index.load(Ordering::Relaxed)
    }
}

Copilot AI Jan 17, 2026


The adapter index allocation changes across servicing/restore cycles, which may cause issues with guest OS device identification. When the VM is restored from saved state, all network adapters are recreated and will receive new adapter indexes that are different from their pre-save values.

For example, if VF1's vports had adapter indexes 1 and 2 before servicing, they will receive indexes 5 and 6 after restore (assuming the counter was saved at 5). Since adapter indexes are used to generate VF serial numbers presented to the guest OS (see get_guest_vf_serial_number in netvsp), changing these values across servicing could cause the guest to misidentify network adapters, especially in keep-alive scenarios where the guest expects devices to remain stable.

Consider either: (1) saving the allocated adapter indexes per VF and restoring them explicitly, or (2) documenting that adapter index changes across servicing are expected and acceptable for the guest OS.
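
A minimal sketch of option (1), with hypothetical names throughout (SavedVfState and index_for_vport are illustrative, not actual OpenHCL types): reuse a vport's pre-servicing index when one was saved, and fall back to the shared allocator only for vports that did not exist before servicing.

```rust
use std::collections::HashMap;

// Hypothetical saved state: each vport's pre-servicing adapter index,
// keyed by a stable vport identity.
struct SavedVfState {
    adapter_indexes: HashMap<String, u32>,
}

// Prefer the saved index so the guest-visible serial number stays
// stable across servicing; allocate (e.g. via NetworkAdapterIndex::next)
// only for vports with no saved entry.
fn index_for_vport(
    saved: Option<&SavedVfState>,
    vport_id: &str,
    allocate: impl FnOnce() -> u32,
) -> u32 {
    saved
        .and_then(|s| s.adapter_indexes.get(vport_id).copied())
        .unwrap_or_else(allocate)
}
```

With this shape, the keep-alive scenario described above keeps its pre-save indexes 1 and 2 rather than being renumbered to 5 and 6.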

erfrimod previously approved these changes Jan 20, 2026