
Conversation

@holgerroth
Collaborator

Fixes # .

Description

Adds an option for cross-site evaluation to FedAvgRecipe. Tested with existing client_with_eval.py in hello-pt example.

Types of changes

  • Non-breaking change (fix or new feature that would not break existing functionality).
  • Breaking change (fix or new feature that would cause existing functionality to change).
  • New tests added to cover the changes.
  • Quick tests passed locally by running ./runtest.sh.
  • In-line docstrings updated.
  • Documentation updated.

@holgerroth
Collaborator Author

/build

@holgerroth
Collaborator Author

/build

parser.add_argument("--n_clients", type=int, default=2)
parser.add_argument("--num_rounds", type=int, default=2)
parser.add_argument("--batch_size", type=int, default=16)
parser.add_argument("--train_script", type=str, default="client.py")
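For reference, a minimal self-contained check of the defaults these arguments produce, using plain argparse (the parser is rebuilt from the diff hunk above; running the job script with no flags falls back to these values):

```python
import argparse

# Rebuild the parser from the diff hunk above to inspect its defaults.
parser = argparse.ArgumentParser()
parser.add_argument("--n_clients", type=int, default=2)
parser.add_argument("--num_rounds", type=int, default=2)
parser.add_argument("--batch_size", type=int, default=16)
parser.add_argument("--train_script", type=str, default="client.py")

# Parsing an empty argv yields the defaults, so the script still works
# out of the box while letting users point --train_script elsewhere.
args = parser.parse_args([])
print(args.n_clients, args.num_rounds, args.batch_size, args.train_script)
# → 2 2 16 client.py
```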
Collaborator


Why make this an argument? Are we going to keep changing this file?

Collaborator Author


Makes it easier to use. Previous instructions required the user to change or "overwrite" the code in client.py...

Collaborator

@chesterxgchen left a comment


I think we should be consistent: if we have cross-site eval, we should add it to all recipes, in particular pt, tensorflow, and numpy. Adding it to just one doesn't seem to serve the purpose.

job.to_server(controller)

if self.cross_site_eval:
    model_locator_id = job.to_server(PTFileModelLocator(pt_persistor_id=job.comp_ids["persistor_id"]))
Collaborator Author


I realize this adds a dependency on PT. Having a utility function to add cross-site eval to an existing recipe, similar to how we do experiment tracking, might be a better option @chesterxgchen .

Collaborator Author


I moved it to utils as

add_cross_site_evaluation(recipe, model_locator_type="pytorch")

and added an option for a numpy locator using a registry, similar to tracking @YuanTingHsieh
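A registry like the one described above might look roughly like this. All class and function names here are illustrative stand-ins to sketch the pattern, not the actual NVFlare API:

```python
# Hypothetical sketch of a model_locator_type -> locator registry,
# mirroring how experiment tracking utilities are wired up.
# PTFileModelLocator / NPModelLocator below are stand-ins, not NVFlare classes.

class PTFileModelLocator:
    """Stand-in for the PyTorch file-based model locator."""
    def __init__(self, pt_persistor_id):
        self.pt_persistor_id = pt_persistor_id

class NPModelLocator:
    """Stand-in for a NumPy model locator."""
    pass

# Registry keyed by model_locator_type; each entry builds a locator
# from the recipe's persistor component id, keeping the recipe itself
# framework-agnostic.
_LOCATOR_REGISTRY = {
    "pytorch": lambda persistor_id: PTFileModelLocator(pt_persistor_id=persistor_id),
    "numpy": lambda persistor_id: NPModelLocator(),
}

def make_model_locator(model_locator_type, persistor_id):
    """Look up and build the locator for the requested framework."""
    try:
        factory = _LOCATOR_REGISTRY[model_locator_type]
    except KeyError:
        raise ValueError(f"unknown model_locator_type: {model_locator_type!r}") from None
    return factory(persistor_id)
```

An `add_cross_site_evaluation(recipe, model_locator_type=...)` utility could then call `make_model_locator` internally, so the PT dependency only kicks in when `"pytorch"` is actually requested.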

@holgerroth holgerroth marked this pull request as draft September 17, 2025 13:49
@holgerroth
Collaborator Author

/build

@holgerroth holgerroth marked this pull request as ready for review September 19, 2025 20:27
@holgerroth
Collaborator Author

/build

@chesterxgchen
Collaborator

@holgerroth let's not add this to the 2.7.0 TP release, we are too late for this one

@holgerroth
Collaborator Author

@holgerroth let's not add this to the 2.7.0 TP release, we are too late for this one

Without the cross-site evaluation workflow, this script will not be functional: https://github.com/NVIDIA/NVFlare/blob/main/examples/hello-world/hello-pt/client_with_eval.py

@chesterxgchen
Collaborator

chesterxgchen commented Sep 21, 2025 via email

@holgerroth holgerroth marked this pull request as draft October 1, 2025 02:38
@chesterxgchen
Collaborator

@holgerroth is this still in draft status?
