
Conversation

@arshnrr (Contributor) commented May 1, 2025

Introduced a new utility function, compare_annotations, in the spac.transformations module. It computes pairwise similarity metrics between multiple clustering annotations in an AnnData object and visualizes the results as a heatmap.

Write up:
spac.transformations.compare_annotations(adata: AnnData, annotation_list: list, metric: str = "adjusted_rand_score") → plotly.graph_objects.Figure
Compare multiple cluster annotations using a similarity metric and visualize the result as a heatmap.
This function computes pairwise similarity scores between specified cluster annotations in an AnnData object using either the Adjusted Rand Index or Normalized Mutual Information. It stores the resulting similarity matrix and the list of compared annotations in .uns and returns a heatmap of the scores.
Parameters
adata (AnnData) – The input AnnData object containing cluster annotations in .obs.
annotation_list (list) – A list of column names from .obs representing different cluster annotations to compare.
metric (str, optional) – The metric used for comparison. Must be either "adjusted_rand_score" or "normalized_mutual_info_score". Default is "adjusted_rand_score".
Returns
A heatmap figure visualizing the similarity scores between all pairs of annotations.
Return type
plotly.graph_objects.Figure

@abombin (Collaborator) commented Jun 6, 2025

I think it looks fine.
One adjustment that might be useful is to let the user define the name of the adata.uns slot where the similarity matrix is stored, instead of hard-coding adata.uns["compare_annotations"].
This would also allow saving information about which index was calculated.
Or perhaps implement some other approach for storing this information.

@abombin abombin requested a review from Copilot June 6, 2025 16:28
Copilot AI left a comment


Pull Request Overview

This PR introduces a new utility function, compare_annotations, to compute pairwise similarity metrics between clustering annotations in an AnnData object and generate a heatmap visualization, along with corresponding unit tests.

  • Added compare_annotations in spac/transformations.py to compute similarity scores using adjusted_rand_score or normalized_mutual_info_score.
  • Implemented comprehensive unit tests in tests/test_transformations/test_compare_annotations.py to validate input handling, metric calculations, and heatmap generation.

Reviewed Changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated no comments.

Reviewed files:

  • tests/test_transformations/test_compare_annotations.py: Added unit tests covering error conditions, metric calculations, and heatmap outputs
  • src/spac/transformations.py: Added compare_annotations function to compute similarity metrics and produce a heatmap
Comments suppressed due to low confidence (3)

src/spac/transformations.py:1198

  • The return type in the docstring incorrectly mentions 'plt.figure'. Since the function returns a Plotly Figure, update the docstring to reflect the correct return type (plotly.graph_objects.Figure).
fig : plt.figure

src/spac/transformations.py:1234

  • The numpy module is used via 'np.array' but numpy is not imported in this file. Add 'import numpy as np' at the top of the file.
matrix_final = np.array(matrix)

src/spac/transformations.py:1209

  • The function 'check_annotation' is used without being defined or imported in this module. Ensure it is properly imported or defined.
check_annotation(adata, annotations=ann)
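Taken together, the three suppressed comments amount to fixes near the top of src/spac/transformations.py: correct the docstring return type, add the missing numpy import, and import (or define) check_annotation. A sketch of what the corrected header region might look like; the body of check_annotation here is a stand-in, since its real implementation lives elsewhere in spac:

```python
import numpy as np  # fix 2: numpy was used via np.array but never imported


def check_annotation(adata, annotations):
    """Stand-in for spac's check_annotation (fix 3: it must be imported
    or defined in the module). This version only checks .obs membership."""
    names = [annotations] if isinstance(annotations, str) else list(annotations)
    missing = [a for a in names if a not in adata.obs.columns]
    if missing:
        raise ValueError(f"Annotations not found in adata.obs: {missing}")


# Fix 1: the compare_annotations docstring should declare the return type as
#
#     fig : plotly.graph_objects.Figure
#
# rather than plt.figure, since the function returns a Plotly figure.
```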

@ramyap06 mentioned this pull request Oct 10, 2025