
Conversation


@chris-absmartly chris-absmartly commented Dec 12, 2025

Release notes for December 2025.

Summary by CodeRabbit

  • Documentation

    • Published December 2025 release notes covering metrics governance, guidance to update metrics missing metadata, and planned enhancements.
  • New Features

    • Metric versioning foundations with editable vs non-editable fields.
    • Enhanced metric metadata (Unit, Application, Metric Category).
    • New Metric View and redesigned metric selection for improved discoverability and smarter defaults.
    • Coming soon: CUPED support, metric lifecycle, approval workflows and usage reporting.

✏️ Tip: You can customise this high-level summary in your review settings.


netlify bot commented Dec 12, 2025

Deploy Preview for absmartly-docs ready!

🔨 Latest commit: 3b927aa
🔍 Latest deploy log: https://app.netlify.com/projects/absmartly-docs/deploys/6941332a3eecad00084cf354
😎 Deploy Preview: https://deploy-preview-239--absmartly-docs.netlify.app

To edit notification comments on pull requests, go to your Netlify project configuration.


coderabbitai bot commented Dec 12, 2025

Walkthrough

A new MDX release notes file was added at docs/platform-release-notes/2025/12.mdx describing December 2025 updates focused on metrics governance. It documents General improvements (a Metric Categories type and new metric metadata fields such as Unit type, Application, and Metric category), a Metric View page, Improved Metric Discoverability (a usability redesign of metric selection, smarter default metrics, usage insights), Metric Versioning foundations (versioning 1.0, editable vs non-editable fields, new endpoints), and a What’s Next section (CUPED support, metric lifecycle, approval workflows, usage reporting). Guidance for updating existing metrics with missing metadata is included.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

  • Documentation-only addition (single MDX file)
  • No code, exported or public API changes
  • Review focus: frontmatter/MDX formatting, grammar, and consistency with existing release notes

Potential attention points:

  • Ensure MDX frontmatter and styling match existing release notes
  • Verify consistent terminology for Metric Categories, Metric View, CUPED, and metric versioning

Suggested reviewers

  • marcio-absmartly
  • mario-silva
  • bmsilva
  • calthejuggler

Poem

🐰✨ I hopped through notes in crisp December light,
Categories aligned and metadata set right,
Versions queued, discoverability bright,
CUPED whispers next in the night,
A rabbit cheers — metrics take flight!

Pre-merge checks and finishing touches

❌ Failed checks (1 inconclusive)
Title check: ❓ Inconclusive. The title 'december RN' is vague and uses non-descriptive abbreviations that don't clearly convey the changeset content to someone scanning history. Resolution: expand the title to be more descriptive, such as 'Add December 2025 platform release notes with metrics governance updates' to provide meaningful context.
✅ Passed checks (2 passed)
Description Check: ✅ Passed. Check skipped - CodeRabbit’s high-level summary is enabled.
Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.
✨ Finishing touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch december25-release-notes

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4030932 and 2c74684.

📒 Files selected for processing (1)
  • docs/platform-release-notes/2025/12.mdx (1 hunks)
🧰 Additional context used
🪛 LanguageTool
docs/platform-release-notes/2025/12.mdx

[uncategorized] ~13-~13: Possible missing comma found.
Context: ...We've made some general improvements to metric which you will see across the platform....

(AI_HYDRA_LEO_MISSING_COMMA)


[style] ~16-~16: Would you like to use the Oxford spelling “categorize”? The spelling ‘categorise’ is also correct.
Context: ...onfiguration types which can be used to categorise and group metrics. This new metric cate...

(OXFORD_SPELLING_Z_NOT_S)


[uncategorized] ~19-~19: Loose punctuation mark.
Context: ...your ABsmartly instance: - Conversion: Measures whether users complete a desir...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~20-~20: Loose punctuation mark.
Context: ...s complete a desired action. - Revenue: Captures direct monetary impact. - `Eng...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~21-~21: Loose punctuation mark.
Context: ...s direct monetary impact. - Engagement: Reflects how actively users interact wi...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~22-~22: Loose punctuation mark.
Context: ...interact with the product. - Retention: Shows whether users come back or contin...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~23-~23: Loose punctuation mark.
Context: ...g the product over time. - Performance: Measures speed and responsiveness, such...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~24-~24: Loose punctuation mark.
Context: ...as load time or latency. - Reliability: Tracks stability and correctness, inclu...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~25-~25: Loose punctuation mark.
Context: ..., failures, or availability. - Quality: Represents outcome quality or user expe...

(UNLIKELY_OPENING_PUNCTUATION)


[typographical] ~31-~31: After the expression ‘for example’ a comma is usually used.
Context: ...on(s) where this metric make sense. For example an app_crashes metrics only makes sen...

(COMMA_FOR_EXAMPLE)


[uncategorized] ~34-~34: Use a comma before ‘but’ if it connects two independent clauses (unless they are closely connected and short).
Context: ...ee above. All those fields are optional but we recommend you update your existing m...

(COMMA_COMPOUND_SENTENCE)


[grammar] ~48-~48: Did you mean “totally redesigned” or “to totally redesign”?
Context: ...nts. ### Usability improvement We have totally redesign the metric selection step of the experi...

(HAVE_VB_DT)


[uncategorized] ~70-~70: Possible missing article found.
Context: ...inition of a metric change. - Creating new version of a metric will not impact pas...

(AI_HYDRA_LEO_MISSING_A)


[style] ~72-~72: Consider using “outdated” to strengthen your wording.
Context: ...ures cannot be started when they use an old version of a metric. Experimenters will...

(OLD_VERSION)


[uncategorized] ~75-~75: Possible missing comma found.
Context: ...s New Version With the launch of metric versioning some fields can be edited in the curren...

(AI_HYDRA_LEO_MISSING_COMMA)


[misspelling] ~75-~75: It seems that the plural noun “others” fits better in this context.
Context: ...n the current version of the metric and other will require a new version to be create...

(OTHER_OTHERS)


[style] ~75-~75: Consider a more concise word here.
Context: ...ill require a new version to be created in order to be updated. - Editable fields: Fie...

(IN_ORDER_TO_PREMIUM)


[typographical] ~80-~80: Consider adding a comma after this introductory phrase.
Context: ...le to change those fields. As a metric owner you will be able to edit and **crea...

(AS_A_NN_COMMA)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Redirect rules - absmartly-docs
  • GitHub Check: Yarn Build
  • GitHub Check: Header rules - absmartly-docs
  • GitHub Check: Pages changed - absmartly-docs


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (2)
docs/platform-release-notes/2025/12.mdx (2)

1-1: Remove unused Image import (or use it).
Image is imported but not used; this can break MDX lint/build depending on rules.

-import Image from "../../../src/components/Image";

13-58: Fix remaining typos/grammar (public release notes).
A few issues read as unpolished / ambiguous (“configuration types”, “experimemts”, “temmplates”, “throught”, subject–verb agreement, casing). Suggested minimal edits:

-We've made some general improvements to Metrics, which you will see across the platform.
+We've made some general improvements to metrics that you’ll see across the platform.

-We've added a new configuration types which will help categorise and group metrics. Those new metric categories will make it easier to find the right metrics when creating an experiment.
+We've added a new configuration type that helps categorise and group metrics. These new metric categories make it easier to find the right metrics when creating an experiment.

-### New metric's metadata fields  
-We've added some new metadata fields to metrics which will help with discoverability and filtering of metrics across the platform. This includes:
+### New metrics metadata fields
+We've added new metadata fields to metrics that help with discoverability and filtering across the platform. This includes:

-- **Unit type**: This is the list of Unit type(s) for which this metric is computed. Setting the correct Unit type(s) will help experimenters choose the right metric for their experiments. (e.g. user_id, device_id)
-- **Application**: This is the list of Application(s) where this metric make sense. For example, an `app_crashes` metrics only makes sense for experimemts running on app platforms. 
+- **Unit type**: This is the list of unit type(s) for which this metric is computed. Setting the correct unit type(s) will help experimenters choose the right metric for their experiments (e.g. `user_id`, `device_id`).
+- **Application**: This is the list of application(s) where this metric makes sense. For example, an `app_crashes` metric only makes sense for experiments running on app platforms.

-All those fields are optional, but we recommend you update your existing metrics as this will improve general discoverability of your metrics.
+All these fields are optional, but we recommend updating your existing metrics, as this will improve overall discoverability.

-We’ve made it easier to find, understand, and select the right metrics when creating your experiments/temmplates/features.
+We’ve made it easier to find, understand, and select the right metrics when creating your experiments/templates/features.

-Metrics can now also easily be searched by name, tags, owners, etc so you don't have to scroll throught your long list of existing metrics to find what you are looking for.
+Metrics can now also be searched by name, tags, owners, etc., so you don't have to scroll through a long list of existing metrics to find what you’re looking for.
🧹 Nitpick comments (1)
docs/platform-release-notes/2025/12.mdx (1)

62-89: Tighten “versioning 1.0” wording + API caution (avoid policy confusion).
Currently “only latest discoverable” + “cannot be started when outdated version used” can be read as “older versions are invisible but block starts”. Consider clarifying where old versions can be seen, and fix a couple of grammatical issues.

-This can be done, for example, when the definition of a metric change.
+This can be done, for example, when the definition of a metric changes.

-- Only the latest version of a metric will be discoverable and can be added to new experiments. Experimenters will only be able to see the latest version of each metric.
+- Only the latest version of a metric can be added to new experiments (older versions remain visible on the Metric View page for reference).

-As a metric owner, you will be able to **edit** and **create new version** from the new Metric view page.
+As a metric owner, you will be able to **edit** and **create a new version** from the new Metric View page.

-If you are using our API to edit your metrics, you will need you update your script as you will no longer be able to edit all metric fields using the edit end-point.
+If you are using our API to edit your metrics, you will need to update your scripts as you will no longer be able to edit all metric fields using the edit endpoint.

-A new end-point for creating new metric version is now available if needed.
+A new endpoint for creating a new metric version is now available if needed.
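
For illustration only, a minimal sketch of the kind of script change this caution implies; the base URL, endpoint paths, payload shape, and auth header below are assumptions, not the documented ABsmartly API:

```typescript
// Hypothetical sketch: endpoint paths, payload fields, and auth scheme are
// assumptions for illustration, not the documented ABsmartly API.
const API_BASE = "https://example.absmartly.io/v1"; // assumed base URL
const headers = {
  "Content-Type": "application/json",
  Authorization: `Api-Key ${process.env.ABSMARTLY_API_KEY}`, // assumed auth header
};

// Editable fields (e.g. description, tags) could still go through the edit endpoint.
async function updateMetricMetadata(
  metricId: number,
  patch: { description?: string; tags?: string[] },
) {
  const res = await fetch(`${API_BASE}/metrics/${metricId}`, {
    method: "PUT",
    headers,
    body: JSON.stringify(patch),
  });
  if (!res.ok) throw new Error(`Metric edit failed: ${res.status}`);
  return res.json();
}

// Non-editable fields (e.g. the metric definition) would instead require a
// separate create-version call rather than the edit endpoint.
async function createMetricVersion(metricId: number, definition: unknown) {
  const res = await fetch(`${API_BASE}/metrics/${metricId}/versions`, {
    method: "POST",
    headers,
    body: JSON.stringify({ definition }),
  });
  if (!res.ok) throw new Error(`Version creation failed: ${res.status}`);
  return res.json();
}
```

Whatever the real endpoint shapes are, the point of the caution stands: scripts that previously sent every field to the edit endpoint need to split non-editable changes out into a separate version-creation call.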
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c9bde7b and e44187c.

📒 Files selected for processing (1)
  • docs/platform-release-notes/2025/12.mdx (1 hunks)
🧰 Additional context used
🪛 LanguageTool
docs/platform-release-notes/2025/12.mdx

[style] ~16-~16: Would you like to use the Oxford spelling “categorize”? The spelling ‘categorise’ is also correct.
Context: ...new configuration types which will help categorise and group metrics. Those new metric cat...

(OXFORD_SPELLING_Z_NOT_S)


[uncategorized] ~20-~20: Loose punctuation mark.
Context: ...n add to your ABsmartly: - Conversion: Measures whether users complete a desir...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~21-~21: Loose punctuation mark.
Context: ...s complete a desired action. - Revenue: Captures direct monetary impact. - `Eng...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~22-~22: Loose punctuation mark.
Context: ...s direct monetary impact. - Engagement: Reflects how actively users interact wi...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~23-~23: Loose punctuation mark.
Context: ...interact with the product. - Retention: Shows whether users come back or contin...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~24-~24: Loose punctuation mark.
Context: ...g the product over time. - Performance: Measures speed and responsiveness, such...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~25-~25: Loose punctuation mark.
Context: ...as load time or latency. - Reliability: Tracks stability and correctness, inclu...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~26-~26: Loose punctuation mark.
Context: ..., failures, or availability. - Quality: Represents outcome quality or user expe...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~29-~29: Possible missing comma found.
Context: ...We've added some new metadata fields to metrics which will help with discoverability an...

(AI_HYDRA_LEO_MISSING_COMMA)


[uncategorized] ~35-~35: Possible missing comma found.
Context: ...t we recommend you update your existing metrics as this will improve general discoverab...

(AI_HYDRA_LEO_MISSING_COMMA)


[uncategorized] ~67-~67: Possible missing comma found.
Context: ...versioning is a critical part of metric governance as it allows for a metric to evolve ove...

(AI_HYDRA_LEO_MISSING_COMMA)


[uncategorized] ~78-~78: Possible missing comma found.
Context: ...be edited in the current version of the metric while others will require a new version...

(AI_HYDRA_LEO_MISSING_COMMA)


[uncategorized] ~83-~83: Possible missing article found.
Context: ...u will be able to edit and create new version from the new Metric view page...

(AI_HYDRA_LEO_MISSING_A)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Redirect rules - absmartly-docs
  • GitHub Check: Header rules - absmartly-docs
  • GitHub Check: Pages changed - absmartly-docs
  • GitHub Check: Yarn Build


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
docs/platform-release-notes/2025/12.mdx (1)

1-1: Remove unused Image import.

The Image component is imported but never used in this file, which may cause MDX build/lint failures depending on your setup configuration.

-import Image from "../../../src/components/Image";
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e44187c and 3a76fd0.

📒 Files selected for processing (1)
  • docs/platform-release-notes/2025/12.mdx (1 hunks)
🧰 Additional context used
🪛 LanguageTool
docs/platform-release-notes/2025/12.mdx

[style] ~16-~16: Would you like to use the Oxford spelling “categorize”? The spelling ‘categorise’ is also correct.
Context: ...new configuration types which will help categorise and group metrics. Those new metric cat...

(OXFORD_SPELLING_Z_NOT_S)


[uncategorized] ~20-~20: Loose punctuation mark.
Context: ...n add to your ABsmartly: - Conversion: Measures whether users complete a desir...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~21-~21: Loose punctuation mark.
Context: ...s complete a desired action. - Revenue: Captures direct monetary impact. - `Eng...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~22-~22: Loose punctuation mark.
Context: ...s direct monetary impact. - Engagement: Reflects how actively users interact wi...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~23-~23: Loose punctuation mark.
Context: ...interact with the product. - Retention: Shows whether users come back or contin...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~24-~24: Loose punctuation mark.
Context: ...g the product over time. - Performance: Measures speed and responsiveness, such...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~25-~25: Loose punctuation mark.
Context: ...as load time or latency. - Reliability: Tracks stability and correctness, inclu...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~26-~26: Loose punctuation mark.
Context: ..., failures, or availability. - Quality: Represents outcome quality or user expe...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~29-~29: Possible missing comma found.
Context: ...We've added some new metadata fields to metrics which will help with discoverability an...

(AI_HYDRA_LEO_MISSING_COMMA)


[uncategorized] ~35-~35: Possible missing comma found.
Context: ...t we recommend you update your existing metrics as this will improve general discoverab...

(AI_HYDRA_LEO_MISSING_COMMA)


[uncategorized] ~69-~69: Possible missing comma found.
Context: ...versioning is a critical part of metric governance as it allows for a metric to evolve ove...

(AI_HYDRA_LEO_MISSING_COMMA)


[uncategorized] ~80-~80: Possible missing comma found.
Context: ...be edited in the current version of the metric while others will require a new version...

(AI_HYDRA_LEO_MISSING_COMMA)


[uncategorized] ~85-~85: Possible missing article found.
Context: ...u will be able to edit and create new version from the new Metric view page...

(AI_HYDRA_LEO_MISSING_A)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Redirect rules - absmartly-docs
  • GitHub Check: Header rules - absmartly-docs
  • GitHub Check: Pages changed - absmartly-docs
  • GitHub Check: Yarn Build


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (5)
docs/platform-release-notes/2025/12.mdx (5)

45-45: Fix typo: "temmplates" → "templates".

Apply this diff:

-We've made it easier to find, understand, and select the right metrics when creating your experiments/temmplates/features.
+We've made it easier to find, understand, and select the right metrics when creating your experiments/templates/features.

60-60: Fix typo: "throught" → "through".

Apply this diff:

-Metrics can now also easily be searched by name, tags, owners, etc so you don't have to scroll throught your long list of existing metrics to find what you are looking for.
+Metrics can now also easily be searched by name, tags, owners, etc so you don't have to scroll through your long list of existing metrics to find what you are looking for.

92-95: Fix grammar errors in API caution section.

  • Line 92: Missing article "a" before "new version"
  • Line 95: Remove extra "you" - should be "need to update"

Apply this diff:

-As a metric owner, you will be able to **edit** and **create new version** from the new Metric view page.
+As a metric owner, you will be able to **edit** and **create a new version** from the new Metric view page.
 
 :::caution
-If you are using our API to edit your metrics, you will need you update your script as you will no longer be able to edit all metric fields using the edit end-point.
+If you are using our API to edit your metrics, you will need to update your script as you will no longer be able to edit all metric fields using the edit end-point.
 
 A new end-point for creating new metric versions is now available if needed.
 :::

28-32: Fix grammar and typos in metadata fields section.

Several issues need correction:

  • Line 28: Remove the possessive apostrophe
  • Line 32: Fix typo "experimemts" → "experiments", change "make sense" → "makes sense", and "metrics" → "metric"

Apply this diff:

-### New metric's metadata fields  
+### New metric metadata fields  
 We've added new metadata fields to metrics that help with discoverability and filtering across the platform. This includes:
 
 - **Unit type**: This is the list of Unit type(s) for which this metric is computed. Setting the correct Unit type(s) will help experimenters choose the right metric for their experiments. (e.g. user_id, device_id)
-- **Application**: This is the list of Application(s) where this metric make sense. For example, an `app_crashes` metrics only makes sense for experimemts running on app platforms. 
+- **Application**: This is the list of Application(s) where this metric makes sense. For example, an `app_crashes` metric only makes sense for experiments running on app platforms. 
 - **Metric category**: This is the category the metric belongs to. This will make your metric more discoverable. See above.

76-80: Fix grammar errors in versioning section.

  • Line 76: "overtime" should be two words: "over time"
  • Line 80: "change" should be "changes" for subject-verb agreement

Apply this diff:

-Metric versioning is a critical part of metric governance as it allows for a metric to evolve overtime without risking impacting previous experiments and decisions made using an older version of that metric.
+Metric versioning is a critical part of metric governance as it allows for a metric to evolve over time without risking impacting previous experiments and decisions made using an older version of that metric.
 
 ### Metric versioning 1.0
 It is now possible for metric owners to create a new version of an existing metric. 
-This can be done, for example, when the definition of a metric change.
+This can be done, for example, when the definition of a metric changes.
🧹 Nitpick comments (1)
docs/platform-release-notes/2025/12.mdx (1)

35-35: Consider adding commas for improved readability.

Whilst not strictly required, adding commas in these locations would improve clarity:

  • Line 35: After "metrics" before "as"
  • Line 76: After "governance" before "as"
  • Line 87: After "metric" before "while"

Apply this diff:

-All those fields are optional, but we recommend you update your existing metrics as this will improve general discoverability of your metrics.
+All those fields are optional, but we recommend you update your existing metrics, as this will improve general discoverability of your metrics.
-Metric versioning is a critical part of metric governance as it allows for a metric to evolve over time without risking impacting previous experiments and decisions made using an older version of that metric.
+Metric versioning is a critical part of metric governance, as it allows for a metric to evolve over time without risking impacting previous experiments and decisions made using an older version of that metric.
-With the launch of metric versioning, some fields can be edited in the current version of the metric while others will require a new version to be created.
+With the launch of metric versioning, some fields can be edited in the current version of the metric, while others will require a new version to be created.

Also applies to: 76-76, 87-87

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3a76fd0 and 0ef146c.

⛔ Files ignored due to path filters (1)
  • static/img/experiment-create/metric-selection.png is excluded by !**/*.png
📒 Files selected for processing (1)
  • docs/platform-release-notes/2025/12.mdx (1 hunks)
🧰 Additional context used
🪛 LanguageTool
docs/platform-release-notes/2025/12.mdx

[style] ~16-~16: Would you like to use the Oxford spelling “categorize”? The spelling ‘categorise’ is also correct.
Context: ...ded a new configuration type that helps categorise and group metrics. Those new metric cat...

(OXFORD_SPELLING_Z_NOT_S)


[uncategorized] ~20-~20: Loose punctuation mark.
Context: ...n add to your ABsmartly: - Conversion: Measures whether users complete a desir...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~21-~21: Loose punctuation mark.
Context: ...s complete a desired action. - Revenue: Captures direct monetary impact. - `Eng...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~22-~22: Loose punctuation mark.
Context: ...s direct monetary impact. - Engagement: Reflects how actively users interact wi...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~23-~23: Loose punctuation mark.
Context: ...interact with the product. - Retention: Shows whether users come back or contin...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~24-~24: Loose punctuation mark.
Context: ...g the product over time. - Performance: Measures speed and responsiveness, such...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~25-~25: Loose punctuation mark.
Context: ...as load time or latency. - Reliability: Tracks stability and correctness, inclu...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~26-~26: Loose punctuation mark.
Context: ..., failures, or availability. - Quality: Represents outcome quality or user expe...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~35-~35: Possible missing comma found.
Context: ...t we recommend you update your existing metrics as this will improve general discoverab...

(AI_HYDRA_LEO_MISSING_COMMA)


[uncategorized] ~76-~76: Possible missing comma found.
Context: ...versioning is a critical part of metric governance as it allows for a metric to evolve ove...

(AI_HYDRA_LEO_MISSING_COMMA)


[uncategorized] ~87-~87: Possible missing comma found.
Context: ...be edited in the current version of the metric while others will require a new version...

(AI_HYDRA_LEO_MISSING_COMMA)


[uncategorized] ~92-~92: Possible missing article found.
Context: ...u will be able to edit and create new version from the new Metric view page...

(AI_HYDRA_LEO_MISSING_A)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Yarn Build
🔇 Additional comments (1)
docs/platform-release-notes/2025/12.mdx (1)

1-1: The Image import is correctly used.

The previous review comment suggesting this import is unused is incorrect. The Image component is used on lines 47-52 for displaying the metric selection screenshot.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (2)
docs/platform-release-notes/2025/12.mdx (2)

31-33: Fix typos and grammar for professional release notes.

Multiple issues need correction on these lines:

  • Line 32: "make sense" → "makes sense" (subject-verb agreement)
  • Line 32: "metrics" → "metric" (singular)
  • Line 32: "experimemts" → "experiments" (typo)
-- **Unit type**: This is the list of Unit type(s) for which this metric is computed. Setting the correct Unit type(s) will help experimenters choose the right metric for their experiments. (e.g. user_id, device_id)
-- **Application**: This is the list of Application(s) where this metric make sense. For example, an `app_crashes` metrics only makes sense for experimemts running on app platforms. 
+- **Unit type**: This is the list of unit type(s) for which this metric is computed. Setting the correct unit type(s) will help experimenters choose the right metric for their experiments (e.g. `user_id`, `device_id`).
+- **Application**: This is the list of application(s) where this metric makes sense. For example, an `app_crashes` metric only makes sense for experiments running on app platforms. 
 - **Metric category**: This is the category the metric belongs to. This will make your metric more discoverable. See above.

102-102: Fix grammatical error in caution message.

"need you update" should be "need to update" — the infinitive marker "to" is missing.

-If you are using our API to edit your metrics, you will need you update your script as you will no longer be able to edit all metric fields using the edit end-point.
+If you are using our API to edit your metrics, you will need to update your script as you will no longer be able to edit all metric fields using the edit end-point.
🧹 Nitpick comments (5)
docs/platform-release-notes/2025/12.mdx (5)

16-16: Consider "These" for better proximity.

"Those" is grammatically correct, but "These" would read more naturally here since you're introducing the categories immediately below.

-We've added a new configuration type that helps categorise and group metrics. Those new metric categories will make it easier to find the right metrics when creating an experiment.
+We've added a new configuration type that helps categorise and group metrics. These new metric categories will make it easier to find the right metrics when creating an experiment.

28-28: Remove unnecessary possessive.

"New metric metadata fields" reads more naturally than "New metric's metadata fields" in this context.

-### New metric's metadata fields  
+### New metric metadata fields  

35-35: Add comma for clarity.

A comma after "optional" improves readability when introducing a contrasting clause.

-All those fields are optional, but we recommend you update your existing metrics as this will improve general discoverability of your metrics.
+All those fields are optional, but we recommend you update your existing metrics, as this will improve general discoverability of your metrics.

76-76: Add commas for improved readability.

Adding commas after "governance" (line 76) and "metric" (line 94) improves flow in these longer sentences.

-Metric versioning is a critical part of metric governance as it allows for a metric to evolve overtime without risking impacting previous experiments and decisions made using an older version of that metric.
+Metric versioning is a critical part of metric governance, as it allows for a metric to evolve overtime without risking impacting previous experiments and decisions made using an older version of that metric.
-With the launch of metric versioning, some fields can be edited in the current version of the metric while others will require a new version to be created.
+With the launch of metric versioning, some fields can be edited in the current version of the metric, while others will require a new version to be created.

Also applies to: 94-94


87-87: Fix verb tense and missing article.

Two minor grammar issues:

  • Line 87: "change" → "changes" (verb agreement)
  • Line 99: "create new version" → "create a new version" (missing article)
-This can be done, for example, when the definition of a metric change.
+This can be done, for example, when the definition of a metric changes.
-As a metric owner, you will be able to **edit** and **create new version** from the new Metric view page.
+As a metric owner, you will be able to **edit** and **create a new version** from the new Metric view page.

Also applies to: 99-99

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0ef146c and bfc51f6.

⛔ Files ignored due to path filters (1)
  • static/img/metric/metric-view.png is excluded by !**/*.png
📒 Files selected for processing (1)
  • docs/platform-release-notes/2025/12.mdx (1 hunks)
🧰 Additional context used
🪛 LanguageTool
docs/platform-release-notes/2025/12.mdx

[style] ~16-~16: Would you like to use the Oxford spelling “categorize”? The spelling ‘categorise’ is also correct.
Context: ...ded a new configuration type that helps categorise and group metrics. Those new metric cat...

(OXFORD_SPELLING_Z_NOT_S)


[uncategorized] ~20-~20: Loose punctuation mark.
Context: ...n add to your ABsmartly: - Conversion: Measures whether users complete a desir...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~21-~21: Loose punctuation mark.
Context: ...s complete a desired action. - Revenue: Captures direct monetary impact. - `Eng...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~22-~22: Loose punctuation mark.
Context: ...s direct monetary impact. - Engagement: Reflects how actively users interact wi...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~23-~23: Loose punctuation mark.
Context: ...interact with the product. - Retention: Shows whether users come back or contin...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~24-~24: Loose punctuation mark.
Context: ...g the product over time. - Performance: Measures speed and responsiveness, such...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~25-~25: Loose punctuation mark.
Context: ...as load time or latency. - Reliability: Tracks stability and correctness, inclu...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~26-~26: Loose punctuation mark.
Context: ..., failures, or availability. - Quality: Represents outcome quality or user expe...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~35-~35: Possible missing comma found.
Context: ...t we recommend you update your existing metrics as this will improve general discoverab...

(AI_HYDRA_LEO_MISSING_COMMA)


[uncategorized] ~76-~76: Possible missing comma found.
Context: ...versioning is a critical part of metric governance as it allows for a metric to evolve ove...

(AI_HYDRA_LEO_MISSING_COMMA)


[uncategorized] ~94-~94: Possible missing comma found.
Context: ...be edited in the current version of the metric while others will require a new version...

(AI_HYDRA_LEO_MISSING_COMMA)


[uncategorized] ~99-~99: Possible missing article found.
Context: ...u will be able to edit and create new version from the new Metric view page...

(AI_HYDRA_LEO_MISSING_A)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Yarn Build
🔇 Additional comments (2)
docs/platform-release-notes/2025/12.mdx (2)

1-1: Image import is correctly used.

The Image component is properly utilised on lines 47–52 and 78–83, so this import is necessary.


109-124: Final sections are well-written.

The "What's Next" and closing sections are clear, professional, and free of issues.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (6)
docs/platform-release-notes/2025/12.mdx (6)

87-87: Fix subject-verb agreement.

"Definition" is singular, so the verb should be "changes" not "change".

Apply this diff:

-This can be done, for example, when the definition of a metric change.
+This can be done, for example, when the definition of a metric changes.

28-28: Remove unnecessary possessive.

The heading should be "New metric metadata fields" rather than "New metric's metadata fields".

Apply this diff:

-### New metric's metadata fields  
+### New metric metadata fields

32-32: Fix subject-verb agreement and typo.

Two errors on this line:

  • "metric make sense" should be "metric makes sense"
  • "experimemts" should be "experiments"

Apply this diff:

-- **Application**: This is the list of Application(s) where this metric make sense. For example, an `app_crashes` metrics only makes sense for experimemts running on app platforms. 
+- **Application**: This is the list of Application(s) where this metric makes sense. For example, an `app_crashes` metric only makes sense for experiments running on app platforms.

99-99: Add missing article.

"Create new version" is missing the article "a".

Apply this diff:

-As a metric owner, you will be able to **edit** and **create new version** from the new Metric view page.
+As a metric owner, you will be able to **edit** and **create a new version** from the new Metric view page.

102-102: Fix duplicate word error.

"You will need you update" should be "you will need to update".

Apply this diff:

-If you are using our API to edit your metrics, you will need you update your script as you will no longer be able to edit all metric fields using the edit end-point.
+If you are using our API to edit your metrics, you will need to update your script as you will no longer be able to edit all metric fields using the edit end-point.

76-76: Use two words: "over time".

"Overtime" (one word) refers to extra working hours, whilst "over time" (two words) means gradually or as time passes.

Apply this diff:

-Metric versioning is a critical part of metric governance as it allows for a metric to evolve overtime without risking impacting previous experiments and decisions made using an older version of that metric.
+Metric versioning is a critical part of metric governance as it allows for a metric to evolve over time without risking impacting previous experiments and decisions made using an older version of that metric.
🧹 Nitpick comments (2)
docs/platform-release-notes/2025/12.mdx (2)

35-35: Add comma for clarity.

Insert a comma before "as" to improve readability when connecting two independent clauses.

Apply this diff:

-All those fields are optional, but we recommend you update your existing metrics as this will improve general discoverability of your metrics.
+All those fields are optional, but we recommend you update your existing metrics, as this will improve general discoverability of your metrics.

94-94: Add comma before "while" for clarity.

Adding a comma before "while" improves readability when contrasting two independent clauses.

Apply this diff:

-With the launch of metric versioning, some fields can be edited in the current version of the metric while others will require a new version to be created.
+With the launch of metric versioning, some fields can be edited in the current version of the metric, whilst others will require a new version to be created.
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bfc51f6 and 3b927aa.

📒 Files selected for processing (1)
  • docs/platform-release-notes/2025/12.mdx (1 hunks)
🧰 Additional context used
🪛 LanguageTool
docs/platform-release-notes/2025/12.mdx

[style] ~16-~16: Would you like to use the Oxford spelling “categorize”? The spelling ‘categorise’ is also correct.
Context: ...ded a new configuration type that helps categorise and group metrics. Those new metric cat...

(OXFORD_SPELLING_Z_NOT_S)


[uncategorized] ~20-~20: Loose punctuation mark.
Context: ...n add to your ABsmartly: - Conversion: Measures whether users complete a desir...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~21-~21: Loose punctuation mark.
Context: ...s complete a desired action. - Revenue: Captures direct monetary impact. - `Eng...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~22-~22: Loose punctuation mark.
Context: ...s direct monetary impact. - Engagement: Reflects how actively users interact wi...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~23-~23: Loose punctuation mark.
Context: ...interact with the product. - Retention: Shows whether users come back or contin...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~24-~24: Loose punctuation mark.
Context: ...g the product over time. - Performance: Measures speed and responsiveness, such...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~25-~25: Loose punctuation mark.
Context: ...as load time or latency. - Reliability: Tracks stability and correctness, inclu...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~26-~26: Loose punctuation mark.
Context: ..., failures, or availability. - Quality: Represents outcome quality or user expe...

(UNLIKELY_OPENING_PUNCTUATION)


[uncategorized] ~35-~35: Possible missing comma found.
Context: ...t we recommend you update your existing metrics as this will improve general discoverab...

(AI_HYDRA_LEO_MISSING_COMMA)


[uncategorized] ~76-~76: Possible missing comma found.
Context: ...versioning is a critical part of metric governance as it allows for a metric to evolve ove...

(AI_HYDRA_LEO_MISSING_COMMA)


[uncategorized] ~94-~94: Possible missing comma found.
Context: ...be edited in the current version of the metric while others will require a new version...

(AI_HYDRA_LEO_MISSING_COMMA)


[uncategorized] ~99-~99: Possible missing article found.
Context: ...u will be able to edit and create new version from the new Metric view page...

(AI_HYDRA_LEO_MISSING_A)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Yarn Build

@calthejuggler calthejuggler merged commit d0a95c0 into development Dec 16, 2025
6 checks passed