From 7456a5989bc35aed905b371a8b1f5c5314e41976 Mon Sep 17 00:00:00 2001
From: Abdelrhman Eldesoky <105232562+Desoky231@users.noreply.github.com>
Date: Tue, 18 Nov 2025 07:43:30 +0200
Subject: [PATCH] Update benchmarking.md

- Update two image links
- Update the benchmarking guide link to point at ../benchmark/index.md
---
docs/concepts/benchmarking.md | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/docs/concepts/benchmarking.md b/docs/concepts/benchmarking.md
index 91c694a3..2f7fb80c 100644
--- a/docs/concepts/benchmarking.md
+++ b/docs/concepts/benchmarking.md
@@ -9,11 +9,11 @@ Collections of tasks can be published as _benchmarking suites_. Seamlessly integ
- standardized train-test splits are provided to ensure that results can be objectively compared
- results can be shared in a reproducible way through the APIs
- results from other users can be easily downloaded and reused
-You can search for all existing benchmarking suites or create your own. For all further details, see the [benchmarking guide](../benchmark/benchmark.md).
+You can search for all existing benchmarking suites or create your own. For all further details, see the [benchmarking guide](../benchmark/index.md).
-
+
## Benchmark studies
Collections of runs can be published as _benchmarking studies_. They contain the results of all runs (possibly millions) executed on a specific benchmarking suite. OpenML allows you to easily download all such results at once via the APIs, but also to visualize them online in the Analysis tab (next to the complete list of included tasks and runs). Below is an example of a benchmark study for AutoML algorithms.
-
\ No newline at end of file
+
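
The doc text in this patch refers to fetching suites, standardized train-test splits, and shared run results via the OpenML APIs. A minimal sketch of that workflow, assuming the `openml` Python package: suite id 99 (OpenML-CC18) and the `predictive_accuracy` metric are illustrative choices only, and argument names such as `output_format` may differ across package versions.

```python
import openml

# Fetch a benchmarking suite by id (99 is the OpenML-CC18 suite, used here purely as an example).
suite = openml.study.get_suite(99)
print(suite.name, "contains", len(suite.tasks), "tasks")

# Each suite entry is a task id; tasks carry the standardized train-test splits.
task = openml.tasks.get_task(suite.tasks[0])
train_idx, test_idx = task.get_train_test_split_indices(repeat=0, fold=0)
print("fold 0:", len(train_idx), "train /", len(test_idx), "test instances")

# Download shared evaluation results for (a subset of) the suite's tasks as a dataframe.
evals = openml.evaluations.list_evaluations(
    function="predictive_accuracy",
    tasks=suite.tasks[:5],
    output_format="dataframe",
)
print(evals.head())
```

`list_evaluations` returns per-run metric values in bulk, which corresponds to the "download all such results at once via the APIs" capability the benchmark studies paragraph describes.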