diff --git a/docs/concepts/benchmarking.md b/docs/concepts/benchmarking.md
index 91c694a3..2f7fb80c 100644
--- a/docs/concepts/benchmarking.md
+++ b/docs/concepts/benchmarking.md
@@ -9,11 +9,11 @@ Collections of tasks can be published as _benchmarking suites_. Seamlessly integ
- standardized train-test splits are provided to ensure that results can be objectively compared
- results can be shared in a reproducible way through the APIs
- results from other users can be easily downloaded and reused
-You can search for all existing benchmarking suites or create your own. For all further details, see the [benchmarking guide](../benchmark/benchmark.md).
+You can search for all existing benchmarking suites or create your own. For all further details, see the [benchmarking guide](../benchmark/index.md).
-
+
## Benchmark studies
Collections of runs can be published as _benchmarking studies_. They contain the results of all runs (possibly millions) executed on a specific benchmarking suite. OpenML allows you to easily download all such results at once via the APIs, but also to visualize them online in the Analysis tab (next to the complete list of included tasks and runs). Below is an example of a benchmark study for AutoML algorithms.
-
\ No newline at end of file
+