From 37af24b4ff8c878db679f4a63bce28873e5523e0 Mon Sep 17 00:00:00 2001
From: atovpeko
Date: Fri, 19 Dec 2025 17:20:03 +0200
Subject: [PATCH 1/2] minor fixes

---
 _partials/_devops-mcp-commands.md                |  2 +-
 about/changelog.md                               |  2 +-
 .../OLD_analyze-nft-data/nft-schema-ingestion.md |  2 +-
 use-timescale/metrics-logging/monitoring.md      | 12 ++++++++----
 4 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/_partials/_devops-mcp-commands.md b/_partials/_devops-mcp-commands.md
index 127987bd5f..cf40ee1595 100644
--- a/_partials/_devops-mcp-commands.md
+++ b/_partials/_devops-mcp-commands.md
@@ -39,6 +39,6 @@ $MCP_LONG exposes the following MCP tools to your AI Assistant:
 | | `pooled` | - | Use [connection pooling][Connection pooling]. This is only available if you have already enabled it for the $SERVICE_SHORT. Default: `false`. |
 
 [Connection pooling]: /use-timescale/:currentVersion:/services/connection-pooling/
-[cloud-regions]: about/:currentVersion:/supported-platforms#available-regions
+[cloud-regions]: /about/:currentVersion:/supported-platforms#available-regions
 [create-service]: /getting-started/:currentVersion:/services/
 [readreplica]: /use-timescale/:currentVersion:/ha-replicas/read-scaling/

diff --git a/about/changelog.md b/about/changelog.md
index bf3f92a0c1..ec9c49be48 100644
--- a/about/changelog.md
+++ b/about/changelog.md
@@ -626,7 +626,7 @@ pgai vectorizer now supports automatic document vectorization. This makes it dra
 
 Instead of juggling multiple systems and syncing metadata, vectorizer handles the entire process: downloading documents from S3, parsing them, chunking text, and generating vector embeddings stored right in $PG using pgvector. As documents change, embeddings stay up-to-date automatically—keeping your $PG database the single source of truth for both structured and semantic data.
 
-![create a vectorizer](https://assets.timescale.com/docs/images/console-create-a-vectorizer.png )
+![create a vectorizer](https://assets.timescale.com/docs/images/console-create-a-vectorizer.png)
 
 ### Sample dataset for AI testing

diff --git a/tutorials/OLD_analyze-nft-data/nft-schema-ingestion.md b/tutorials/OLD_analyze-nft-data/nft-schema-ingestion.md
index 414019dabd..b8ec01f2aa 100644
--- a/tutorials/OLD_analyze-nft-data/nft-schema-ingestion.md
+++ b/tutorials/OLD_analyze-nft-data/nft-schema-ingestion.md
@@ -347,5 +347,5 @@ SELECT count(*), MIN(time) AS min_date, MAX(time) AS max_date FROM nft_sales
 ```
 
 [nft-schema]: https://github.com/timescale/nft-starter-kit/blob/master/schema.sql
-[opensea-api-documentation]: https://docs.opensea.io/reference/request-an-api-key
+[opensea-api-documentation]: https://docs.opensea.io/reference/api-keys
 [sample-data]: https://assets.timescale.com/docs/downloads/nft_sample.zip

diff --git a/use-timescale/metrics-logging/monitoring.md b/use-timescale/metrics-logging/monitoring.md
index 48462980ba..b357957c78 100644
--- a/use-timescale/metrics-logging/monitoring.md
+++ b/use-timescale/metrics-logging/monitoring.md
@@ -103,7 +103,7 @@ Insights help you get a comprehensive understanding of how your queries perform
 
 To view insights, select your $SERVICE_SHORT, then click `Monitoring` > `Insights`. Search or filter queries by type, maximum execution time, and time frame.
 
-![Insights][insights]
+![Insights][insights-image]
 
 Insights include `Metrics`, `Current lock contention`, and `Queries`.
@@ -157,7 +157,7 @@ $CLOUD_LONG summarizes all [$JOBs][jobs] set up for your $SERVICE_SHORT along wi
 
 1. To view $JOBs, select your $SERVICE_SHORT in $CONSOLE, then click `Monitoring` > `Jobs`:
 
-   ![Jobs][jobs]
+   ![Jobs][jobs-image]
 
 1. Click a $JOB ID in the list to view its config and run history:
@@ -175,7 +175,7 @@ $CLOUD_LONG lists current and past connections to your $SERVICE_SHORT. This incl
 
 To view connections, select your $SERVICE_SHORT in $CONSOLE, then click `Monitoring` > `Connections`. Expand the query underneath each connection to see the full SQL.
 
-![Connections][connections]
+![Connections][connections-image]
 
 Click the trash icon next to a connection in the list to terminate it. A lock icon means that a connection cannot be terminated; hover over the icon to see the reason.
@@ -185,7 +185,7 @@ $CLOUD_LONG offers specific tips on configuring your $SERVICE_SHORT. This includ
 
 To view recommendations, select your $SERVICE_SHORT in $CONSOLE, then click `Monitoring` > `Recommendations`:
 
-![Recommendations][recommendations]
+![Recommendations][recommendations-image]
 
 ## Query-level statistics with `pg_stat_statements`
@@ -255,5 +255,9 @@ For more examples and detailed explanations, see the [blog post on identifying p
 [queries-drill-down-view]: https://assets.timescale.com/docs/images/tiger-on-azure/query-drill-down-view-tiger-console.png
 [queries]: https://assets.timescale.com/docs/images/tiger-cloud-console/tiger-console-query-insights.png
 [recommendations]: /use-timescale/:currentVersion:/metrics-logging/monitoring/#recommendations
+[recommendations-image]: https://assets.timescale.com/docs/images/tiger-cloud-console/recommendations-tiger-cloud.png
 [service-metrics]: https://assets.timescale.com/docs/images/tiger-on-azure/service-metrics-tiger-console.png
 [update-job-config]: https://assets.timescale.com/docs/images/tiger-cloud-console/tiger-console-edit-job.png
+[insights-image]: https://assets.timescale.com/docs/images/tiger-on-azure/insights-overview-tiger-console.png
+[jobs-image]: https://assets.timescale.com/docs/images/tiger-on-azure/tiger-console-jobs.png
+[connections-image]: https://assets.timescale.com/docs/images/tiger-on-azure/tiger-console-service-connections.png

From 84a40212a1b0ea6d758a1b4bc302034b0c0bf7f6 Mon Sep 17 00:00:00 2001
From: atovpeko
Date: Fri, 19 Dec 2025 17:43:05 +0200
Subject: [PATCH 2/2] minor fixes

---
 mst/migrate-to-mst.md                                  | 2 +-
 use-timescale/extensions/postgis.md                    | 2 +-
 use-timescale/hypertables/improve-query-performance.md | 3 ++-
 3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/mst/migrate-to-mst.md b/mst/migrate-to-mst.md
index 9b8269d4ce..80058036dd 100644
--- a/mst/migrate-to-mst.md
+++ b/mst/migrate-to-mst.md
@@ -108,7 +108,7 @@ machine:
    ```
 
 1. Connect to your new database and update your table statistics by running
-   [`ANALYZE`] [analyze] on your entire dataset:
+   [`ANALYZE`][analyze] on your entire dataset:
    ```sql
    psql -d "$TARGET"
    defaultdb=> ANALYZE;

diff --git a/use-timescale/extensions/postgis.md b/use-timescale/extensions/postgis.md
index ad668c51e2..ebbb1b8f30 100644
--- a/use-timescale/extensions/postgis.md
+++ b/use-timescale/extensions/postgis.md
@@ -15,7 +15,7 @@ geographic data. It helps in spatial data analysis, the study of patterns,
 anomalies, and theories within spatial or geographical data.
 
 For more information about these functions and the options available, see the
-[PostGIS documentation] [postgis-docs].
+[PostGIS documentation][postgis-docs].
 
 ## Use the `postgis` extension to analyze geospatial data

diff --git a/use-timescale/hypertables/improve-query-performance.md b/use-timescale/hypertables/improve-query-performance.md
index 41affbe304..4e6555e905 100644
--- a/use-timescale/hypertables/improve-query-performance.md
+++ b/use-timescale/hypertables/improve-query-performance.md
@@ -123,7 +123,7 @@ column in each chunk. These ranges are stored in the start (inclusive) and end (
 catalog table. TimescaleDB uses these ranges for dynamic chunk exclusion when the `WHERE` clause
 of an SQL query specifies ranges on the column.
 
-![Chunk skipping][chunk-skipping]
+![Chunk skipping][chunk-skipping-image]
 
 You can enable chunk skipping on hypertables compressed into the columnstore for `smallint`, `int`, `bigint`, `serial`, `bigserial`, `date`, `timestamp`, or `timestamptz` type columns.
@@ -167,3 +167,4 @@ $PG planner to create the best query plan. For more information about the
 [chunks_detailed_size]: /api/:currentVersion:/hypertable/chunks_detailed_size
 [our-blog-post]: https://www.tigerdata.com/blog/boost-postgres-performance-by-7x-with-chunk-skipping-indexes
 [pg-analyze]: https://www.postgresql.org/docs/current/sql-analyze.html
+[chunk-skipping-image]: https://assets.timescale.com/docs/images/hypertable-with-chunk-skipping.png
\ No newline at end of file
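Reviewer note on the chunk-skipping hunk above: the page this patch touches describes TimescaleDB recording per-chunk min/max ranges for a column and using them for dynamic chunk exclusion against `WHERE` range predicates. As a minimal sketch of the workflow that text describes, assuming a hypothetical `conditions` hypertable with a `device_id` column that has already been compressed into the columnstore:

```sql
-- "conditions" and "device_id" are hypothetical; substitute your own
-- hypertable and tracked column. Chunk skipping is enabled per column.
SELECT enable_chunk_skipping('conditions', 'device_id');

-- ANALYZE refreshes the per-chunk min/max range statistics the planner uses.
ANALYZE conditions;

-- A range predicate on the tracked column lets the planner exclude chunks
-- whose stored ranges cannot match; EXPLAIN shows the pruned plan.
EXPLAIN SELECT * FROM conditions WHERE device_id BETWEEN 100 AND 200;
```

This is illustrative only and is not part of the patch; it mirrors the `pg-analyze` and `our-blog-post` links the hunk updates.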