diff --git a/python/notebooks/grounding/vectorsearch2_travel_agent.ipynb b/python/notebooks/grounding/vectorsearch2_travel_agent.ipynb
new file mode 100644
index 000000000..22da89952
--- /dev/null
+++ b/python/notebooks/grounding/vectorsearch2_travel_agent.ipynb
@@ -0,0 +1,1389 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "vyLtx08mbcAF",
+ "metadata": {
+ "id": "vyLtx08mbcAF"
+ },
+ "source": [
+ "# End-to-End Travel Agent: Vector Search 2.0 + ADK\n",
+ "\n",
+ "This notebook demonstrates the complete workflow for building a **Gen AI Travel Agent** using **[Vertex AI Vector Search 2.0](https://cloud.google.com/vertex-ai/docs/vector-search-2/overview)** and the **[Agent Development Kit (ADK)](https://google.github.io/adk-docs/)**.\n",
+ "\n",
+ "## What is Vector Search 2.0?\n",
+ "\n",
+ "Vector Search 2.0 is Google Cloud's fully managed, self-tuning vector database built on Google's [ScaNN (Scalable Nearest Neighbors)](https://github.com/google-research/google-research/tree/master/scann) algorithm - the same technology powering Google Search, YouTube, and Google Play.\n",
+ "\n",
+ "### Key Features\n",
+ "\n",
+ "| Feature | Description |\n",
+ "|---------|-------------|\n",
+ "| **Zero Indexing to Billion-Scale** | Start immediately with kNN (no indexing), scale to billions with ANN indexes |\n",
+ "| **Unified Data Storage** | Store vectors and metadata together (no separate database needed) |\n",
+ "| **Auto-Embeddings** | Automatic embedding generation using Vertex AI models like `gemini-embedding-001` |\n",
+ "| **Built-in Full Text Search** | No need to generate sparse embeddings yourself |\n",
+ "| **Hybrid Search** | Combine semantic + keyword search with [RRF ranking](https://plg.uwaterloo.ca/~gvcormac/cormacksigir09-rrf.pdf) |\n",
+ "| **Self-Tuning** | Auto-optimized performance without manual configuration |\n",
+ "\n",
+ "### Core Architecture\n",
+ "\n",
+ "Vector Search 2.0 has three main components:\n",
+ "\n",
+ "1. **[Collections](https://cloud.google.com/vertex-ai/docs/vector-search-2/collections/collections)**: Schema-enforced containers for your data\n",
+ "2. **[Data Objects](https://cloud.google.com/vertex-ai/docs/vector-search-2/data-objects/data-objects)**: Individual items with data fields and vector embeddings\n",
+ "3. **[Indexes](https://cloud.google.com/vertex-ai/docs/vector-search-2/indexes/indexes)**: kNN (instant, dev) or ANN (fast, production-scale)\n",
+ "\n",
+ "## What We'll Build\n",
+ "\n",
+ "In this notebook, we'll:\n",
+ "\n",
+ "1. **Build the Knowledge Core**: Ingest real Airbnb data into Vector Search 2.0, using auto-embeddings and metadata storage\n",
+ "2. **Build the Agent**: Use ADK to create an agent that reasons about user intent\n",
+ "3. **Connect Them**: Wrap Vector Search 2.0 as a **Tool** that the Agent calls autonomously to find rentals\n",
+ "\n",
+ "**New to Vector Search 2.0?** For a comprehensive introduction, see: [Introduction to Vertex AI Vector Search 2.0](https://github.com/GoogleCloudPlatform/generative-ai/blob/main/embeddings/vector-search-2-intro.ipynb)\n",
+ "\n",
+ "
\n",
+ " \n",
+ " \n",
+ "  Run in Colab\n",
+ " \n",
+ " | \n",
+ " \n",
+ " \n",
+ "  View on GitHub\n",
+ " \n",
+ " | \n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "ijwb597iqns",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "\n",
+ "## Prerequisites\n",
+ "\n",
+ "Before running this notebook, ensure you have:\n",
+ "\n",
+ "1. A Google Cloud project with billing enabled ([setup guide](https://cloud.google.com/vertex-ai/docs/start/cloud-environment))\n",
+ "2. The [Security Admin](https://cloud.google.com/iam/docs/roles-permissions/iam#iam.securityAdmin) (`roles/iam.securityAdmin`) IAM role on your project\n",
+ "\n",
+ "### Important: Resource Cleanup\n",
+ "\n",
+ "Vector Search 2.0 resources incur costs when active. **Make sure to run the cleanup section at the end** of this tutorial to delete all Collections and avoid unexpected charges.\n",
+ "\n",
+ "---\n",
+ "\n",
+ "# Part 1: Setup and Installation"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "uixecbc3xk9",
+ "metadata": {},
+ "source": [
+ "## Install Required Packages\n",
+ "\n",
+ "First, we'll install the necessary Python libraries:\n",
+ "\n",
+ "- **google-cloud-vectorsearch**: The Vector Search 2.0 SDK for creating collections and searching\n",
+ "- **google-adk**: Agent Development Kit for building AI agents\n",
+ "- **pandas, requests**: Utilities for downloading and processing our Airbnb dataset\n",
+ "\n",
+ "If running in Colab, this will also authenticate you and restart the runtime."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "nJ3vNMOYbcAG",
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 1000
+ },
+ "id": "nJ3vNMOYbcAG",
+ "outputId": "8e467561-2300-477e-b572-c751e7d4df53"
+ },
+ "outputs": [],
+ "source": [
+ "!pip install --upgrade --quiet google-cloud-vectorsearch pandas requests google-adk\n",
+ "\n",
+ "import sys\n",
+ "if \"google.colab\" in sys.modules:\n",
+ " # Authenticate to Google Cloud (required for Colab)\n",
+ " from google.colab import auth\n",
+ " auth.authenticate_user()\n",
+ " \n",
+ " # Restart runtime to pick up new packages\n",
+ " import IPython\n",
+ " app = IPython.Application.instance()\n",
+ " app.kernel.do_shutdown(True)\n",
+ "\n",
+ "print(\"Libraries installed. Runtime restarted if on Colab.\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "na6fmribvh",
+ "metadata": {},
+ "source": [
+ "## Configure Project and Initialize SDK Clients\n",
+ "\n",
+ "Here we set up the core configuration for our project:\n",
+ "\n",
+ "1. Set your Google Cloud project ID and location\n",
+ "2. Configure environment variables for ADK/Gemini\n",
+ "3. Initialize the three Vector Search 2.0 SDK clients:\n",
+ " - **admin_client**: For managing Collections and Indexes\n",
+ " - **data_client**: For creating/updating/deleting Data Objects\n",
+ " - **search_client**: For performing search queries\n",
+ "\n",
+ "> **Important**: Replace `\"your-project-id\"` with your actual Google Cloud project ID."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "77PF7QrObcAH",
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "77PF7QrObcAH",
+ "outputId": "4708a9c6-639f-411e-cf94-91bc6df685d0"
+ },
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "from google.cloud import vectorsearch_v1beta\n",
+ "\n",
+ "# --- PROJECT SETTINGS ---\n",
+ "# Replace with your Google Cloud project ID\n",
+ "PROJECT_ID = \"your-project-id\" # @param {type:\"string\"}\n",
+ "LOCATION = \"us-central1\" # @param {type:\"string\"}\n",
+ "COLLECTION_ID = \"london-travel-agent-demo\" # Unique name for our collection\n",
+ "\n",
+ "# Validate PROJECT_ID\n",
+ "if PROJECT_ID == \"your-project-id\" or not PROJECT_ID:\n",
+ " raise ValueError(\"Please set PROJECT_ID to your actual Google Cloud project ID\")\n",
+ "\n",
+ "# ADK / Gemini Configuration\n",
+ "os.environ[\"GOOGLE_CLOUD_PROJECT\"] = PROJECT_ID\n",
+ "os.environ[\"GOOGLE_CLOUD_LOCATION\"] = LOCATION\n",
+ "os.environ[\"GOOGLE_GENAI_USE_VERTEXAI\"] = \"True\"\n",
+ "\n",
+ "# --- SDK CLIENTS ---\n",
+ "# Vector Search 2.0 uses a modular client architecture with three specialized clients:\n",
+ "\n",
+ "# 1. VectorSearchServiceClient: Manages Collections and Indexes (CRUD operations)\n",
+ "admin_client = vectorsearch_v1beta.VectorSearchServiceClient()\n",
+ "\n",
+ "# 2. DataObjectServiceClient: Manages Data Objects (create, update, delete)\n",
+ "data_client = vectorsearch_v1beta.DataObjectServiceClient()\n",
+ "\n",
+ "# 3. DataObjectSearchServiceClient: Performs search and query operations\n",
+ "search_client = vectorsearch_v1beta.DataObjectSearchServiceClient()\n",
+ "\n",
+ "# Resource paths\n",
+ "parent = f\"projects/{PROJECT_ID}/locations/{LOCATION}\"\n",
+ "collection_path = f\"{parent}/collections/{COLLECTION_ID}\"\n",
+ "\n",
+ "print(f\"Project: {PROJECT_ID}\")\n",
+ "print(f\"Location: {LOCATION}\")\n",
+ "print(f\"Collection: {COLLECTION_ID}\")\n",
+ "print(f\"\\nSDK clients initialized successfully.\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "4m68kk6apw6",
+ "metadata": {},
+ "source": [
+ "## Enable Required Google Cloud APIs\n",
+ "\n",
+ "Before using Vector Search 2.0, we need to enable two APIs in your project:\n",
+ "\n",
+ "- **vectorsearch.googleapis.com**: The Vector Search 2.0 API itself\n",
+ "- **aiplatform.googleapis.com**: Required for auto-embeddings with Vertex AI models like `gemini-embedding-001`\n",
+ "\n",
+ "This command is idempotent - it's safe to run even if the APIs are already enabled."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "z7d5beczxh",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "!gcloud services enable vectorsearch.googleapis.com aiplatform.googleapis.com --project \"{PROJECT_ID}\""
+ ]
+ },
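+ {
+ "cell_type": "markdown",
+ "id": "verify-apis-md",
+ "metadata": {},
+ "source": [
+ "Optionally, confirm that both APIs are now enabled. This is a minimal check (assuming a shell with `grep`, as in Colab): it lists the enabled services and filters for the two we need."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "verify-apis-code",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Optional check: both APIs should appear in the output below\n",
+ "!gcloud services list --enabled --project \"{PROJECT_ID}\" | grep -E \"vectorsearch|aiplatform\""
+ ]
+ },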
+ {
+ "cell_type": "markdown",
+ "id": "gFp24P9YbcAH",
+ "metadata": {
+ "id": "gFp24P9YbcAH"
+ },
+ "source": [
+ "---\n",
+ "\n",
+ "# Part 2: Data Pipeline\n",
+ "\n",
+ "## Download and Clean the Airbnb Dataset\n",
+ "\n",
+ "We'll use real data from [Inside Airbnb](http://insideairbnb.com/) - a project that provides data about Airbnb listings in cities worldwide. Our dataset contains London vacation rentals with:\n",
+ "\n",
+ "| Field | Type | Description |\n",
+ "|-------|------|-------------|\n",
+ "| `id` | string | Unique listing ID |\n",
+ "| `name` | string | Listing title |\n",
+ "| `description` | string | Full listing description |\n",
+ "| `price` | number | Price per night in GBP |\n",
+ "| `neighborhood` | string | London neighborhood (e.g., \"Hackney\", \"Islington\") |\n",
+ "| `listing_url` | string | Airbnb URL |\n",
+ "| `instant_bookable` | string | \"t\" if instantly bookable, \"f\" otherwise |\n",
+ "| `neighborhood_overview` | string | Description of the area |\n",
+ "\n",
+ "The cleaning process includes:\n",
+ "\n",
+ "- Selecting relevant columns (name, description, price, neighborhood, etc.)\n",
+ "- Converting price from `\"$1,234.00\"` format to numeric\n",
+ "- Filling missing values to avoid API errors\n",
+ "- Limiting to 2,000 listings for demo speed (scales to 90K+ in production)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "w3ceIO2xbcAH",
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "w3ceIO2xbcAH",
+ "outputId": "acb4fe79-05f3-4f13-b077-449c9389c8f4"
+ },
+ "outputs": [],
+ "source": [
+ "import pandas as pd\n",
+ "import requests\n",
+ "import io\n",
+ "\n",
+ "# Source: Inside Airbnb (London, Sept 2025)\n",
+ "DATA_URL = \"https://data.insideairbnb.com/united-kingdom/england/london/2025-09-14/data/listings.csv.gz\"\n",
+ "\n",
+ "print(\"Downloading dataset...\")\n",
+ "headers = {\"User-Agent\": \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)\"}\n",
+ "response = requests.get(DATA_URL, headers=headers)\n",
+ "response.raise_for_status()\n",
+ "\n",
+ "# Load GZIP directly\n",
+ "df = pd.read_csv(io.BytesIO(response.content), compression='gzip')\n",
+ "\n",
+ "# --- CLEANING ---\n",
+ "cols = ['id', 'name', 'description', 'price', 'neighborhood_overview', 'listing_url', 'instant_bookable', 'neighbourhood_cleansed']\n",
+ "df = df[cols].copy()\n",
+ "\n",
+ "# Clean Price (remove $ and ,) and convert to float\n",
+ "df['price'] = df['price'].astype(str).str.replace(r'[$,]', '', regex=True)\n",
+ "df['price'] = pd.to_numeric(df['price'], errors='coerce').fillna(0.0)\n",
+ "\n",
+ "# Fill NaNs in text fields (Critical to avoid API errors)\n",
+ "str_cols = ['name', 'neighbourhood_cleansed', 'instant_bookable', 'listing_url', 'description', 'neighborhood_overview']\n",
+ "for col in str_cols:\n",
+ " df[col] = df[col].fillna(\"\").astype(str)\n",
+ "\n",
+ "# Normalize boolean string\n",
+ "df['instant_bookable'] = df['instant_bookable'].str.lower()\n",
+ "\n",
+ "# Subset for demo speed\n",
+ "df_demo = df.head(2000).reset_index(drop=True)\n",
+ "print(f\"Loaded & Cleaned {len(df_demo)} listings.\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "Y7-CEEN_bcAI",
+ "metadata": {
+ "id": "Y7-CEEN_bcAI"
+ },
+ "source": [
+ "---\n",
+ "\n",
+ "# Part 3: Create Collection\n",
+ "\n",
+ "## Create a Vector Search 2.0 Collection\n",
+ "\n",
+ "A **Collection** is a schema-enforced container for your data in Vector Search 2.0. Think of it as a table in a traditional database, but optimized for vector operations.\n",
+ "\n",
+ "### Collection Schemas\n",
+ "\n",
+ "Each Collection has two schemas:\n",
+ "\n",
+ "1. **[Data Schema](https://cloud.google.com/vertex-ai/docs/vector-search-2/collections/collections#data-schema)**: Defines the structure of your data fields using [JSON Schema](https://json-schema.org/) format. All Data Objects must conform to this schema.\n",
+ "\n",
+ "2. **[Vector Schema](https://cloud.google.com/vertex-ai/docs/vector-search-2/collections/collections#vector-schema)**: Defines your embedding fields with their dimensions and configurations. You can have multiple vector fields per object (e.g., text_embedding, image_embedding).\n",
+ "\n",
+ "### Auto-Embeddings Feature\n",
+ "\n",
+ "One of Vector Search 2.0's most powerful features is **automatic embedding generation**. When you configure `vertex_embedding_config` in your vector schema, the service automatically generates embeddings using Vertex AI models. This means you don't need to:\n",
+ "\n",
+ "- Manage embedding model infrastructure\n",
+ "- Pre-compute embeddings before ingestion \n",
+ "- Handle embedding API calls yourself\n",
+ "\n",
+ "We use a `text_template` to combine `description` + `neighborhood_overview` for richer semantic embeddings."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "dLIGZDvwbcAI",
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "dLIGZDvwbcAI",
+ "outputId": "f5cc9412-ad6a-4013-c52e-65ae45a58855"
+ },
+ "outputs": [],
+ "source": [
+ "# Define the Collection schema\n",
+ "collection_config = {\n",
+ " # DATA SCHEMA: Defines the structure of your data fields\n",
+ " # All fields in Data Objects must match these types\n",
+ " \"data_schema\": {\n",
+ " \"type\": \"object\",\n",
+ " \"properties\": {\n",
+ " \"name\": {\"type\": \"string\"}, # Listing title\n",
+ " \"price\": {\"type\": \"number\"}, # Price per night (GBP)\n",
+ " \"neighborhood\": {\"type\": \"string\"}, # London neighborhood\n",
+ " \"listing_url\": {\"type\": \"string\"}, # Airbnb URL\n",
+ " \"instant_bookable\": {\"type\": \"string\"}, # \"t\" or \"f\"\n",
+ " \"description\": {\"type\": \"string\"}, # Full listing description\n",
+ " \"neighborhood_overview\": {\"type\": \"string\"} # Area description\n",
+ " }\n",
+ " },\n",
+ " \n",
+ " # VECTOR SCHEMA: Defines embedding fields and their configurations\n",
+ " \"vector_schema\": {\n",
+ " \"description_embedding\": {\n",
+ " \"dense_vector\": {\n",
+ " # Embedding dimensions (768 for gemini-embedding-001)\n",
+ " \"dimensions\": 768,\n",
+ " \n",
+ " # AUTO-EMBEDDING CONFIGURATION\n",
+ " # Vector Search 2.0 will automatically generate embeddings\n",
+ " # using the specified Vertex AI model\n",
+ " \"vertex_embedding_config\": {\n",
+ " \"model_id\": \"gemini-embedding-001\",\n",
+ " \n",
+ " # text_template: Combines multiple fields into embedding input\n",
+ " # This creates richer semantic embeddings by including both\n",
+ " # the description AND neighborhood context\n",
+ " \"text_template\": \"Description: {description}. Neighborhood: {neighborhood_overview}.\",\n",
+ " \n",
+ " # task_type: Optimizes embeddings for retrieval use cases\n",
+ " # Use RETRIEVAL_DOCUMENT for documents being indexed\n",
+ " \"task_type\": \"RETRIEVAL_DOCUMENT\"\n",
+ " }\n",
+ " }\n",
+ " }\n",
+ " }\n",
+ "}\n",
+ "\n",
+ "# Create the Collection (or skip if it already exists)\n",
+ "try:\n",
+ " existing = admin_client.get_collection(name=collection_path)\n",
+ " print(f\"Collection '{COLLECTION_ID}' already exists.\")\n",
+ "except Exception:\n",
+ " print(f\"Creating Collection '{COLLECTION_ID}'...\")\n",
+ " request = vectorsearch_v1beta.CreateCollectionRequest(\n",
+ " parent=parent,\n",
+ " collection_id=COLLECTION_ID,\n",
+ " collection=collection_config\n",
+ " )\n",
+ " operation = admin_client.create_collection(request=request)\n",
+ " operation.result() # Wait for completion\n",
+ " print(f\"Collection '{COLLECTION_ID}' created successfully!\")"
+ ]
+ },
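+ {
+ "cell_type": "markdown",
+ "id": "verify-collection-md",
+ "metadata": {},
+ "source": [
+ "As a quick sanity check, we can fetch the Collection we just created. This sketch reuses the same `get_collection` call from the cell above and prints its resource name (assuming creation completed successfully)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "verify-collection-code",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Fetch the Collection to confirm it exists before ingesting data\n",
+ "collection = admin_client.get_collection(name=collection_path)\n",
+ "print(f\"Collection resource: {collection.name}\")"
+ ]
+ },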
+ {
+ "cell_type": "markdown",
+ "id": "g6dwq1LWbcAI",
+ "metadata": {
+ "id": "g6dwq1LWbcAI"
+ },
+ "source": [
+ "---\n",
+ "\n",
+ "# Part 4: Ingest Data Objects\n",
+ "\n",
+ "## Batch Ingest Data Objects with Auto-Embeddings\n",
+ "\n",
+ "A **Data Object** represents a single item in your Collection. Each Data Object consists of:\n",
+ "\n",
+ "1. **data_object_id**: Unique identifier for the object\n",
+ "2. **data**: Data fields (matching the data_schema)\n",
+ "3. **vectors**: Embedding vectors (matching the vector_schema, or empty for auto-generation)\n",
+ "\n",
+ "### Batch Ingestion\n",
+ "\n",
+ "For efficient data loading, Vector Search 2.0 supports batch operations:\n",
+ "\n",
+ "- **BatchCreateDataObjectsRequest**: Add up to 250 objects per request\n",
+ "- **Auto-embeddings**: Pass `vectors: {}` to trigger automatic embedding generation\n",
+ "\n",
+ "### Batch Size Limits\n",
+ "\n",
+ "When using auto-embeddings, batch size is limited by the embedding model's \"max texts per request\":\n",
+ "- `gemini-embedding-001`: 250 texts per request\n",
+ "- Other models may have different limits\n",
+ "\n",
+ "We add a small delay between batches to respect API quotas. This step may take a few minutes as embeddings are generated for each listing."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "VigRUOUKbcAI",
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 84,
+ "referenced_widgets": [
+ "729867ba038a4f2ab2d7e323623e95ef",
+ "fc828f9e81b14681aa2c384363d35ba0",
+ "2361eea831a74613a6b6f52c6d898a48",
+ "f4b85d92baaa4ef1bb4c230ecdc76f92",
+ "3798006bedeb4618a307c0f109b721ef",
+ "95404dacf88546929a3c7d43445387c2",
+ "7894bc430f4a47a0a80839e7493907fb",
+ "db5c36e6ea6d4449be60ecf4a6639651",
+ "5298691c16c34017b0659b60d741186e",
+ "2241c908443f4890a6037e08844fbc8f",
+ "9a471edce9c2470ba216420675277f51"
+ ]
+ },
+ "id": "VigRUOUKbcAI",
+ "outputId": "0e0d3528-a092-4f93-83ff-b38c5b3cfebf"
+ },
+ "outputs": [],
+ "source": [
+ "from tqdm.auto import tqdm\n",
+ "import time\n",
+ "\n",
+ "print(f\"Ingesting {len(df_demo)} listings into '{COLLECTION_ID}'...\")\n",
+ "\n",
+ "# Batch size: Max 250 for gemini-embedding-001 auto-embeddings\n",
+ "BATCH_SIZE = 100 # Using 100 for safety margin\n",
+ "\n",
+ "# Prepare Data Objects\n",
+ "# Each object needs: data_object_id + data (matching schema) + vectors (empty for auto-embedding)\n",
+ "data_objects = []\n",
+ "for _, row in df_demo.iterrows():\n",
+ " data_objects.append({\n",
+ " \"data_object_id\": str(row['id']), # Unique ID (must be string)\n",
+ " \"data_object\": {\n",
+ " \"data\": {\n",
+ " \"name\": row['name'],\n",
+ " \"price\": float(row['price']), # Ensure numeric type\n",
+ " \"neighborhood\": row['neighbourhood_cleansed'],\n",
+ " \"instant_bookable\": row['instant_bookable'],\n",
+ " \"listing_url\": row['listing_url'],\n",
+ " \"description\": row['description'],\n",
+ " \"neighborhood_overview\": row['neighborhood_overview']\n",
+ " },\n",
+ " # Empty vectors = trigger auto-embedding generation\n",
+ " # Vector Search 2.0 will use the vertex_embedding_config from our schema\n",
+ " \"vectors\": {}\n",
+ " }\n",
+ " })\n",
+ "\n",
+ "# Batch Upload with Progress Bar\n",
+ "for i in tqdm(range(0, len(data_objects), BATCH_SIZE), desc=\"Uploading batches\"):\n",
+ " batch = data_objects[i:i + BATCH_SIZE]\n",
+ " \n",
+ " try:\n",
+ " request = vectorsearch_v1beta.BatchCreateDataObjectsRequest(\n",
+ " parent=collection_path,\n",
+ " requests=batch\n",
+ " )\n",
+ " data_client.batch_create_data_objects(request)\n",
+ " \n",
+ " # Rate limiting: Pause between batches to respect embedding API quotas\n",
+ " time.sleep(2)\n",
+ " \n",
+ " except Exception as e:\n",
+ " # Skip \"already exists\" errors (useful for re-runs)\n",
+ " if \"already exists\" not in str(e).lower():\n",
+ " tqdm.write(f\"Batch error: {str(e)[:80]}\")\n",
+ "\n",
+ "print(f\"\\nIngestion complete! {len(data_objects)} listings loaded.\")"
+ ]
+ },
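+ {
+ "cell_type": "markdown",
+ "id": "verify-ingest-md",
+ "metadata": {},
+ "source": [
+ "Before moving on, it helps to confirm the Collection actually contains data. The sketch below reuses the `QueryDataObjectsRequest` pattern from the cleanup section later in this notebook and simply reports how many objects a query returns (assuming the ingestion above completed without errors)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "verify-ingest-code",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Sanity check: query the collection and report how many objects came back.\n",
+ "# Note: depending on client-side pagination, this may return a single page\n",
+ "# or iterate further; either way, a non-zero count confirms ingestion worked.\n",
+ "check_request = vectorsearch_v1beta.QueryDataObjectsRequest(\n",
+ " parent=collection_path,\n",
+ " page_size=10,\n",
+ " output_fields=vectorsearch_v1beta.OutputFields(data_fields=[\"name\", \"price\"])\n",
+ ")\n",
+ "sample_page = list(search_client.query_data_objects(check_request))\n",
+ "print(f\"Query returned {len(sample_page)} data objects.\")"
+ ]
+ },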
+ {
+ "cell_type": "markdown",
+ "id": "ngHsRPlfbcAJ",
+ "metadata": {
+ "id": "ngHsRPlfbcAJ"
+ },
+ "source": [
+ "---\n",
+ "\n",
+ "# Part 5: Vector Search Tool\n",
+ "\n",
+ "## Define the Vector Search Tool\n",
+ "\n",
+ "This function will be used as a \"tool\" by our ADK agent. It performs **Hybrid Search**, which combines:\n",
+ "\n",
+ "1. **Semantic Search**: Uses auto-generated query embeddings to find semantically similar listings\n",
+ "2. **Text Search**: Matches exact keywords across name, description, and neighborhood fields\n",
+ "3. **RRF Ranking**: Combines results using Reciprocal Rank Fusion (60% semantic, 40% keyword)\n",
+ "\n",
+ "### Search Types in Vector Search 2.0\n",
+ "\n",
+ "| Search Type | Description | Use Case |\n",
+ "|-------------|-------------|----------|\n",
+ "| **[Semantic Search](https://cloud.google.com/vertex-ai/docs/vector-search-2/query-search/search#semantic-search)** | Natural language queries with auto-generated embeddings | \"Find cozy artist lofts\" |\n",
+ "| **[Text Search](https://cloud.google.com/vertex-ai/docs/vector-search-2/query-search/search#text-search)** | Traditional keyword matching | \"garden flat\" (exact match) |\n",
+ "| **[Hybrid Search](https://cloud.google.com/vertex-ai/docs/vector-search-2/query-search/search#hybrid-search)** | Combine semantic + keyword with RRF ranking | Best of both worlds |\n",
+ "| **[Vector Search](https://cloud.google.com/vertex-ai/docs/vector-search-2/query-search/search#vector-search)** | Provide your own query vector | Custom embeddings |\n",
+ "\n",
+ "### Filter Syntax\n",
+ "\n",
+ "Vector Search 2.0 supports [rich query operators](https://cloud.google.com/vertex-ai/docs/vector-search-2/query-search/query#filter-syntax) for filtering:\n",
+ "\n",
+ "**Comparison**: `$eq`, `$ne`, `$gt`, `$gte`, `$lt`, `$lte` \n",
+ "**Logical**: `$and`, `$or` \n",
+ "**Array**: `$in`, `$nin`, `$all`\n",
+ "\n",
+ "**Filter Examples**:\n",
+ "\n",
+ "```json\n",
+ "// Price under 200\n",
+ "{\"price\": {\"$lt\": 200.0}}\n",
+ "\n",
+ "// Specific neighborhood\n",
+ "{\"neighborhood\": {\"$eq\": \"Hackney\"}}\n",
+ "\n",
+ "// Combined: Hackney + under 200 + instant bookable\n",
+ "{\"$and\": [\n",
+ " {\"neighborhood\": {\"$eq\": \"Hackney\"}},\n",
+ " {\"price\": {\"$lt\": 200.0}},\n",
+ " {\"instant_bookable\": {\"$eq\": \"t\"}}\n",
+ "]}\n",
+ "```"
+ ]
+ },
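+ {
+ "cell_type": "markdown",
+ "id": "filter-builder-md",
+ "metadata": {},
+ "source": [
+ "The filter examples above can also be assembled as plain Python dictionaries and serialized with `json.dumps`, which is exactly the JSON-string form the `find_rentals` tool defined below expects. A minimal sketch (the values are illustrative):"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "filter-builder-code",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import json\n",
+ "\n",
+ "# Build the compound filter from the examples above as a Python dict\n",
+ "hackney_filter = {\n",
+ " \"$and\": [\n",
+ " {\"neighborhood\": {\"$eq\": \"Hackney\"}},\n",
+ " {\"price\": {\"$lt\": 200.0}},\n",
+ " {\"instant_bookable\": {\"$eq\": \"t\"}}\n",
+ " ]\n",
+ "}\n",
+ "\n",
+ "# Serialize to the JSON string that the find_rentals tool accepts\n",
+ "filter_str = json.dumps(hackney_filter)\n",
+ "print(filter_str)"
+ ]
+ },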
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "YWGGXnzebcAJ",
+ "metadata": {
+ "id": "YWGGXnzebcAJ"
+ },
+ "outputs": [],
+ "source": [
+ "import json\n",
+ "from typing import Dict, List, Any\n",
+ "from google.cloud import vectorsearch_v1beta\n",
+ "\n",
+ "def find_rentals(query: str, filter: str = \"\") -> List[Dict[str, Any]]:\n",
+ " \"\"\"\n",
+ " Search for vacation rentals using Hybrid Search (Semantic + Keyword) with metadata filtering.\n",
+ " \n",
+ " This function demonstrates Vector Search 2.0's hybrid search capability, combining:\n",
+ " 1. Semantic Search: Understands query intent (e.g., \"cozy\" finds warm, inviting spaces)\n",
+ " 2. Text Search: Matches exact keywords (e.g., \"garden\" finds listings with gardens)\n",
+ " 3. RRF Ranking: Merges results using Reciprocal Rank Fusion for balanced relevance\n",
+ " \n",
+ " Args:\n",
+ " query: Natural language description of desired rental (e.g., \"artist loft with garden\")\n",
+ " filter: JSON string with metadata filters (e.g., '{\"price\": {\"$lt\": 200}}')\n",
+ " \n",
+ " Returns:\n",
+ " List of matching rentals with name, price, neighborhood, and URL\n",
+ " \"\"\"\n",
+ " print(f\"\\n>>> TOOL CALL: find_rentals (Hybrid Search)\")\n",
+ " print(f\" Query: {query}\")\n",
+ " print(f\" Filter: {filter if filter else 'None'}\")\n",
+ "\n",
+ " # Parse Filter JSON (if provided)\n",
+ " filter_dict = None\n",
+ " if filter.strip():\n",
+ " try:\n",
+ " filter_dict = json.loads(filter)\n",
+ " except json.JSONDecodeError:\n",
+ " print(\" Warning: Invalid JSON filter, ignoring.\")\n",
+ "\n",
+ " try:\n",
+ " # Configure Semantic Search\n",
+ " # Uses auto-generated embeddings with QUESTION_ANSWERING task type\n",
+ " # (pairs with RETRIEVAL_DOCUMENT used during indexing)\n",
+ " semantic_search = vectorsearch_v1beta.SemanticSearch(\n",
+ " search_text=query,\n",
+ " search_field=\"description_embedding\", # The vector field to search\n",
+ " filter=filter_dict, # Metadata filtering supported\n",
+ " task_type=\"QUESTION_ANSWERING\", # Optimized for query-document matching\n",
+ " top_k=10,\n",
+ " output_fields=vectorsearch_v1beta.OutputFields(\n",
+ " data_fields=[\"name\", \"price\", \"neighborhood\", \"listing_url\"]\n",
+ " )\n",
+ " )\n",
+ "\n",
+ " # Configure Text Search (Keyword Matching)\n",
+ " # Searches across multiple text fields for exact keyword matches\n",
+ " text_search = vectorsearch_v1beta.TextSearch(\n",
+ " search_text=query,\n",
+ " data_field_names=[\"name\", \"description\", \"neighborhood_overview\"],\n",
+ " top_k=10,\n",
+ " output_fields=vectorsearch_v1beta.OutputFields(\n",
+ " data_fields=[\"name\", \"price\", \"neighborhood\", \"listing_url\"]\n",
+ " )\n",
+ " )\n",
+ "\n",
+ " # Execute Hybrid Search with RRF Ranking\n",
+ " # BatchSearchDataObjectsRequest combines multiple searches\n",
+ " # RRF (Reciprocal Rank Fusion) merges results based on position in each list\n",
+ " # weights=[0.6, 0.4] gives slightly more importance to semantic search\n",
+ " request = vectorsearch_v1beta.BatchSearchDataObjectsRequest(\n",
+ " parent=collection_path,\n",
+ " searches=[\n",
+ " vectorsearch_v1beta.Search(semantic_search=semantic_search),\n",
+ " vectorsearch_v1beta.Search(text_search=text_search)\n",
+ " ],\n",
+ " combine=vectorsearch_v1beta.BatchSearchDataObjectsRequest.CombineResultsOptions(\n",
+ " ranker=vectorsearch_v1beta.Ranker(\n",
+ " rrf=vectorsearch_v1beta.ReciprocalRankFusion(weights=[0.6, 0.4])\n",
+ " )\n",
+ " )\n",
+ " )\n",
+ "\n",
+ " response = search_client.batch_search_data_objects(request=request)\n",
+ "\n",
+ " # Format Results\n",
+ " results = []\n",
+ " if response.results and response.results[0].results:\n",
+ " for res in response.results[0].results:\n",
+ " data = res.data_object.data\n",
+ " results.append({\n",
+ " \"name\": data.get(\"name\"),\n",
+ " \"price\": data.get(\"price\"),\n",
+ " \"neighborhood\": data.get(\"neighborhood\"),\n",
+ " \"url\": data.get(\"listing_url\")\n",
+ " })\n",
+ "\n",
+ " print(f\" Found: {len(results)} listings\")\n",
+ " return results\n",
+ "\n",
+ " except Exception as e:\n",
+ " print(f\" Error: {e}\")\n",
+ " return []"
+ ]
+ },
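+ {
+ "cell_type": "markdown",
+ "id": "tool-smoke-test-md",
+ "metadata": {},
+ "source": [
+ "Before handing the tool to an agent, it is worth calling `find_rentals` directly to confirm the hybrid search works end to end. The query text and filter below are just illustrative values."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "tool-smoke-test-code",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Call the tool directly (no agent involved) with a sample query and filter\n",
+ "sample_results = find_rentals(\n",
+ " query=\"bright flat with a garden near a park\",\n",
+ " filter='{\"price\": {\"$lt\": 150.0}}'\n",
+ ")\n",
+ "\n",
+ "# Show the first few hits\n",
+ "for listing in sample_results[:3]:\n",
+ " print(f\"{listing['name']} - £{listing['price']} - {listing['neighborhood']}\")"
+ ]
+ },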
+ {
+ "cell_type": "markdown",
+ "id": "zQFW9sspbcAJ",
+ "metadata": {
+ "id": "zQFW9sspbcAJ"
+ },
+ "source": [
+ "---\n",
+ "\n",
+ "# Part 6: Build the ADK Agent\n",
+ "\n",
+ "## Create the ADK Travel Agent\n",
+ "\n",
+ "Now we bring everything together! We'll use the **[Agent Development Kit (ADK)](https://google.github.io/adk-docs/)** to create an AI agent that:\n",
+ "\n",
+ "1. **Understands user intent**: Parses natural language requests\n",
+ "2. **Constructs filters**: Generates appropriate metadata filters from the conversation\n",
+ "3. **Calls our tool**: Invokes `find_rentals` with the right parameters\n",
+ "4. **Summarizes results**: Presents findings in a helpful format\n",
+ "\n",
+ "The agent's instructions teach it how to use the filter syntax and what fields are available."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "9BY8iDY8bcAJ",
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "9BY8iDY8bcAJ",
+ "outputId": "1dfbfa07-81ef-4f26-f809-7b59542467dc"
+ },
+ "outputs": [],
+ "source": [
+ "from google.adk.agents import Agent\n",
+ "from google.adk.runners import Runner\n",
+ "from google.adk.sessions import InMemorySessionService\n",
+ "from google.genai import types\n",
+ "\n",
+ "# Session service for managing conversation state\n",
+ "session_service = InMemorySessionService()\n",
+ "\n",
+ "# Helper function for running the agent in the notebook\n",
+ "async def run_agent(query: str, agent: Agent):\n",
+ " \"\"\"Run the agent with a user query and print the response.\"\"\"\n",
+ " print(f\"\\n{'='*60}\")\n",
+ " print(f\"USER: {query}\")\n",
+ " \n",
+ " session = await session_service.create_session(app_name=\"travel_agent\", user_id=\"user_1\")\n",
+ " runner = Runner(app_name=\"travel_agent\", agent=agent, session_service=session_service)\n",
+ " content = types.Content(role='user', parts=[types.Part(text=query)])\n",
+ "\n",
+ " async for event in runner.run_async(user_id=\"user_1\", session_id=session.id, new_message=content):\n",
+ " if event.is_final_response():\n",
+ " if event.content and event.content.parts:\n",
+ " print(f\"\\nAGENT: {event.content.parts[0].text}\")\n",
+ " break\n",
+ " print(f\"{'='*60}\")\n",
+ "\n",
+ "# Agent Instructions\n",
+ "# These teach the agent how to use the find_rentals tool effectively\n",
+ "AGENT_INSTRUCTION = '''\n",
+ "You are an expert London Travel Agent helping users find vacation rentals.\n",
+ "\n",
+ "You have access to a tool called `find_rentals` with two arguments:\n",
+ "1. `query`: A description of the vibe/place (e.g., \"artist loft\", \"garden flat\", \"cozy workspace\")\n",
+ "2. `filter`: A JSON string to filter results by metadata\n",
+ "\n",
+ "### AVAILABLE FILTER FIELDS\n",
+ "- `price` (number): Price per night in GBP\n",
+ "- `neighborhood` (string): London neighborhood (e.g., \"Hackney\", \"Islington\", \"Camden\")\n",
+ "- `instant_bookable` (string): \"t\" for instantly bookable, \"f\" for requires approval\n",
+ "\n",
+ "### FILTER SYNTAX EXAMPLES\n",
+ "```\n",
+ "Price under £200: {\"price\": {\"$lt\": 200.0}}\n",
+ "Specific neighborhood: {\"neighborhood\": {\"$eq\": \"Hackney\"}}\n",
+ "Price range: {\"$and\": [{\"price\": {\"$gte\": 100}}, {\"price\": {\"$lte\": 300}}]}\n",
+ "Hackney + Instant Book: {\"$and\": [{\"neighborhood\": {\"$eq\": \"Hackney\"}}, {\"instant_bookable\": {\"$eq\": \"t\"}}]}\n",
+ "Complex filter: {\"$and\": [{\"neighborhood\": {\"$eq\": \"Hackney\"}}, {\"price\": {\"$lt\": 200}}, {\"instant_bookable\": {\"$eq\": \"t\"}}]}\n",
+ "```\n",
+ "\n",
+ "### GUIDELINES\n",
+ "- Extract the semantic/vibe part of the request for the `query` parameter\n",
+ "- Extract price, location, and booking constraints for the `filter` parameter\n",
+ "- Always summarize the results in a friendly, helpful manner\n",
+ "- Include prices and URLs when available\n",
+ "'''\n",
+ "\n",
+ "# Create the Agent\n",
+ "travel_agent = Agent(\n",
+ " model='gemini-2.5-flash',\n",
+ " name='travel_agent',\n",
+ " instruction=AGENT_INSTRUCTION,\n",
+ " tools=[find_rentals], # Bind our Vector Search tool\n",
+ ")\n",
+ "\n",
+ "print(\"Travel Agent initialized and ready!\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "czrVuArybcAJ",
+ "metadata": {
+ "id": "czrVuArybcAJ"
+ },
+ "source": [
+ "---\n",
+ "\n",
+ "# Part 7: Test the Agent\n",
+ "\n",
+ "Now let's test our agent with various queries! The agent will:\n",
+ "1. Parse your natural language request\n",
+ "2. Construct appropriate filters from constraints (price, location, booking)\n",
+ "3. Call the `find_rentals` tool with hybrid search\n",
+ "4. Summarize the results\n",
+ "\n",
+ "## Example Queries"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "4v1k1htmwl7",
+ "metadata": {},
+ "source": [
+ "### Test 1: Simple Query with Location Filter\n",
+ "\n",
+ "Let's test a basic query. The agent should:\n",
+ "\n",
+ "- Extract \"Hackney\" as a neighborhood filter\n",
+ "- Use \"inspiring workspace\" as the semantic query\n",
+ "- Return listings that match both criteria"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "yLesZJfDehzK",
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "yLesZJfDehzK",
+ "outputId": "c739b5ee-af81-440f-ceac-4e92f42acd40"
+ },
+ "outputs": [],
+ "source": [
+ "await run_agent(\"I want an inspiring workspace in Hackney\", travel_agent)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "4nzmal7f69m",
+ "metadata": {},
+ "source": [
+ "### Test 2: Complex Query with Multiple Filters\n",
+ "\n",
+ "This query has multiple constraints. The agent should construct a compound filter:\n",
+ "\n",
+ "- `neighborhood`: \"Hackney\"\n",
+ "- `price`: less than 200\n",
+ "- `instant_bookable`: \"t\"\n",
+ "\n",
+ "And use \"creative artist workspace\" as the semantic query."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "CssMRUs5bcAJ",
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "CssMRUs5bcAJ",
+ "outputId": "9d0bad97-bc44-47fe-bee7-ff29ee39b34f"
+ },
+ "outputs": [],
+ "source": [
+ "await run_agent(\n",
+ " \"Find me a creative artist workspace in Hackney under £200 that I can book instantly.\",\n",
+ " travel_agent\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "plgj1sbiyy",
+ "metadata": {},
+ "source": [
+ "### Test 3: Different Phrasing, Same Intent\n",
+ "\n",
+ "This query has the same constraints as Test 2 but with different phrasing. The agent should produce similar results, demonstrating its ability to understand varied natural language expressions of the same intent."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "aEMgLhulelU7",
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "aEMgLhulelU7",
+ "outputId": "d5cdce72-7b19-4e03-d99d-cff4c4f56fb8"
+ },
+ "outputs": [],
+ "source": [
+ "await run_agent(\n",
+ " \"Find me a place in Hackney under £200 that I can book instantly. I want a creative artist vibe.\",\n",
+ " travel_agent\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "crsw87d68",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "\n",
+ "# Part 8: Cleanup\n",
+ "\n",
+ "## Clean Up Resources\n",
+ "\n",
+ "Vector Search 2.0 resources incur costs when active. Run the cell below to delete the Collection and all its data.\n",
+ "\n",
+ "**Note**: Data Objects must be deleted before the Collection can be deleted. The cleanup code handles this automatically by:\n",
+ "1. Querying and deleting all Data Objects in batches\n",
+ "2. Deleting the Collection after all objects are removed\n",
+ "\n",
+ "> **Warning**: This action is irreversible. All data will be permanently deleted. Set `DELETE_COLLECTION = True` to run the cleanup."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "q9z5m359txh",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "DELETE_COLLECTION = False # @param {type:\"boolean\"}\n",
+ "\n",
+ "if DELETE_COLLECTION:\n",
+ " try:\n",
+ " print(f\"Deleting all Data Objects from '{COLLECTION_ID}'...\")\n",
+ " \n",
+ " # Delete all Data Objects first (required before Collection deletion)\n",
+ " deleted_count = 0\n",
+ " while True:\n",
+ " query_request = vectorsearch_v1beta.QueryDataObjectsRequest(\n",
+ " parent=collection_path,\n",
+ " page_size=100,\n",
+ " output_fields=vectorsearch_v1beta.OutputFields(data_fields=[])\n",
+ " )\n",
+ " results = list(search_client.query_data_objects(query_request))\n",
+ " \n",
+ " if not results:\n",
+ " break\n",
+ " \n",
+ " for obj in results:\n",
+ " try:\n",
+ " delete_request = vectorsearch_v1beta.DeleteDataObjectRequest(\n",
+ " name=obj.name\n",
+ " )\n",
+ " data_client.delete_data_object(delete_request)\n",
+ " deleted_count += 1\n",
+ " except Exception as e:\n",
+ " pass\n",
+ " \n",
+ " print(f\" Deleted {deleted_count} data objects...\")\n",
+ " \n",
+ " print(f\"Deleted {deleted_count} total data objects.\")\n",
+ " \n",
+ " # Now delete the Collection\n",
+ " print(f\"Deleting Collection '{COLLECTION_ID}'...\")\n",
+ " request = vectorsearch_v1beta.DeleteCollectionRequest(\n",
+ " name=collection_path\n",
+ " )\n",
+ " operation = admin_client.delete_collection(request=request)\n",
+ " operation.result()\n",
+ " print(f\"Collection '{COLLECTION_ID}' deleted successfully.\")\n",
+ " \n",
+ " except Exception as e:\n",
+ " print(f\"Error during cleanup: {e}\")\n",
+ "else:\n",
+ " print(\"Cleanup skipped. Set DELETE_COLLECTION = True to delete resources.\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "5id1urfcy",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "\n",
+ "# Summary\n",
+ "\n",
+ "In this notebook, you learned how to:\n",
+ "\n",
+ "1. **Set up Vector Search 2.0**: Initialize SDK clients and configure your project\n",
+ "2. **Create a Collection**: Define data and vector schemas with auto-embedding configuration\n",
+ "3. **Ingest Data**: Batch upload Data Objects with automatic embedding generation\n",
+ "4. **Implement Hybrid Search**: Combine semantic and keyword search with RRF ranking\n",
+ "5. **Build an Agent**: Use ADK to create an AI agent that autonomously searches your data\n",
+ "6. **Apply Filters**: Use rich query syntax for metadata filtering\n",
+ "\n",
+ "## Key Takeaways\n",
+ "\n",
+ "| Concept | What You Learned |\n",
+ "|---------|-----------------|\n",
+ "| **Collections** | Schema-enforced containers with data + vector schemas |\n",
+ "| **Auto-Embeddings** | Automatic embedding generation via `vertex_embedding_config` |\n",
+ "| **Hybrid Search** | Combine semantic understanding with keyword precision |\n",
+ "| **RRF Ranking** | Reciprocal Rank Fusion for balanced result merging |\n",
+ "| **Filter Syntax** | `$eq`, `$lt`, `$and`, `$or` for metadata filtering |\n",
+ "\n",
+ "## Next Steps\n",
+ "\n",
+ "- **[Vector Search 2.0 Introduction](https://github.com/GoogleCloudPlatform/generative-ai/blob/main/embeddings/vector-search-2-intro.ipynb)**: Deep dive into all Vector Search 2.0 features\n",
+ "- **[ANN Indexes](https://cloud.google.com/vertex-ai/docs/vector-search-2/indexes/indexes)**: Scale to billions of vectors with production-ready performance\n",
+ "- **[ADK Documentation](https://google.github.io/adk-docs/)**: Learn more about building AI agents\n",
+ "- **[Vector Search 2.0 Documentation](https://cloud.google.com/vertex-ai/docs/vector-search-2/overview)**: Complete API reference"
+ ]
+ }
+ ],
+ "metadata": {
+ "colab": {
+ "provenance": []
+ },
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.12"
+ },
+ "widgets": {
+ "application/vnd.jupyter.widget-state+json": {
+ "2241c908443f4890a6037e08844fbc8f": {
+ "model_module": "@jupyter-widgets/base",
+ "model_module_version": "1.2.0",
+ "model_name": "LayoutModel",
+ "state": {
+ "_model_module": "@jupyter-widgets/base",
+ "_model_module_version": "1.2.0",
+ "_model_name": "LayoutModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/base",
+ "_view_module_version": "1.2.0",
+ "_view_name": "LayoutView",
+ "align_content": null,
+ "align_items": null,
+ "align_self": null,
+ "border": null,
+ "bottom": null,
+ "display": null,
+ "flex": null,
+ "flex_flow": null,
+ "grid_area": null,
+ "grid_auto_columns": null,
+ "grid_auto_flow": null,
+ "grid_auto_rows": null,
+ "grid_column": null,
+ "grid_gap": null,
+ "grid_row": null,
+ "grid_template_areas": null,
+ "grid_template_columns": null,
+ "grid_template_rows": null,
+ "height": null,
+ "justify_content": null,
+ "justify_items": null,
+ "left": null,
+ "margin": null,
+ "max_height": null,
+ "max_width": null,
+ "min_height": null,
+ "min_width": null,
+ "object_fit": null,
+ "object_position": null,
+ "order": null,
+ "overflow": null,
+ "overflow_x": null,
+ "overflow_y": null,
+ "padding": null,
+ "right": null,
+ "top": null,
+ "visibility": null,
+ "width": null
+ }
+ },
+ "2361eea831a74613a6b6f52c6d898a48": {
+ "model_module": "@jupyter-widgets/controls",
+ "model_module_version": "1.5.0",
+ "model_name": "FloatProgressModel",
+ "state": {
+ "_dom_classes": [],
+ "_model_module": "@jupyter-widgets/controls",
+ "_model_module_version": "1.5.0",
+ "_model_name": "FloatProgressModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/controls",
+ "_view_module_version": "1.5.0",
+ "_view_name": "ProgressView",
+ "bar_style": "success",
+ "description": "",
+ "description_tooltip": null,
+ "layout": "IPY_MODEL_db5c36e6ea6d4449be60ecf4a6639651",
+ "max": 20,
+ "min": 0,
+ "orientation": "horizontal",
+ "style": "IPY_MODEL_5298691c16c34017b0659b60d741186e",
+ "value": 20
+ }
+ },
+ "3798006bedeb4618a307c0f109b721ef": {
+ "model_module": "@jupyter-widgets/base",
+ "model_module_version": "1.2.0",
+ "model_name": "LayoutModel",
+ "state": {
+ "_model_module": "@jupyter-widgets/base",
+ "_model_module_version": "1.2.0",
+ "_model_name": "LayoutModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/base",
+ "_view_module_version": "1.2.0",
+ "_view_name": "LayoutView",
+ "align_content": null,
+ "align_items": null,
+ "align_self": null,
+ "border": null,
+ "bottom": null,
+ "display": null,
+ "flex": null,
+ "flex_flow": null,
+ "grid_area": null,
+ "grid_auto_columns": null,
+ "grid_auto_flow": null,
+ "grid_auto_rows": null,
+ "grid_column": null,
+ "grid_gap": null,
+ "grid_row": null,
+ "grid_template_areas": null,
+ "grid_template_columns": null,
+ "grid_template_rows": null,
+ "height": null,
+ "justify_content": null,
+ "justify_items": null,
+ "left": null,
+ "margin": null,
+ "max_height": null,
+ "max_width": null,
+ "min_height": null,
+ "min_width": null,
+ "object_fit": null,
+ "object_position": null,
+ "order": null,
+ "overflow": null,
+ "overflow_x": null,
+ "overflow_y": null,
+ "padding": null,
+ "right": null,
+ "top": null,
+ "visibility": null,
+ "width": null
+ }
+ },
+ "5298691c16c34017b0659b60d741186e": {
+ "model_module": "@jupyter-widgets/controls",
+ "model_module_version": "1.5.0",
+ "model_name": "ProgressStyleModel",
+ "state": {
+ "_model_module": "@jupyter-widgets/controls",
+ "_model_module_version": "1.5.0",
+ "_model_name": "ProgressStyleModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/base",
+ "_view_module_version": "1.2.0",
+ "_view_name": "StyleView",
+ "bar_color": null,
+ "description_width": ""
+ }
+ },
+ "729867ba038a4f2ab2d7e323623e95ef": {
+ "model_module": "@jupyter-widgets/controls",
+ "model_module_version": "1.5.0",
+ "model_name": "HBoxModel",
+ "state": {
+ "_dom_classes": [],
+ "_model_module": "@jupyter-widgets/controls",
+ "_model_module_version": "1.5.0",
+ "_model_name": "HBoxModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/controls",
+ "_view_module_version": "1.5.0",
+ "_view_name": "HBoxView",
+ "box_style": "",
+ "children": [
+ "IPY_MODEL_fc828f9e81b14681aa2c384363d35ba0",
+ "IPY_MODEL_2361eea831a74613a6b6f52c6d898a48",
+ "IPY_MODEL_f4b85d92baaa4ef1bb4c230ecdc76f92"
+ ],
+ "layout": "IPY_MODEL_3798006bedeb4618a307c0f109b721ef"
+ }
+ },
+ "7894bc430f4a47a0a80839e7493907fb": {
+ "model_module": "@jupyter-widgets/controls",
+ "model_module_version": "1.5.0",
+ "model_name": "DescriptionStyleModel",
+ "state": {
+ "_model_module": "@jupyter-widgets/controls",
+ "_model_module_version": "1.5.0",
+ "_model_name": "DescriptionStyleModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/base",
+ "_view_module_version": "1.2.0",
+ "_view_name": "StyleView",
+ "description_width": ""
+ }
+ },
+ "95404dacf88546929a3c7d43445387c2": {
+ "model_module": "@jupyter-widgets/base",
+ "model_module_version": "1.2.0",
+ "model_name": "LayoutModel",
+ "state": {
+ "_model_module": "@jupyter-widgets/base",
+ "_model_module_version": "1.2.0",
+ "_model_name": "LayoutModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/base",
+ "_view_module_version": "1.2.0",
+ "_view_name": "LayoutView",
+ "align_content": null,
+ "align_items": null,
+ "align_self": null,
+ "border": null,
+ "bottom": null,
+ "display": null,
+ "flex": null,
+ "flex_flow": null,
+ "grid_area": null,
+ "grid_auto_columns": null,
+ "grid_auto_flow": null,
+ "grid_auto_rows": null,
+ "grid_column": null,
+ "grid_gap": null,
+ "grid_row": null,
+ "grid_template_areas": null,
+ "grid_template_columns": null,
+ "grid_template_rows": null,
+ "height": null,
+ "justify_content": null,
+ "justify_items": null,
+ "left": null,
+ "margin": null,
+ "max_height": null,
+ "max_width": null,
+ "min_height": null,
+ "min_width": null,
+ "object_fit": null,
+ "object_position": null,
+ "order": null,
+ "overflow": null,
+ "overflow_x": null,
+ "overflow_y": null,
+ "padding": null,
+ "right": null,
+ "top": null,
+ "visibility": null,
+ "width": null
+ }
+ },
+ "9a471edce9c2470ba216420675277f51": {
+ "model_module": "@jupyter-widgets/controls",
+ "model_module_version": "1.5.0",
+ "model_name": "DescriptionStyleModel",
+ "state": {
+ "_model_module": "@jupyter-widgets/controls",
+ "_model_module_version": "1.5.0",
+ "_model_name": "DescriptionStyleModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/base",
+ "_view_module_version": "1.2.0",
+ "_view_name": "StyleView",
+ "description_width": ""
+ }
+ },
+ "db5c36e6ea6d4449be60ecf4a6639651": {
+ "model_module": "@jupyter-widgets/base",
+ "model_module_version": "1.2.0",
+ "model_name": "LayoutModel",
+ "state": {
+ "_model_module": "@jupyter-widgets/base",
+ "_model_module_version": "1.2.0",
+ "_model_name": "LayoutModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/base",
+ "_view_module_version": "1.2.0",
+ "_view_name": "LayoutView",
+ "align_content": null,
+ "align_items": null,
+ "align_self": null,
+ "border": null,
+ "bottom": null,
+ "display": null,
+ "flex": null,
+ "flex_flow": null,
+ "grid_area": null,
+ "grid_auto_columns": null,
+ "grid_auto_flow": null,
+ "grid_auto_rows": null,
+ "grid_column": null,
+ "grid_gap": null,
+ "grid_row": null,
+ "grid_template_areas": null,
+ "grid_template_columns": null,
+ "grid_template_rows": null,
+ "height": null,
+ "justify_content": null,
+ "justify_items": null,
+ "left": null,
+ "margin": null,
+ "max_height": null,
+ "max_width": null,
+ "min_height": null,
+ "min_width": null,
+ "object_fit": null,
+ "object_position": null,
+ "order": null,
+ "overflow": null,
+ "overflow_x": null,
+ "overflow_y": null,
+ "padding": null,
+ "right": null,
+ "top": null,
+ "visibility": null,
+ "width": null
+ }
+ },
+ "f4b85d92baaa4ef1bb4c230ecdc76f92": {
+ "model_module": "@jupyter-widgets/controls",
+ "model_module_version": "1.5.0",
+ "model_name": "HTMLModel",
+ "state": {
+ "_dom_classes": [],
+ "_model_module": "@jupyter-widgets/controls",
+ "_model_module_version": "1.5.0",
+ "_model_name": "HTMLModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/controls",
+ "_view_module_version": "1.5.0",
+ "_view_name": "HTMLView",
+ "description": "",
+ "description_tooltip": null,
+ "layout": "IPY_MODEL_2241c908443f4890a6037e08844fbc8f",
+ "placeholder": "",
+ "style": "IPY_MODEL_9a471edce9c2470ba216420675277f51",
+ "value": " 20/20 [01:13<00:00, 3.67s/it]"
+ }
+ },
+ "fc828f9e81b14681aa2c384363d35ba0": {
+ "model_module": "@jupyter-widgets/controls",
+ "model_module_version": "1.5.0",
+ "model_name": "HTMLModel",
+ "state": {
+ "_dom_classes": [],
+ "_model_module": "@jupyter-widgets/controls",
+ "_model_module_version": "1.5.0",
+ "_model_name": "HTMLModel",
+ "_view_count": null,
+ "_view_module": "@jupyter-widgets/controls",
+ "_view_module_version": "1.5.0",
+ "_view_name": "HTMLView",
+ "description": "",
+ "description_tooltip": null,
+ "layout": "IPY_MODEL_95404dacf88546929a3c7d43445387c2",
+ "placeholder": "",
+ "style": "IPY_MODEL_7894bc430f4a47a0a80839e7493907fb",
+ "value": "100%"
+ }
+ }
+ }
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}