forked from cube-js/cube
Feature/ADBC Server #2
Open
borodark wants to merge 100 commits into master from feature/arrow-ipc-api
…rison Updates test output and messaging to emphasize performance comparison between CubeSQL (with query caching) and standard REST HTTP API, rather than focusing on the PostgreSQL proxy implementation details.

Changes:
- Rename test suite title from 'Arrow IPC' to 'CubeSQL'
- Update all test output to say 'CubeSQL vs REST HTTP API'
- Clarify that we're measuring cache effectiveness vs HTTP performance
- Remove references to 'Arrow IPC' proxy implementation details

This better reflects the user-facing value proposition: CubeSQL with caching provides significant performance improvements over REST API.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Enhances performance tests to measure complete end-to-end timing including client-side data materialization (converting results to usable format).

Changes:
- Track query time, materialization time, and total time separately
- Simulate DataFrame creation (convert to list of dicts)
- Show detailed breakdown in test output
- Measure realistic client-side overhead

Results show materialization overhead is minimal:
- 200 rows: 0ms
- 2K rows: 3ms
- 10K rows: 15ms

Total speedup (including materialization):
- Cache miss → hit: 3.3x faster
- CubeSQL vs REST API: 8.2x average

This provides a more accurate picture of real-world performance gains from the client's perspective.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
…on setup Creates complete documentation suite and test infrastructure for the Arrow IPC query cache feature, enabling easy local verification.

New Documentation:
- ARCHITECTURE.md: Complete technical overview of cache implementation
- GETTING_STARTED.md: 5-minute quick start guide
- LOCAL_VERIFICATION.md: Step-by-step PR verification guide
- README.md: Updated with links to all resources

Test Infrastructure:
- setup_test_data.sh: Automated script to load sample data
- sample_data.sql.gz: 3000 sample orders (240KB compressed)
- Enables anyone to reproduce performance results locally

Changes:
- Moved 19 development MD files to power-of-three-examples/doc/archive/
- Created essential user-facing documentation
- Added sample data for testing
- Documented complete local verification workflow

Users can now:
1. Clone the repo
2. Run ./setup_test_data.sh
3. Start services
4. Run python test_arrow_cache_performance.py
5. Verify 8-15x performance improvement

All documentation cross-references for easy navigation.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Repositions documentation to emphasize CubeSQL's Arrow Native Server as the primary feature, with query caching as an optional optimization.

Changes:
- Update all MDs to lead with 'Arrow Native Server'
- Position cache as optional, not the main story
- Emphasize binary protocol and PostgreSQL compatibility
- Show cache as transparent optimization that can be disabled
- Clarify two protocol options: PostgreSQL wire (4444) + Arrow IPC (4445)

Key messaging changes:
- Before: 'Arrow IPC Query Cache'
- After: 'CubeSQL Arrow Native Server with Optional Cache'

This better reflects the architecture:
1. Arrow Native server (primary feature)
2. Binary protocol efficiency
3. PostgreSQL compatibility
4. Optional query cache (performance boost)

Documentation now shows cache as an additive feature that enhances the base Arrow Native server, not as the core functionality.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
The PostgreSQL wire protocol (port 4444) was already working before this PR. What this PR specifically introduces is:
- the Arrow IPC native protocol (port 4445)
- an optional query result cache
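For orientation, a minimal sketch of the split from the client's side, assuming the ports above; host and credentials are placeholders, and the exact client for port 4445 depends on the recipe configuration (a concrete ADBC fetch is sketched under Data Science Pipelines below):

```python
# Sketch only: host/credentials are assumptions, not taken from the recipe config.
import psycopg2

# Port 4444: the pre-existing PostgreSQL wire protocol endpoint.
# Any stock PostgreSQL client keeps working unchanged.
pg_conn = psycopg2.connect(
    host="localhost", port=4444, user="cube", password="cube", dbname="cube"
)
with pg_conn, pg_conn.cursor() as cur:
    cur.execute("SELECT 1")
    print(cur.fetchone())

# Port 4445: the Arrow IPC native endpoint introduced by this PR. Clients talk to
# it through an ADBC driver and receive Arrow record batches instead of
# row-oriented wire messages (see the Data Science Pipelines sketch below).
```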
… MetaContext::new() Upstream added a second parameter `pre_aggregations: Vec<PreAggregationMeta>` to MetaContext::new(), but the call in transport.rs wasn't updated.

This fix:
- Imports parse_pre_aggregations_from_cubes() function
- Extracts pre-aggregations from cube metadata before creating MetaContext
- Passes pre_aggregations as the 2nd parameter to MetaContext::new()

Matches the implementation in cubesql's cubestore_transport.rs and service.rs.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Check List
ADBC Access to Cube
The Problem: I need Cube to serve data over ADBC
Side effect: reading pre-aggregates from CubeStore while bypassing the cache (the "pessimistic read" scenario in the numbers below)
Some numbers
Speedup increases with result set size because columnar format amortizes overhead.
| Scenario | Notes | Average speedup |
| --- | --- | --- |
| ADBC vs HTTP, cold-starting API server | AKA ludicrous speed | 856.2x |
| ADBC vs HTTP, warmed-up API server | AKA all caches are loaded | 53.6x |
| Reading pre-aggregates from CubeStore, bypassing cache | AKA pessimistic read | 0.3x |
| ADBC vs HTTP over the network, warmed-up API server | Over WiFi, all caches loaded | 11.1x |
| Reading pre-aggregates from CubeStore, bypassing cache, over the network | Over-the-network pessimistic read | 0.9x |
2. Type-Preserving Data Transfer
This isn't just aesthetic—columnar tools perform 2-5x faster with properly typed data.
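To make the claim concrete, here is an illustrative comparison of what the client receives; the column names and values are invented for the example, not taken from this PR's schema:

```python
# Illustrative only: sample data, not this PR's cube schema.
from datetime import datetime
import pyarrow as pa

# What an Arrow-native transport hands the client: typed columns, ready for analytics.
table = pa.table({
    "created_at": pa.array([datetime(2024, 1, 1)], type=pa.timestamp("ms")),
    "amount": pa.array([19.99], type=pa.float64()),
    "order_count": pa.array([42], type=pa.int64()),
})
print(table.schema)
# created_at: timestamp[ms]
# amount: double
# order_count: int64

# What a JSON transport typically delivers: dates as strings, numerics that may be
# stringified, so pandas/polars must re-infer or re-parse types before any real work.
json_row = {"created_at": "2024-01-01T00:00:00.000", "amount": "19.99", "order_count": "42"}
```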
Use Cases
Fact: Cube's REST API caps a single result set at 50,000 rows.
Data Science Pipelines
Get query results directly into pandas/polars without serialization overhead:
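A minimal sketch of that flow, assuming an ADBC-compatible endpoint like the one set up in examples/recipes/arrow-ipc/; the driver choice, URI, credentials, and the query itself are placeholders:

```python
# Sketch, not the recipe's exact code: driver, URI, and query are assumptions.
import adbc_driver_postgresql.dbapi as adbc
import polars as pl

with adbc.connect("postgresql://cube:cube@localhost:4445/cube") as conn:
    cur = conn.cursor()
    cur.execute("SELECT status, SUM(total_amount) AS revenue FROM orders GROUP BY 1")
    arrow_table = cur.fetch_arrow_table()   # results arrive as a typed Arrow table

df_pandas = arrow_table.to_pandas()    # no JSON parsing, no per-row type coercion
df_polars = pl.from_arrow(arrow_table)  # or go straight into polars
```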
Real-Time Dashboards
Reduce query-to-visualization latency for dashboards with large result sets.
Data Engineering
Integrate Cube semantic layer with Arrow-native tools:
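As a sketch of the kind of integration meant here, an Arrow table pulled from Cube can be handed straight to DuckDB or written to Parquet without row-by-row conversion; the table contents and file name below are hypothetical stand-ins for the result fetched in the previous example:

```python
# Sketch: `arrow_table` stands in for the pyarrow.Table fetched from CubeSQL above.
import duckdb
import pyarrow as pa
import pyarrow.parquet as pq

arrow_table = pa.table({"status": ["new", "shipped"], "revenue": [1200.0, 3400.0]})

con = duckdb.connect()
con.register("cube_orders", arrow_table)   # zero-copy view over the Arrow data
top = con.execute(
    "SELECT status, revenue FROM cube_orders ORDER BY revenue DESC LIMIT 10"
).fetch_arrow_table()

pq.write_table(arrow_table, "orders_snapshot.parquet")  # persist for downstream jobs
```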
A complete working example is linked under Live Example below.
Breaking Changes
None. This is a pure addition. Default behavior unchanged.
Checklist
Future Work (Not in This PR)
Batch by 50K and stream, perhaps?
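One hedged sketch of what that could look like from the client side, paging in 50K-row chunks to stay under the result set cap noted above; the driver, URI, and query are placeholders, and whether LIMIT/OFFSET paging is the right mechanism is exactly the open question:

```python
# Hypothetical client-side paging sketch; not part of this PR.
import adbc_driver_postgresql.dbapi as adbc

def stream_batches(uri, base_query, batch=50_000):
    """Yield successive Arrow tables of at most `batch` rows."""
    with adbc.connect(uri) as conn:
        cur = conn.cursor()
        offset = 0
        while True:
            cur.execute(f"{base_query} LIMIT {batch} OFFSET {offset}")
            chunk = cur.fetch_arrow_table()
            if chunk.num_rows == 0:
                break
            yield chunk
            offset += batch

# for chunk in stream_batches("postgresql://cube:cube@localhost:4445/cube",
#                             "SELECT * FROM orders ORDER BY id"):
#     process(chunk)
```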
The Ask
This PR demonstrates measurable performance improvements (2-5x for typical analytics queries) with zero breaking changes and full backward compatibility. The implementation is clean, tested, and documented with working examples in three languages.
Would love to discuss:
The future of data transfer is columnar. Let's bring CubeSQL along for the ride. 🚀
Related Issues: [Reference any relevant issues]
Demo Video: [Optional - link to demo]
Live Example: See examples/recipes/arrow-ipc/ for complete working code