From dd6589dd607e9136eece9d3f73ee70b122e7c3ad Mon Sep 17 00:00:00 2001
From: Drew Robinson
Date: Wed, 31 Dec 2025 10:36:13 +1100
Subject: [PATCH 1/4] Add comprehensive tests for replication, statement
 introspection, and connection features

- el-qvs: Expanded statement_features_test.exs with edge case tests
  - Added tests for all data types (INTEGER, TEXT, BLOB, REAL, NUMERIC)
  - Added tests for type casting, UNION queries, and CASE expressions
  - 35 tests passing
- el-g5l: Created replication_integration_test.exs with 24 test scenarios
  - Tests for frame number tracking (get_frame_number_for_replica)
  - Tests for frame-specific sync (sync_until_frame)
  - Tests for flush pending writes (flush_and_get_frame)
  - Tests for max write tracking (max_write_replication_index)
  - Tests skip when TURSO_DB_URI/TOKEN not configured
- el-i0v: Expanded connection_features_test.exs with functional tests
  - Added 5 new reset connection tests covering reuse, state clearing, etc.
  - Added 5 new interrupt tests covering multiple calls, isolation, transactions
  - 17 tests passing
---
 .beads/issues.jsonl                   |   6 +-
 test/connection_features_test.exs     | 316 +++++++++++++++++
 test/replication_integration_test.exs | 492 ++++++++++++++++++++++++++
 test/statement_features_test.exs      | 132 ++++++-
 4 files changed, 942 insertions(+), 4 deletions(-)
 create mode 100644 test/replication_integration_test.exs

diff --git a/.beads/issues.jsonl b/.beads/issues.jsonl
index c42c4ed..2168db1 100644
--- a/.beads/issues.jsonl
+++ b/.beads/issues.jsonl
@@ -15,15 +15,15 @@
 {"id":"el-e42","title":"Add Performance Benchmark Tests","description":"Create comprehensive performance benchmarks to track ecto_libsql performance and identify bottlenecks.\n\n**Context**: No performance benchmarks exist. Need to establish baselines and track performance across versions. Critical for validating performance improvements (like statement reset fix).\n\n**Benchmark Categories**:\n\n**1. Prepared Statement Performance**:\n```elixir\n# Measure impact of statement re-preparation bug\nbenchmark \"prepared statement execution\" do\n stmt = prepare(\"INSERT INTO bench VALUES (?, ?)\")\n \n # Before fix: ~30-50% slower\n # After fix: baseline\n Benchee.run(%{\n \"100 executions\" =\u003e fn -\u003e \n for i \u003c- 1..100, do: execute(stmt, [i, \"data\"])\n end\n })\nend\n```\n\n**2. Cursor Streaming Memory**:\n```elixir\nbenchmark \"cursor memory usage\" do\n # Current: Loads all into memory\n # After streaming fix: Constant memory\n \n cursor = declare_cursor(\"SELECT * FROM large_table\")\n \n :erlang.garbage_collect()\n {memory_before, _} = :erlang.process_info(self(), :memory)\n \n Enum.take(cursor, 100)\n \n {memory_after, _} = :erlang.process_info(self(), :memory)\n memory_used = memory_after - memory_before\n \n # Assert memory \u003c 10MB for 1M row table\n assert memory_used \u003c 10_000_000\nend\n```\n\n**3. Concurrent Connections**:\n```elixir\nbenchmark \"concurrent connections\" do\n Benchee.run(%{\n \"10 connections\" =\u003e fn -\u003e parallel_queries(10) end,\n \"50 connections\" =\u003e fn -\u003e parallel_queries(50) end,\n \"100 connections\" =\u003e fn -\u003e parallel_queries(100) end,\n })\nend\n```\n\n**4. Transaction Throughput**:\n```elixir\nbenchmark \"transaction throughput\" do\n Benchee.run(%{\n \"1000 transactions/sec\" =\u003e fn -\u003e\n for i \u003c- 1..1000 do\n Repo.transaction(fn -\u003e\n Repo.query(\"INSERT INTO bench VALUES (?)\", [i])\n end)\n end\n end\n })\nend\n```\n\n**5. 
Batch Operations**:\n```elixir\nbenchmark \"batch operations\" do\n queries = for i \u003c- 1..1000, do: \"INSERT INTO bench VALUES (\\#{i})\"\n \n Benchee.run(%{\n \"manual batch\" =\u003e fn -\u003e execute_batch(queries) end,\n \"native batch\" =\u003e fn -\u003e execute_batch_native(queries) end,\n \"transactional batch\" =\u003e fn -\u003e execute_transactional_batch(queries) end,\n })\nend\n```\n\n**6. Statement Cache Performance**:\n```elixir\nbenchmark \"statement cache\" do\n Benchee.run(%{\n \"1000 unique statements\" =\u003e fn -\u003e\n for i \u003c- 1..1000 do\n prepare(\"SELECT * FROM bench WHERE id = \\#{i}\")\n end\n end\n })\nend\n```\n\n**7. Replication Sync Performance**:\n```elixir\nbenchmark \"replica sync\" do\n # Write to primary\n for i \u003c- 1..10000, do: insert_on_primary(i)\n \n # Measure sync time\n Benchee.run(%{\n \"sync 10K changes\" =\u003e fn -\u003e \n sync(replica)\n end\n })\nend\n```\n\n**Implementation**:\n\n1. **Add benchee dependency** (mix.exs):\n ```elixir\n {:benchee, \"~\u003e 1.3\", only: :dev}\n {:benchee_html, \"~\u003e 1.0\", only: :dev}\n ```\n\n2. **Create benchmark files**:\n - benchmarks/prepared_statements_bench.exs\n - benchmarks/cursor_streaming_bench.exs\n - benchmarks/concurrent_connections_bench.exs\n - benchmarks/transactions_bench.exs\n - benchmarks/batch_operations_bench.exs\n - benchmarks/statement_cache_bench.exs\n - benchmarks/replication_bench.exs\n\n3. **Add benchmark runner** (mix.exs):\n ```elixir\n def cli do\n [\n aliases: [\n bench: \"run benchmarks/**/*_bench.exs\"\n ]\n ]\n end\n ```\n\n4. **CI Integration**:\n - Run benchmarks on PRs\n - Track performance over time\n - Alert on regression \u003e 20%\n\n**Baseline Targets** (to establish):\n- Prepared statement execution: X ops/sec\n- Cursor streaming: Y MB memory for Z rows\n- Transaction throughput: 1000+ txn/sec\n- Concurrent connections: 100 connections\n- Batch operations: Native 20-30% faster than manual\n\n**Files**:\n- mix.exs (add benchee dependency)\n- benchmarks/*.exs (benchmark files)\n- .github/workflows/benchmarks.yml (CI integration)\n- PERFORMANCE.md (document baselines and results)\n\n**Acceptance Criteria**:\n- [ ] Benchee dependency added\n- [ ] 7 benchmark categories implemented\n- [ ] Benchmarks run via mix bench\n- [ ] HTML reports generated\n- [ ] Baselines documented in PERFORMANCE.md\n- [ ] CI runs benchmarks on PRs\n- [ ] Regression alerts configured\n\n**Test Requirements**:\n```bash\n# Run all benchmarks\nmix bench\n\n# Run specific benchmark\nmix run benchmarks/prepared_statements_bench.exs\n\n# Generate HTML report\nmix run benchmarks/prepared_statements_bench.exs --format html\n```\n\n**Benefits**:\n- Track performance across versions\n- Validate performance improvements\n- Identify bottlenecks\n- Catch regressions early\n- Document performance characteristics\n\n**References**:\n- FEATURE_CHECKLIST.md section \"Test Coverage Priorities\" item 6\n- LIBSQL_FEATURE_COMPARISON.md section \"Performance and Stress Tests\"\n\n**Dependencies**:\n- Validates fixes for el-2ry (statement performance bug)\n- Validates fixes for el-aob (streaming cursors)\n\n**Priority**: P3 - Nice to have, tracks quality over time\n**Effort**: 2-3 days","status":"open","priority":3,"issue_type":"task","created_at":"2025-12-30T17:46:14.715332+11:00","created_by":"drew","updated_at":"2025-12-30T17:46:14.715332+11:00"} {"id":"el-ffc","title":"EXPLAIN Query Support","description":"Not implemented in ecto_libsql. 
libSQL 3.45.1 fully supports EXPLAIN and EXPLAIN QUERY PLAN for query optimiser insight.\n\nDesired API:\n query = from u in User, where: u.age \u003e 18\n {:ok, plan} = Repo.explain(query)\n # Or: Ecto.Adapters.SQL.explain(Repo, :all, query)\n\nPRIORITY: Recommended as #3 in implementation order - quick win for debugging.\n\nEffort: 2-3 days.","status":"open","priority":2,"issue_type":"feature","created_at":"2025-12-30T17:35:52.299542+11:00","created_by":"drew","updated_at":"2025-12-30T17:43:32.763016+11:00"}
 {"id":"el-fpi","title":"Fix binary data round-trip property test failure for single null byte","description":"## Problem\n\nThe property test for binary data handling is failing when the generated binary is a single null byte (\u003c\u003c0\u003e\u003e).\n\n## Failure Details\n\n\n\n**File**: test/fuzz_test.exs:736\n**Test**: property binary data handling round-trips binary data correctly\n\n## Root Cause\n\nWhen a single null byte (\u003c\u003c0\u003e\u003e) is stored in the database as a BLOB and retrieved, it's being returned as an empty string (\"\") instead of the original binary.\n\nThis suggests a potential issue with:\n1. Binary encoding/decoding in the Rust NIF layer (decode.rs)\n2. Type conversion in the Elixir loaders/dumpers\n3. Handling of edge case binaries (single null byte, empty blobs)\n\n## Impact\n\n- Property-based test failures indicate the binary data handling isn't robust for all valid binary inputs\n- Applications storing binary data with null bytes may experience data corruption\n- Affects blob storage reliability\n\n## Reproduction\n\n\n\n## Investigation Areas\n\n1. **native/ecto_libsql/src/decode.rs** - Check Value::Blob conversion\n2. **lib/ecto/adapters/libsql.ex** - Check binary loaders/dumpers\n3. **native/ecto_libsql/src/query.rs** - Verify blob retrieval logic\n4. **Test edge cases**: , , , \n\n## Expected Behavior\n\nAll binaries (including a single null byte) should round-trip correctly:\n- Store → Retrieve \n- Store → Retrieve \n- Store → Retrieve \n\n## Related Code\n\n- test/fuzz_test.exs:736-753\n- native/ecto_libsql/src/decode.rs (blob handling)\n- lib/ecto/adapters/libsql.ex (type loaders/dumpers)","status":"in_progress","priority":1,"issue_type":"bug","created_at":"2025-12-30T18:05:52.838065+11:00","created_by":"drew","updated_at":"2025-12-30T21:59:49.842445+11:00"}
-{"id":"el-g5l","title":"Replication Integration Tests","description":"Add comprehensive integration tests for replication features.\n\n**Context**: Replication features are implemented but have minimal test coverage (marked as ⚠️ in FEATURE_CHECKLIST.md).\n\n**Required Tests** (test/replication_integration_test.exs):\n- sync_until() - frame-specific sync\n- flush_replicator() - force pending writes \n- max_write_replication_index() - write tracking\n- replication_index() - current frame tracking\n\n**Test Scenarios**:\n1. Monitor replication lag via frame numbers\n2. Sync to specific frame number\n3. Flush pending writes and verify frame number\n4. 
Track max write frame across operations\n\n**Files**:\n- NEW: test/replication_integration_test.exs\n- Reference: FEATURE_CHECKLIST.md line 212-242\n- Reference: LIBSQL_FEATURE_MATRIX_FINAL.md section 5\n\n**Acceptance Criteria**:\n- [ ] All 4 replication NIFs have comprehensive tests\n- [ ] Tests cover happy path and edge cases\n- [ ] Tests verify frame number progression\n- [ ] Tests validate sync behaviour\n\n**Priority**: P1 - Critical for Turso use cases\n**Effort**: 2-3 days","status":"open","priority":1,"issue_type":"task","created_at":"2025-12-30T17:42:37.162327+11:00","created_by":"drew","updated_at":"2025-12-30T17:42:37.162327+11:00"} +{"id":"el-g5l","title":"Replication Integration Tests","description":"Add comprehensive integration tests for replication features.\n\n**Context**: Replication features are implemented but have minimal test coverage (marked as ⚠️ in FEATURE_CHECKLIST.md).\n\n**Required Tests** (test/replication_integration_test.exs):\n- sync_until() - frame-specific sync\n- flush_replicator() - force pending writes \n- max_write_replication_index() - write tracking\n- replication_index() - current frame tracking\n\n**Test Scenarios**:\n1. Monitor replication lag via frame numbers\n2. Sync to specific frame number\n3. Flush pending writes and verify frame number\n4. Track max write frame across operations\n\n**Files**:\n- NEW: test/replication_integration_test.exs\n- Reference: FEATURE_CHECKLIST.md line 212-242\n- Reference: LIBSQL_FEATURE_MATRIX_FINAL.md section 5\n\n**Acceptance Criteria**:\n- [ ] All 4 replication NIFs have comprehensive tests\n- [ ] Tests cover happy path and edge cases\n- [ ] Tests verify frame number progression\n- [ ] Tests validate sync behaviour\n\n**Priority**: P1 - Critical for Turso use cases\n**Effort**: 2-3 days","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-30T17:42:37.162327+11:00","created_by":"drew","updated_at":"2025-12-31T10:35:01.469259+11:00","closed_at":"2025-12-31T10:35:01.469259+11:00","close_reason":"Closed"} {"id":"el-h48","title":"Table-Valued Functions (via Extensions)","description":"Not implemented. Generate rows from functions, series generation, CSV parsing. Examples: generate_series(1, 10), csv_table(path, schema). 
Effort: 4-5 days (if building custom extension).","status":"open","priority":4,"issue_type":"feature","created_at":"2025-12-30T17:35:53.485837+11:00","created_by":"drew","updated_at":"2025-12-30T17:36:47.67121+11:00"} -{"id":"el-i0v","title":"Connection Reset and Interrupt Functional Tests","description":"Add comprehensive functional tests for connection reset and interrupt features.\n\n**Context**: reset_connection and interrupt_connection are implemented but only have basic tests (marked as ⚠️ in FEATURE_CHECKLIST.md).\n\n**Required Tests** (expand test/connection_features_test.exs or create new):\n\n**Reset Tests**:\n- Reset maintains database connection\n- Reset allows connection reuse in pool\n- Reset doesn't close active transactions\n- Reset clears temporary state\n- Reset multiple times in succession\n\n**Interrupt Tests**:\n- Interrupt cancels long-running query\n- Interrupt allows query restart after cancellation\n- Interrupt doesn't affect other connections\n- Interrupt during transaction behaviour\n- Concurrent interrupts on different connections\n\n**Files**:\n- EXPAND/NEW: test/connection_features_test.exs\n- Reference: FEATURE_CHECKLIST.md line 267-287\n- Reference: LIBSQL_FEATURE_COMPARISON.md section 3\n\n**Test Examples**:\n```elixir\ntest \"reset maintains database connection\" do\n {:ok, state} = connect()\n {:ok, state} = reset_connection(state)\n # Verify connection still works\n {:ok, _, _, _} = query(state, \"SELECT 1\")\nend\n\ntest \"interrupt cancels long-running query\" do\n {:ok, state} = connect()\n # Start long query in background\n task = Task.async(fn -\u003e query(state, \"SELECT sleep(10)\") end)\n # Interrupt after 100ms\n Process.sleep(100)\n interrupt_connection(state)\n # Verify query was cancelled\n assert {:error, _} = Task.await(task)\nend\n```\n\n**Acceptance Criteria**:\n- [ ] Reset functional tests comprehensive\n- [ ] Interrupt functional tests comprehensive\n- [ ] Tests verify connection state after reset/interrupt\n- [ ] Tests verify connection pool behaviour\n- [ ] Tests cover edge cases and error conditions\n\n**Priority**: P1 - Important for production robustness\n**Effort**: 2 days","status":"open","priority":1,"issue_type":"task","created_at":"2025-12-30T17:43:00.235086+11:00","created_by":"drew","updated_at":"2025-12-30T17:43:00.235086+11:00"} +{"id":"el-i0v","title":"Connection Reset and Interrupt Functional Tests","description":"Add comprehensive functional tests for connection reset and interrupt features.\n\n**Context**: reset_connection and interrupt_connection are implemented but only have basic tests (marked as ⚠️ in FEATURE_CHECKLIST.md).\n\n**Required Tests** (expand test/connection_features_test.exs or create new):\n\n**Reset Tests**:\n- Reset maintains database connection\n- Reset allows connection reuse in pool\n- Reset doesn't close active transactions\n- Reset clears temporary state\n- Reset multiple times in succession\n\n**Interrupt Tests**:\n- Interrupt cancels long-running query\n- Interrupt allows query restart after cancellation\n- Interrupt doesn't affect other connections\n- Interrupt during transaction behaviour\n- Concurrent interrupts on different connections\n\n**Files**:\n- EXPAND/NEW: test/connection_features_test.exs\n- Reference: FEATURE_CHECKLIST.md line 267-287\n- Reference: LIBSQL_FEATURE_COMPARISON.md section 3\n\n**Test Examples**:\n```elixir\ntest \"reset maintains database connection\" do\n {:ok, state} = connect()\n {:ok, state} = reset_connection(state)\n # Verify connection still works\n {:ok, _, 
_, _} = query(state, \"SELECT 1\")\nend\n\ntest \"interrupt cancels long-running query\" do\n {:ok, state} = connect()\n # Start long query in background\n task = Task.async(fn -\u003e query(state, \"SELECT sleep(10)\") end)\n # Interrupt after 100ms\n Process.sleep(100)\n interrupt_connection(state)\n # Verify query was cancelled\n assert {:error, _} = Task.await(task)\nend\n```\n\n**Acceptance Criteria**:\n- [ ] Reset functional tests comprehensive\n- [ ] Interrupt functional tests comprehensive\n- [ ] Tests verify connection state after reset/interrupt\n- [ ] Tests verify connection pool behaviour\n- [ ] Tests cover edge cases and error conditions\n\n**Priority**: P1 - Important for production robustness\n**Effort**: 2 days","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-30T17:43:00.235086+11:00","created_by":"drew","updated_at":"2025-12-31T10:36:04.379925+11:00","closed_at":"2025-12-31T10:36:04.379925+11:00","close_reason":"Closed"} {"id":"el-ik6","title":"Generated/Computed Columns","description":"Not supported in migrations. SQLite 3.31+ (2020), libSQL 3.45.1 fully supports GENERATED ALWAYS AS syntax with both STORED and virtual variants.\n\nDesired API:\n create table(:users) do\n add :first_name, :string\n add :last_name, :string\n add :full_name, :string, generated: \"first_name || ' ' || last_name\", stored: true\n end\n\nPRIORITY: Recommended as #4 in implementation order.\n\nEffort: 3-4 days.","status":"open","priority":2,"issue_type":"feature","created_at":"2025-12-30T17:35:51.391724+11:00","created_by":"drew","updated_at":"2025-12-30T17:43:18.271124+11:00"} {"id":"el-ndz","title":"UPSERT Support (INSERT ... ON CONFLICT)","description":"INSERT ... ON CONFLICT not implemented in ecto_libsql. SQLite 3.24+ (2018), libSQL 3.45.1 fully supports all conflict resolution modes: INSERT OR IGNORE, INSERT OR REPLACE, REPLACE, INSERT OR FAIL, INSERT OR ABORT, INSERT OR ROLLBACK.\n\nDesired API:\n Repo.insert(changeset, on_conflict: :replace_all, conflict_target: [:email])\n Repo.insert(changeset, on_conflict: {:replace, [:name, :updated_at]}, conflict_target: [:email])\n\nPRIORITY: Recommended as #2 in implementation order - common pattern, high value.\n\nEffort: 4-5 days.","status":"open","priority":1,"issue_type":"feature","created_at":"2025-12-30T17:35:51.230695+11:00","created_by":"drew","updated_at":"2025-12-30T17:43:18.142535+11:00"} {"id":"el-nqb","title":"Implement Named Parameters Support","description":"Add support for named parameters in queries (:name, @name, $name syntax).\n\n**Context**: LibSQL supports named parameters but ecto_libsql only supports positional (?). This is marked as high priority in FEATURE_CHECKLIST.md.\n\n**Current Limitation**:\n```elixir\n# Only positional parameters work:\nquery(\"INSERT INTO users VALUES (?, ?)\", [1, \"Alice\"])\n\n# Named parameters don't work:\nquery(\"INSERT INTO users (id, name) VALUES (:id, :name)\", %{id: 1, name: \"Alice\"})\n```\n\n**LibSQL Support**:\n- :name syntax (standard SQLite)\n- @name syntax (alternative)\n- $name syntax (PostgreSQL-like)\n\n**Benefits**:\n- Better developer experience\n- Self-documenting queries\n- Order-independent parameters\n- Matches PostgreSQL Ecto conventions\n\n**Implementation Required**:\n\n1. **Add parameter_name() NIF**:\n - Implement in native/ecto_libsql/src/statement.rs\n - Expose parameter_name(stmt_id, index) -\u003e {:ok, name} | {:error, reason}\n\n2. 
**Update query parameter handling**:\n - Accept map parameters: %{id: 1, name: \"Alice\"}\n - Convert named params to positional based on statement introspection\n - Maintain backwards compatibility with positional params\n\n3. **Update Ecto.Adapters.LibSql.Connection**:\n - Generate SQL with named parameters for better readability\n - Convert Ecto query bindings to named params\n\n**Files**:\n- native/ecto_libsql/src/statement.rs (add parameter_name NIF)\n- lib/ecto_libsql/native.ex (wrapper for parameter_name)\n- lib/ecto_libsql.ex (update parameter handling)\n- lib/ecto/adapters/libsql/connection.ex (generate named params)\n- test/statement_features_test.exs (tests marked :skip)\n\n**Existing Tests**:\nTests already exist but are marked :skip (mentioned in FEATURE_CHECKLIST.md line 1)\n\n**Acceptance Criteria**:\n- [ ] parameter_name() NIF implemented\n- [ ] Queries accept map parameters\n- [ ] All 3 syntaxes work (:name, @name, $name)\n- [ ] Backwards compatible with positional params\n- [ ] Unskip and pass existing tests\n- [ ] Add comprehensive named parameter tests\n\n**Examples**:\n```elixir\n# After implementation:\nRepo.query(\"INSERT INTO users (id, name) VALUES (:id, :name)\", %{id: 1, name: \"Alice\"})\nRepo.query(\"UPDATE users SET name = @name WHERE id = @id\", %{id: 1, name: \"Bob\"})\n```\n\n**References**:\n- FEATURE_CHECKLIST.md section \"High Priority (Should Implement)\" item 1\n- Test file with :skip markers\n\n**Priority**: P1 - High priority, improves developer experience\n**Effort**: 2-3 days","status":"open","priority":1,"issue_type":"feature","created_at":"2025-12-30T17:43:47.792238+11:00","created_by":"drew","updated_at":"2025-12-30T17:43:47.792238+11:00"} {"id":"el-o8r","title":"Partial Index Support in Migrations","description":"SQLite supports but Ecto DSL doesn't. Index only subset of rows, smaller/faster indexes, better for conditional uniqueness. Desired API: create index(:users, [:email], unique: true, where: \"deleted_at IS NULL\"). Effort: 2-3 days.","status":"open","priority":2,"issue_type":"feature","created_at":"2025-12-30T17:35:52.699216+11:00","created_by":"drew","updated_at":"2025-12-30T17:36:47.021007+11:00"} {"id":"el-qjf","title":"ANALYZE Statistics Collection","description":"Not exposed. Better query planning, automatic index selection, performance optimisation. Desired API: EctoLibSql.Native.analyze(state), EctoLibSql.Native.analyze_table(state, \"users\"), and config auto_analyze: true for post-migration. 
Effort: 2 days.","status":"open","priority":4,"issue_type":"feature","created_at":"2025-12-30T17:35:52.489236+11:00","created_by":"drew","updated_at":"2025-12-30T17:36:46.862645+11:00"} -{"id":"el-qvs","title":"Statement Introspection Edge Case Tests","description":"Expand statement introspection tests to cover edge cases and complex scenarios.\n\n**Context**: Statement introspection features (parameter_count, column_count, column_name) are implemented but only have basic happy-path tests (marked as ⚠️ in FEATURE_CHECKLIST.md).\n\n**Required Tests** (expand test/statement_features_test.exs):\n- Parameter count with 0 parameters\n- Parameter count with many parameters (\u003e10)\n- Parameter count with duplicate parameters\n- Column count for SELECT *\n- Column count for complex JOINs with aliases\n- Column count for aggregate functions\n- Column names with AS aliases\n- Column names for expressions and computed columns\n- Column names for all types (INTEGER, TEXT, BLOB, REAL)\n\n**Files**:\n- EXPAND: test/statement_features_test.exs (or create new file)\n- Reference: FEATURE_CHECKLIST.md line 245-264\n- Reference: LIBSQL_FEATURE_COMPARISON.md section 2\n\n**Test Examples**:\n```elixir\n# Edge case: No parameters\nstmt = prepare(\"SELECT * FROM users\")\nassert parameter_count(stmt) == 0\n\n# Edge case: Many parameters\nstmt = prepare(\"INSERT INTO users VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)\")\nassert parameter_count(stmt) == 10\n\n# Edge case: SELECT * column count\nstmt = prepare(\"SELECT * FROM users\")\nassert column_count(stmt) == actual_column_count\n\n# Edge case: Complex JOIN\nstmt = prepare(\"SELECT u.id, p.name AS profile_name FROM users u JOIN profiles p ON u.id = p.user_id\")\nassert column_name(stmt, 1) == \"profile_name\"\n```\n\n**Acceptance Criteria**:\n- [ ] All edge cases tested\n- [ ] Tests verify correct counts and names\n- [ ] Tests cover complex queries (JOINs, aggregates, expressions)\n- [ ] Tests validate column name aliases\n\n**Priority**: P1 - Important for tooling/debugging\n**Effort**: 1-2 days","status":"open","priority":1,"issue_type":"task","created_at":"2025-12-30T17:42:49.190861+11:00","created_by":"drew","updated_at":"2025-12-30T17:42:49.190861+11:00"} +{"id":"el-qvs","title":"Statement Introspection Edge Case Tests","description":"Expand statement introspection tests to cover edge cases and complex scenarios.\n\n**Context**: Statement introspection features (parameter_count, column_count, column_name) are implemented but only have basic happy-path tests (marked as ⚠️ in FEATURE_CHECKLIST.md).\n\n**Required Tests** (expand test/statement_features_test.exs):\n- Parameter count with 0 parameters\n- Parameter count with many parameters (\u003e10)\n- Parameter count with duplicate parameters\n- Column count for SELECT *\n- Column count for complex JOINs with aliases\n- Column count for aggregate functions\n- Column names with AS aliases\n- Column names for expressions and computed columns\n- Column names for all types (INTEGER, TEXT, BLOB, REAL)\n\n**Files**:\n- EXPAND: test/statement_features_test.exs (or create new file)\n- Reference: FEATURE_CHECKLIST.md line 245-264\n- Reference: LIBSQL_FEATURE_COMPARISON.md section 2\n\n**Test Examples**:\n```elixir\n# Edge case: No parameters\nstmt = prepare(\"SELECT * FROM users\")\nassert parameter_count(stmt) == 0\n\n# Edge case: Many parameters\nstmt = prepare(\"INSERT INTO users VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)\")\nassert parameter_count(stmt) == 10\n\n# Edge case: SELECT * column count\nstmt = 
prepare(\"SELECT * FROM users\")\nassert column_count(stmt) == actual_column_count\n\n# Edge case: Complex JOIN\nstmt = prepare(\"SELECT u.id, p.name AS profile_name FROM users u JOIN profiles p ON u.id = p.user_id\")\nassert column_name(stmt, 1) == \"profile_name\"\n```\n\n**Acceptance Criteria**:\n- [ ] All edge cases tested\n- [ ] Tests verify correct counts and names\n- [ ] Tests cover complex queries (JOINs, aggregates, expressions)\n- [ ] Tests validate column name aliases\n\n**Priority**: P1 - Important for tooling/debugging\n**Effort**: 1-2 days","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-30T17:42:49.190861+11:00","created_by":"drew","updated_at":"2025-12-31T10:33:24.47915+11:00","closed_at":"2025-12-31T10:33:24.47915+11:00","close_reason":"Closed"} {"id":"el-vnu","title":"Expression Indexes","description":"SQLite supports but awkward in Ecto. Index computed values, case-insensitive searches, JSON field indexing. Desired API: create index(:users, [], expression: \"LOWER(email)\", unique: true) or via fragment. Effort: 3 days.","status":"open","priority":3,"issue_type":"feature","created_at":"2025-12-30T17:35:52.893501+11:00","created_by":"drew","updated_at":"2025-12-30T17:36:47.184024+11:00"} {"id":"el-wee","title":"Window Functions Query Helpers","description":"libSQL 3.45.1 has full window function support: OVER, PARTITION BY, ORDER BY, frame specifications (ROWS BETWEEN, RANGE BETWEEN). Currently works via fragments but could benefit from dedicated query helpers.\n\nDesired API:\n from u in User,\n select: %{\n name: u.name,\n running_total: over(sum(u.amount), partition_by: u.category, order_by: u.date)\n }\n\nEffort: 4-5 days.","status":"open","priority":3,"issue_type":"feature","created_at":"2025-12-30T17:43:58.330639+11:00","created_by":"drew","updated_at":"2025-12-30T17:43:58.330639+11:00"} {"id":"el-xih","title":"RETURNING Enhancement for Batch Operations","description":"Works for single operations, not batches. 
libSQL 3.45.1 supports RETURNING clause on INSERT/UPDATE/DELETE.\n\nDesired API:\n {count, rows} = Repo.insert_all(User, users, returning: [:id, :inserted_at])\n # Returns all inserted rows with IDs\n\nPRIORITY: Recommended as #9 in implementation order.\n\nEffort: 3-4 days.","status":"open","priority":2,"issue_type":"feature","created_at":"2025-12-30T17:35:53.70112+11:00","created_by":"drew","updated_at":"2025-12-30T17:43:32.892591+11:00"} diff --git a/test/connection_features_test.exs b/test/connection_features_test.exs index febe252..0b6295b 100644 --- a/test/connection_features_test.exs +++ b/test/connection_features_test.exs @@ -105,6 +105,177 @@ defmodule EctoLibSql.ConnectionFeaturesTest do EctoLibSql.disconnect([], state) end + + test "reset maintains database connection", %{database: database} do + {:ok, state} = EctoLibSql.connect(database: database) + + # Create table and insert data + {:ok, _query, _result, state} = + EctoLibSql.handle_execute( + "CREATE TABLE reset_data (id INTEGER PRIMARY KEY, value TEXT)", + [], + [], + state + ) + + {:ok, _query, _result, state} = + EctoLibSql.handle_execute( + "INSERT INTO reset_data (value) VALUES (?)", + ["test"], + [], + state + ) + + # Reset connection + assert :ok = EctoLibSql.Native.reset(state) + + # Data should still be there (reset doesn't clear database) + {:ok, _query, result, _state} = + EctoLibSql.handle_execute( + "SELECT value FROM reset_data WHERE id = ?", + [1], + [], + state + ) + + assert result.rows == [["test"]] + + EctoLibSql.disconnect([], state) + end + + test "reset works with prepared statements", %{database: database} do + {:ok, state} = EctoLibSql.connect(database: database) + + {:ok, _query, _result, state} = + EctoLibSql.handle_execute( + "CREATE TABLE reset_stmts (id INTEGER PRIMARY KEY, value TEXT)", + [], + [], + state + ) + + # Prepare a statement + {:ok, stmt_id} = + EctoLibSql.Native.prepare(state, "INSERT INTO reset_stmts (value) VALUES (?)") + + # Execute once + {:ok, _} = + EctoLibSql.Native.execute_stmt( + state, + stmt_id, + "INSERT INTO reset_stmts (value) VALUES (?)", + ["test1"] + ) + + # Reset clears connection state but leaves prepared statement handle valid + assert :ok = EctoLibSql.Native.reset(state) + + # Statement should still work after reset + {:ok, _} = + EctoLibSql.Native.execute_stmt( + state, + stmt_id, + "INSERT INTO reset_stmts (value) VALUES (?)", + ["test2"] + ) + + # Close statement + EctoLibSql.Native.close_stmt(stmt_id) + + EctoLibSql.disconnect([], state) + end + + test "reset multiple times in succession works", %{database: database} do + {:ok, state} = EctoLibSql.connect(database: database) + + {:ok, _query, _result, state} = + EctoLibSql.handle_execute( + "CREATE TABLE reset_multi (id INTEGER PRIMARY KEY)", + [], + [], + state + ) + + # Reset multiple times + final_state = + Enum.reduce(1..5, state, fn _, acc_state -> + assert :ok = EctoLibSql.Native.reset(acc_state) + + # Each reset should work and connection should remain valid + {:ok, _query, result, new_state} = + EctoLibSql.handle_execute( + "SELECT COUNT(*) FROM reset_multi", + [], + [], + acc_state + ) + + assert result.rows == [[0]] + new_state + end) + + EctoLibSql.disconnect([], final_state) + end + + test "reset allows connection reuse in pooled scenario", %{database: database} do + # Simulate connection pool behaviour + connections = + Enum.map(1..3, fn _ -> + {:ok, state} = EctoLibSql.connect(database: database) + state + end) + + # Reset each connection + Enum.each(connections, fn state -> + assert :ok = 
EctoLibSql.Native.reset(state)
+
+        # Each connection should work after reset
+        {:ok, _query, result, _state} =
+          EctoLibSql.handle_execute("SELECT 1", [], [], state)
+
+        assert result.rows == [[1]]
+
+        EctoLibSql.disconnect([], state)
+      end)
+    end
+
+    test "reset leaves persistent data intact", %{database: database} do
+      {:ok, state} = EctoLibSql.connect(database: database)
+
+      # Create regular table
+      {:ok, _query, _result, state} =
+        EctoLibSql.handle_execute(
+          "CREATE TABLE persist_reset (id INTEGER PRIMARY KEY, value TEXT)",
+          [],
+          [],
+          state
+        )
+
+      # Insert data
+      {:ok, _query, _result, state} =
+        EctoLibSql.handle_execute(
+          "INSERT INTO persist_reset VALUES (1, 'test')",
+          [],
+          [],
+          state
+        )
+
+      # Reset connection
+      assert :ok = EctoLibSql.Native.reset(state)
+
+      # Data should still be there
+      {:ok, _query, result, _state} =
+        EctoLibSql.handle_execute(
+          "SELECT * FROM persist_reset",
+          [],
+          [],
+          state
+        )
+
+      assert result.rows == [[1, "test"]]
+
+      EctoLibSql.disconnect([], state)
+    end
   end
 
   # ============================================================================
@@ -126,6 +297,151 @@ defmodule EctoLibSql.ConnectionFeaturesTest do
 
       EctoLibSql.disconnect([], state)
     end
+
+    test "interrupt multiple times doesn't affect connection", %{database: database} do
+      {:ok, state} = EctoLibSql.connect(database: database)
+
+      # Interrupt multiple times
+      for _ <- 1..5 do
+        assert :ok = EctoLibSql.Native.interrupt(state)
+      end
+
+      # Connection should still work
+      {:ok, _query, result, _state} =
+        EctoLibSql.handle_execute("SELECT 1", [], [], state)
+
+      assert result.rows == [[1]]
+
+      EctoLibSql.disconnect([], state)
+    end
+
+    test "interrupt doesn't affect other connections", %{database: database} do
+      # Create two connections
+      {:ok, state1} = EctoLibSql.connect(database: database)
+      {:ok, state2} = EctoLibSql.connect(database: database)
+
+      # Interrupt first connection
+      assert :ok = EctoLibSql.Native.interrupt(state1)
+
+      # Second connection should still work
+      {:ok, _query, result, _state2} =
+        EctoLibSql.handle_execute("SELECT 42", [], [], state2)
+
+      assert result.rows == [[42]]
+
+      # First connection should also still work
+      {:ok, _query, result, _state1} =
+        EctoLibSql.handle_execute("SELECT 1", [], [], state1)
+
+      assert result.rows == [[1]]
+
+      EctoLibSql.disconnect([], state1)
+      EctoLibSql.disconnect([], state2)
+    end
+
+    test "interrupt allows query execution afterwards", %{database: database} do
+      {:ok, state} = EctoLibSql.connect(database: database)
+
+      {:ok, _query, _result, state} =
+        EctoLibSql.handle_execute(
+          "CREATE TABLE interrupt_test (id INTEGER PRIMARY KEY, value TEXT)",
+          [],
+          [],
+          state
+        )
+
+      # Interrupt
+      assert :ok = EctoLibSql.Native.interrupt(state)
+
+      # Should still be able to execute queries
+      {:ok, _query, _result, state} =
+        EctoLibSql.handle_execute(
+          "INSERT INTO interrupt_test (value) VALUES (?)",
+          ["test"],
+          [],
+          state
+        )
+
+      # Verify insert worked
+      {:ok, _query, result, _state} =
+        EctoLibSql.handle_execute(
+          "SELECT value FROM interrupt_test WHERE id = ?",
+          [1],
+          [],
+          state
+        )
+
+      assert result.rows == [["test"]]
+
+      EctoLibSql.disconnect([], state)
+    end
+
+    test "interrupt during transaction behaviour", %{database: database} do
+      {:ok, state} = EctoLibSql.connect(database: database)
+
+      {:ok, _query, _result, state} =
+        EctoLibSql.handle_execute(
+          "CREATE TABLE interrupt_txn (id INTEGER PRIMARY KEY)",
+          [],
+          [],
+          state
+        )
+
+      # Begin transaction
+      {:ok, :begin, state} = EctoLibSql.handle_begin([], state)
+
+      # Interrupt during transaction
+      assert :ok = EctoLibSql.Native.interrupt(state)
+
+      # Transaction should still be able to roll back
+      {:ok, _result, _state} = EctoLibSql.handle_rollback([], state)
+
+      EctoLibSql.disconnect([], state)
+    end
+
+    test "interrupt state persists across operations", %{database: database} do
+      {:ok, state} = EctoLibSql.connect(database: database)
+
+      {:ok, _query, _result, state} =
+        EctoLibSql.handle_execute(
+          "CREATE TABLE interrupt_persist (id INTEGER PRIMARY KEY)",
+          [],
+          [],
+          state
+        )
+
+      # Interrupt, then do operations
+      assert :ok = EctoLibSql.Native.interrupt(state)
+
+      {:ok, _query, _result, state} =
+        EctoLibSql.handle_execute(
+          "INSERT INTO interrupt_persist VALUES (1)",
+          [],
+          [],
+          state
+        )
+
+      {:ok, _query, _result, state} =
+        EctoLibSql.handle_execute(
+          "INSERT INTO interrupt_persist VALUES (2)",
+          [],
+          [],
+          state
+        )
+
+      # Verify both inserts worked
+      {:ok, _query, result, _state} =
+        EctoLibSql.handle_execute(
+          "SELECT COUNT(*) FROM interrupt_persist",
+          [],
+          [],
+          state
+        )
+
+      assert result.rows == [[2]]
+
+      EctoLibSql.disconnect([], state)
+    end
   end
 
   # ============================================================================
diff --git a/test/replication_integration_test.exs b/test/replication_integration_test.exs
new file mode 100644
index 0000000..8b16dab
--- /dev/null
+++ b/test/replication_integration_test.exs
@@ -0,0 +1,492 @@
+defmodule EctoLibSql.ReplicationIntegrationTest do
+  @moduledoc """
+  Comprehensive integration tests for replication features.
+
+  Tests cover:
+  - Frame number tracking (get_frame_number_for_replica)
+  - Frame-specific synchronisation (sync_until_frame)
+  - Flush pending writes (flush_and_get_frame)
+  - Max write frame tracking (max_write_replication_index)
+
+  These tests require either:
+  1. A remote Turso database with TURSO_DB_URI and TURSO_AUTH_TOKEN env vars set
+  2. A local replica database configured with remote sync
+
+  Without Turso credentials, all tests in this module are skipped
+  automatically via the @moduletag below.
+  """
+  use ExUnit.Case
+
+  @turso_uri System.get_env("TURSO_DB_URI")
+  @turso_token System.get_env("TURSO_AUTH_TOKEN")
+
+  # Skip tests if Turso credentials aren't provided
+  @moduletag skip: is_nil(@turso_uri) || is_nil(@turso_token)
+
+  setup do
+    # For local testing, tests are skipped
+    # For Turso testing, create a database with replica mode
+    test_db = "z_ecto_libsql_test-replication_#{:erlang.unique_integer([:positive])}.db"
+
+    {:ok, state} =
+      if not (is_nil(@turso_uri) or is_nil(@turso_token)) do
+        # Connect with replica mode for replication features
+        EctoLibSql.connect(
+          database: test_db,
+          uri: @turso_uri,
+          auth_token: @turso_token,
+          sync: true
+        )
+      else
+        # Local-only fallback (tests will skip)
+        EctoLibSql.connect(database: test_db)
+      end
+
+    # Create a test table
+    {:ok, _query, _result, state} =
+      EctoLibSql.handle_execute(
+        "CREATE TABLE test_data (id INTEGER PRIMARY KEY AUTOINCREMENT, value TEXT)",
+        [],
+        [],
+        state
+      )
+
+    on_exit(fn ->
+      EctoLibSql.disconnect([], state)
+      File.rm(test_db)
+      File.rm(test_db <> "-shm")
+      File.rm(test_db <> "-wal")
+    end)
+
+    {:ok, state: state}
+  end
+
+  # ============================================================================
+  # Frame Number Tracking Tests
+  # ============================================================================
+
+  describe "get_frame_number_for_replica/1" do
+    test "returns current replication frame number", %{state: state} do
+      # Get initial frame number (should be 0 or positive for a fresh database)
+      {:ok, frame_no} = EctoLibSql.Native.get_frame_number_for_replica(state)
+
+      assert is_integer(frame_no)
+      assert frame_no >= 0
+    end
+
+    test "frame number does not decrease after write operations", %{state: state} do
+      # Get initial frame
+      {:ok, initial_frame} = EctoLibSql.Native.get_frame_number_for_replica(state)
+
+      # Insert a row
+      {:ok, _query, _result, state} =
+        EctoLibSql.handle_execute(
+          "INSERT INTO test_data (value) VALUES (?)",
+          ["test"],
+          [],
+          state
+        )
+
+      # Get frame after write (may or may not increase depending on sync settings)
+      {:ok, new_frame} = EctoLibSql.Native.get_frame_number_for_replica(state)
+
+      # Frame number should be >= initial frame
+      assert new_frame >= initial_frame
+    end
+
+    test "frame number is consistent across multiple calls", %{state: state} do
+      {:ok, frame1} = EctoLibSql.Native.get_frame_number_for_replica(state)
+      {:ok, frame2} = EctoLibSql.Native.get_frame_number_for_replica(state)
+      {:ok, frame3} = EctoLibSql.Native.get_frame_number_for_replica(state)
+
+      # Without writes, frames should be identical
+      assert frame1 == frame2
+      assert frame2 == frame3
+    end
+
+    test "handles state struct directly", %{state: state} do
+      # Both conn_id string and state struct should work
+      {:ok, _frame} = EctoLibSql.Native.get_frame_number_for_replica(state.conn_id)
+      {:ok, _frame} = EctoLibSql.Native.get_frame_number_for_replica(state)
+
+      # Both forms should succeed without error
+      :ok
+    end
+  end
+
+  # ============================================================================
+  # Frame-Specific Synchronisation Tests
+  # ============================================================================
+
+  describe "sync_until_frame/2" do
+    test "synchronises replica to specific frame", %{state: state} do
+      # Get current frame
+      {:ok, current_frame} = EctoLibSql.Native.get_frame_number_for_replica(state)
+
+      # Sync to current frame (should be a no-op but not error)
+      {:ok, _state} = EctoLibSql.Native.sync_until_frame(state, current_frame)
+
+      # Frame should still match
+      {:ok, new_frame} = EctoLibSql.Native.get_frame_number_for_replica(state)
+      assert new_frame >= current_frame
+    end
+
+    test "sync_until_frame with future frame", %{state: state} do
+      {:ok, current_frame} = EctoLibSql.Native.get_frame_number_for_replica(state)
+
+      # Request sync to a future frame (may not exist yet)
+      # Depending on the backend, this may succeed or fail cleanly
+      result = EctoLibSql.Native.sync_until_frame(state, current_frame + 1000)
+
+      # Either succeeds or returns a reasonable error
+      case result do
+        {:ok, _state} -> :ok
+        {:error, _reason} -> :ok
+      end
+    end
+
+    test "handles state struct directly", %{state: state} do
+      {:ok, current_frame} = EctoLibSql.Native.get_frame_number_for_replica(state)
+
+      # Both conn_id string and state struct should work
+      {:ok, _} = EctoLibSql.Native.sync_until_frame(state.conn_id, current_frame)
+      {:ok, _} = EctoLibSql.Native.sync_until_frame(state, current_frame)
+
+      :ok
+    end
+  end
+
+  # ============================================================================
+  # Flush and Frame Number Tests
+  # ============================================================================
+
+  describe "flush_and_get_frame/1" do
+    test "flushes pending writes and returns frame number", %{state: state} do
+      # Get initial frame
+      {:ok, initial_frame} = EctoLibSql.Native.get_frame_number_for_replica(state)
+
+      # Insert data
+      {:ok, _query, _result, state} =
+        EctoLibSql.handle_execute(
+          "INSERT INTO test_data (value) VALUES (?)",
+          ["flush_test"],
+          [],
+          state
+        )
+
+      # Flush and get frame
+      {:ok, flushed_frame} = EctoLibSql.Native.flush_and_get_frame(state)
+
+      # Frame should be valid
+      assert is_integer(flushed_frame)
+      assert flushed_frame >= initial_frame
+    end
+
+    test "multiple flushes work correctly", %{state: state} do
+      {:ok, frame1} = EctoLibSql.Native.flush_and_get_frame(state)
+
+      # Insert data
+      {:ok, _query, _result, state} =
+        EctoLibSql.handle_execute(
+          "INSERT INTO test_data (value) VALUES (?)",
+          ["value1"],
+          [],
+          state
+        )
+
+      {:ok, frame2} = EctoLibSql.Native.flush_and_get_frame(state)
+
+      # Insert more data
+      {:ok, _query, _result, state} =
+        EctoLibSql.handle_execute(
+          "INSERT INTO test_data (value) VALUES (?)",
+          ["value2"],
+          [],
+          state
+        )
+
+      {:ok, frame3} = EctoLibSql.Native.flush_and_get_frame(state)
+
+      # Frames should be non-decreasing
+      assert frame2 >= frame1
+      assert frame3 >= frame2
+    end
+
+    test "flush without writes still returns frame", %{state: state} do
+      # No writes, just flush
+      {:ok, frame} = EctoLibSql.Native.flush_and_get_frame(state)
+
+      assert is_integer(frame)
+      assert frame >= 0
+    end
+  end
+
+  # ============================================================================
+  # Max Write Replication Index Tests
+  # ============================================================================
+
+  describe "max_write_replication_index/1" do
+    test "returns highest replication frame from writes", %{state: state} do
+      # Get initial max write frame
+      {:ok, initial_max} = EctoLibSql.Native.max_write_replication_index(state)
+
+      assert is_integer(initial_max)
+      assert initial_max >= 0
+    end
+
+    test "max write frame does not decrease after writes", %{state: state} do
+      {:ok, initial_max} = EctoLibSql.Native.max_write_replication_index(state)
+
+      # Insert data
+      {:ok, _query, _result, state} =
+        EctoLibSql.handle_execute(
+          "INSERT INTO test_data (value) VALUES (?)",
+          ["test"],
+          [],
+          state
+        )
+
+      {:ok, new_max} = EctoLibSql.Native.max_write_replication_index(state)
+
+      # Max write frame should increase or stay the same
+      assert new_max >= initial_max
+    end
+
+    test "max write frame with multiple operations", %{state: state} do
+      {:ok, frame_before} = EctoLibSql.Native.max_write_replication_index(state)
+
+      # Multiple writes
+      final_state =
+        Enum.reduce(1..5, state, fn i, acc_state ->
+          {:ok, _query, _result, new_state} =
+            EctoLibSql.handle_execute(
+              "INSERT INTO test_data (value) VALUES (?)",
+              ["value#{i}"],
+              [],
+              acc_state
+            )
+
+          new_state
+        end)
+
+      {:ok, frame_after} = EctoLibSql.Native.max_write_replication_index(final_state)
+
+      assert frame_after >= frame_before
+    end
+
+    test "handles state struct directly", %{state: state} do
+      # Both conn_id string and state struct should work
+      {:ok, _frame} = EctoLibSql.Native.max_write_replication_index(state.conn_id)
+      {:ok, _frame} = EctoLibSql.Native.max_write_replication_index(state)
+
+      :ok
+    end
+
+    test "read-only operations don't affect max write frame", %{state: state} do
+      # Insert data first
+      {:ok, _query, _result, state} =
+        EctoLibSql.handle_execute(
+          "INSERT INTO test_data (value) VALUES (?)",
+          ["initial"],
+          [],
+          state
+        )
+
+      {:ok, max_frame_after_write} = EctoLibSql.Native.max_write_replication_index(state)
+
+      # Read data multiple times
+      state =
+        Enum.reduce(1..5, state, fn _, acc_state ->
+          {:ok, _query, _result, new_state} =
+            EctoLibSql.handle_execute(
+              "SELECT * FROM test_data",
+              [],
+              [],
+              acc_state
+            )
+
+          new_state
+        end)
+
+      {:ok, max_frame_after_reads} = EctoLibSql.Native.max_write_replication_index(state)
+
+      # Max write frame should be unchanged after reads
+      assert max_frame_after_reads == max_frame_after_write
+    end
+  end
+
+  # ============================================================================
+  # Integration Scenarios
+  # ============================================================================
+
+  describe "Replication scenarios" do
+    test "monitoring replication lag via frame numbers", %{state: state} do
+      # Simulate monitoring replication lag
+      {:ok, frame1} = EctoLibSql.Native.get_frame_number_for_replica(state)
+
+      # Insert some data
+      {:ok, _query, _result, state} =
+        EctoLibSql.handle_execute(
+          "INSERT INTO test_data (value) VALUES (?)",
+          ["lag_test"],
+          [],
+          state
+        )
+
+      {:ok, frame2} = EctoLibSql.Native.get_frame_number_for_replica(state)
+
+      # In a real scenario with a remote replica, frame2 might lag behind
+      # the primary. With a local database the two stay in sync.
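+      #
+      # Sketch (not part of the original assertions): with a second connection
+      # to the primary, the lag could be estimated from the frame delta, e.g.
+      #
+      #   {:ok, primary_frame} = EctoLibSql.Native.flush_and_get_frame(primary_state)
+      #   lag = primary_frame - frame2
+      #
+      # where `primary_state` is a hypothetical connection to the primary.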
+      assert frame2 >= frame1
+    end
+
+    test "tracking write operations across frame boundaries", %{state: state} do
+      # Track writes for read-your-writes consistency
+      {_final_state, max_frames} =
+        Enum.reduce(1..3, {state, []}, fn i, {acc_state, acc_frames} ->
+          {:ok, _query, _result, new_state} =
+            EctoLibSql.handle_execute(
+              "INSERT INTO test_data (value) VALUES (?)",
+              ["operation#{i}"],
+              [],
+              acc_state
+            )
+
+          {:ok, max_write_frame} = EctoLibSql.Native.max_write_replication_index(new_state)
+          {new_state, [max_write_frame | acc_frames]}
+        end)
+
+      # Max frames should be non-decreasing
+      max_frames = Enum.reverse(max_frames)
+
+      assert length(max_frames) == 3
+
+      [frame1, frame2, frame3] = max_frames
+      assert frame2 >= frame1
+      assert frame3 >= frame2
+    end
+
+    test "batch operations with frame tracking", %{state: state} do
+      {:ok, initial_frame} = EctoLibSql.Native.get_frame_number_for_replica(state)
+
+      # Prepare batch statements
+      statements = [
+        {"INSERT INTO test_data (value) VALUES (?)", ["batch1"]},
+        {"INSERT INTO test_data (value) VALUES (?)", ["batch2"]},
+        {"INSERT INTO test_data (value) VALUES (?)", ["batch3"]}
+      ]
+
+      {:ok, _results} = EctoLibSql.Native.batch_transactional(state, statements)
+
+      {:ok, final_frame} = EctoLibSql.Native.get_frame_number_for_replica(state)
+
+      # Frame should have advanced or stayed the same
+      assert final_frame >= initial_frame
+    end
+
+    test "transaction with frame number verification", %{state: state} do
+      {:ok, frame_before} = EctoLibSql.Native.get_frame_number_for_replica(state)
+
+      # Begin transaction
+      {:ok, :begin, state} = EctoLibSql.handle_begin([], state)
+
+      # Insert within transaction
+      {:ok, _query, _result, state} =
+        EctoLibSql.handle_execute(
+          "INSERT INTO test_data (value) VALUES (?)",
+          ["txn_data"],
+          [],
+          state
+        )
+
+      # Commit
+      {:ok, _result, state} = EctoLibSql.handle_commit([], state)
+
+      {:ok, frame_after} = EctoLibSql.Native.get_frame_number_for_replica(state)
+
+      # Frame should not go backwards
+      assert frame_after >= frame_before
+    end
+
+    test "flush before sync ensures data consistency", %{state: state} do
+      {:ok, initial_frame} = EctoLibSql.Native.get_frame_number_for_replica(state)
+
+      # Insert data
+      {:ok, _query, _result, state} =
+        EctoLibSql.handle_execute(
+          "INSERT INTO test_data (value) VALUES (?)",
+          ["pre_flush"],
+          [],
+          state
+        )
+
+      # Flush pending writes
+      {:ok, flushed_frame} = EctoLibSql.Native.flush_and_get_frame(state)
+
+      assert flushed_frame >= initial_frame
+
+      # Now sync to that frame
+      {:ok, _state} = EctoLibSql.Native.sync_until_frame(state, flushed_frame)
+
+      # Verify we can read the data
+      {:ok, _query, result, _state} =
+        EctoLibSql.handle_execute(
+          "SELECT * FROM test_data WHERE value = ?",
+          ["pre_flush"],
+          [],
+          state
+        )
+
+      assert result.num_rows >= 1
+    end
+  end
+
+  # ============================================================================
+  # Edge Cases and Error Handling
+  # ============================================================================
+
+  describe "Edge cases" do
+    test "get_frame_number_for_replica with large frame numbers", %{state: state} do
+      # Insert many rows to potentially get large frame numbers
+      final_state =
+        Enum.reduce(1..100, state, fn i, acc_state ->
+          {:ok, _query, _result, new_state} =
+            EctoLibSql.handle_execute(
+              "INSERT INTO test_data (value) VALUES (?)",
+              ["row#{i}"],
+              [],
+              acc_state
+            )
+
+          new_state
+        end)
+
+      {:ok, frame} = EctoLibSql.Native.get_frame_number_for_replica(final_state)
+
+      # Should handle large numbers without overflow
+      
assert is_integer(frame) + assert frame >= 0 + end + + test "sync_until_frame with frame 0", %{state: state} do + # Syncing to frame 0 should work + {:ok, _state} = EctoLibSql.Native.sync_until_frame(state, 0) + :ok + end + + test "flush_and_get_frame returns valid integer", %{state: state} do + {:ok, frame} = EctoLibSql.Native.flush_and_get_frame(state) + + assert is_integer(frame) + assert frame >= 0 + end + + test "replication functions work without remote connection", %{state: state} do + # All these should work even with local database + {:ok, _} = EctoLibSql.Native.get_frame_number_for_replica(state) + {:ok, _} = EctoLibSql.Native.flush_and_get_frame(state) + {:ok, _} = EctoLibSql.Native.max_write_replication_index(state) + + :ok + end + end +end diff --git a/test/statement_features_test.exs b/test/statement_features_test.exs index 048ccb7..36dbd4b 100644 --- a/test/statement_features_test.exs +++ b/test/statement_features_test.exs @@ -687,7 +687,137 @@ defmodule EctoLibSql.StatementFeaturesTest do EctoLibSql.Native.close_stmt(stmt_id) end - end + + test "column metadata for all data types (INTEGER, TEXT, BLOB, REAL)", %{state: state} do + # Create table with all major data types + {:ok, _query, _result, state} = + EctoLibSql.handle_execute( + """ + CREATE TABLE data_types ( + id INTEGER PRIMARY KEY, + text_col TEXT, + blob_col BLOB, + real_col REAL, + numeric_col NUMERIC + ) + """, + [], + [], + state + ) + + {:ok, stmt_id} = EctoLibSql.Native.prepare(state, "SELECT * FROM data_types") + + assert {:ok, 5} = EctoLibSql.Native.stmt_column_count(state, stmt_id) + + # Get full metadata including types + {:ok, columns} = EctoLibSql.Native.get_stmt_columns(state, stmt_id) + + assert length(columns) == 5 + + # Verify column types + [ + {id_name, _, id_type}, + {text_name, _, text_type}, + {blob_name, _, blob_type}, + {real_name, _, real_type}, + {numeric_name, _, numeric_type} + ] = columns + + assert id_name == "id" + assert id_type == "INTEGER" + + assert text_name == "text_col" + assert text_type == "TEXT" + + assert blob_name == "blob_col" + assert blob_type == "BLOB" + + assert real_name == "real_col" + assert real_type == "REAL" + + assert numeric_name == "numeric_col" + assert numeric_type == "NUMERIC" + + EctoLibSql.Native.close_stmt(stmt_id) + end + + test "column names for SELECT with implicit type conversion", %{state: state} do + # Test column introspection with type casting + {:ok, stmt_id} = + EctoLibSql.Native.prepare( + state, + """ + SELECT + CAST(id AS TEXT) as id_text, + CAST(name AS BLOB) as name_blob, + CAST(age AS REAL) as age_real + FROM users + """ + ) + + assert {:ok, 3} = EctoLibSql.Native.stmt_column_count(state, stmt_id) + + names = get_column_names(state, stmt_id, 3) + assert names == ["id_text", "name_blob", "age_real"] + + EctoLibSql.Native.close_stmt(stmt_id) + end + + test "column count for UNION queries", %{state: state} do + # Create another table for UNION test + {:ok, _query, _result, state} = + EctoLibSql.handle_execute( + """ + CREATE TABLE users_backup (id INTEGER PRIMARY KEY, name TEXT, age INTEGER) + """, + [], + [], + state + ) + + {:ok, stmt_id} = + EctoLibSql.Native.prepare( + state, + """ + SELECT id, name, age FROM users + UNION + SELECT id, name, age FROM users_backup + """ + ) + + assert {:ok, 3} = EctoLibSql.Native.stmt_column_count(state, stmt_id) + + names = get_column_names(state, stmt_id, 3) + assert names == ["id", "name", "age"] + + EctoLibSql.Native.close_stmt(stmt_id) + end + + test "column count for CASE expressions", %{state: 
state} do + {:ok, stmt_id} = + EctoLibSql.Native.prepare( + state, + """ + SELECT + id, + CASE + WHEN age < 18 THEN 'minor' + WHEN age >= 65 THEN 'senior' + ELSE 'adult' + END as age_group + FROM users + """ + ) + + assert {:ok, 2} = EctoLibSql.Native.stmt_column_count(state, stmt_id) + + names = get_column_names(state, stmt_id, 2) + assert names == ["id", "age_group"] + + EctoLibSql.Native.close_stmt(stmt_id) + end + end # ============================================================================ # Helper Functions From 5e940ffd30f3f2f3ff3eb260772912adfa8a0b7f Mon Sep 17 00:00:00 2001 From: Drew Robinson Date: Wed, 31 Dec 2025 10:37:00 +1100 Subject: [PATCH 2/4] Update beads task status: close 4 tasks (3 completed, 1 already implemented) --- .beads/issues.jsonl | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.beads/issues.jsonl b/.beads/issues.jsonl index 2168db1..24e2cab 100644 --- a/.beads/issues.jsonl +++ b/.beads/issues.jsonl @@ -11,7 +11,7 @@ {"id":"el-7t8","title":"Full-Text Search (FTS5) Schema Integration","description":"Partial - Extension loading works, but no schema helpers. libSQL 3.45.1 has comprehensive FTS5 extension with advanced features: phrase queries, term expansion, ranking, tokenisation, custom tokenisers.\n\nDesired API:\n create table(:posts, fts5: true) do\n add :title, :text, fts_weight: 10\n add :body, :text\n add :author, :string, fts_indexed: false\n end\n\n from p in Post, where: fragment(\"posts MATCH ?\", \"search terms\"), order_by: [desc: fragment(\"rank\")]\n\nPRIORITY: Recommended as #7 in implementation order - major feature.\n\nEffort: 5-7 days.","status":"open","priority":2,"issue_type":"feature","created_at":"2025-12-30T17:35:51.738732+11:00","created_by":"drew","updated_at":"2025-12-30T17:43:18.522669+11:00"} {"id":"el-a17","title":"JSONB Binary Format Support","description":"New in libSQL 3.45. Binary encoding of JSON for faster processing. 5-10% smaller than text JSON. Backwards compatible with text JSON - automatically converted between formats. All JSON functions work with both text and JSONB.\n\nCould provide performance benefits for JSON-heavy applications. May require new Ecto type or option.\n\nEffort: 2-3 days.","status":"open","priority":3,"issue_type":"feature","created_at":"2025-12-30T17:43:58.200973+11:00","created_by":"drew","updated_at":"2025-12-30T17:43:58.200973+11:00"} {"id":"el-aob","title":"Implement True Streaming Cursors","description":"Refactor cursor implementation to use true streaming instead of loading all rows into memory.\n\n**Problem**: Current cursor implementation loads ALL rows into memory upfront (lib.rs:1074-1100), then paginates through the buffer. This causes high memory usage for large datasets.\n\n**Current (Memory Issue)**:\n```rust\n// MEMORY ISSUE (lib.rs:1074-1100):\nlet rows = query_result.into_iter().collect::\u003cVec\u003c_\u003e\u003e(); // ← Loads everything!\n```\n\n**Impact**:\n- ✅ Works fine for small/medium datasets (\u003c 100K rows)\n- ⚠️ High memory usage for large datasets (\u003e 1M rows)\n- ❌ Cannot stream truly large datasets (\u003e 10M rows)\n\n**Example**:\n```elixir\n# Current: Loads 1 million rows into RAM\ncursor = Repo.stream(large_query)\nEnum.take(cursor, 100) # Only want 100, but loaded 1M!\n\n# Desired: True streaming, loads on-demand\ncursor = Repo.stream(large_query)\nEnum.take(cursor, 100) # Only loads 100 rows\n```\n\n**Fix Required**:\n1. Refactor to use libsql Rows async iterator\n2. Stream batches on-demand instead of loading all upfront\n3. 
Store iterator state in cursor registry\n4. Fetch next batch when cursor is fetched\n5. Update CursorData structure to support streaming\n\n**Files**:\n- native/ecto_libsql/src/cursor.rs (major refactor)\n- native/ecto_libsql/src/models.rs (update CursorData struct)\n- test/ecto_integration_test.exs (add streaming tests)\n- NEW: test/performance_test.exs (memory usage benchmarks)\n\n**Acceptance Criteria**:\n- [ ] Cursors stream batches on-demand\n- [ ] Memory usage stays constant regardless of result size\n- [ ] Can stream 10M+ rows without OOM\n- [ ] Performance: Streaming vs loading all benchmarked\n- [ ] All existing cursor tests pass\n- [ ] New tests verify streaming behaviour\n\n**Test Requirements**:\n```elixir\ntest \"cursor streams 1M rows without loading all into memory\" do\n # Insert 1M rows\n # Declare cursor\n # Verify memory usage \u003c 100MB while streaming\n # Verify all rows eventually fetched\nend\n```\n\n**References**:\n- LIBSQL_FEATURE_MATRIX_FINAL.md section 9\n- FEATURE_CHECKLIST.md Cursor Methods\n\n**Priority**: P1 - Critical for large dataset processing\n**Effort**: 4-5 days (major refactor)","status":"open","priority":1,"issue_type":"feature","created_at":"2025-12-30T17:43:30.692425+11:00","created_by":"drew","updated_at":"2025-12-30T17:43:30.692425+11:00"} -{"id":"el-djv","title":"Implement max_write_replication_index() NIF","description":"Add max_write_replication_index() NIF to track maximum write frame for replication monitoring.\n\n**Context**: The libsql API provides max_write_replication_index() for tracking the highest frame number that has been written. This is useful for monitoring replication lag and coordinating replica sync.\n\n**Current Status**: \n- ⚠️ LibSQL 0.9.29 provides the API\n- ⚠️ Not yet wrapped in ecto_libsql\n- Identified in LIBSQL_FEATURE_MATRIX_FINAL.md section 5\n\n**Use Case**:\n```elixir\n# Primary writes data\n{:ok, _} = Repo.query(\"INSERT INTO users (name) VALUES ('Alice')\")\n\n# Track max write frame on primary\n{:ok, max_write_frame} = EctoLibSql.Native.max_write_replication_index(primary_state)\n\n# Sync replica to that frame\n:ok = EctoLibSql.Native.sync_until(replica_state, max_write_frame)\n\n# Now replica is caught up to primary's writes\n```\n\n**Benefits**:\n- Monitor replication lag accurately\n- Coordinate multi-replica sync\n- Ensure read-after-write consistency\n- Track write progress for analytics\n\n**Implementation Required**:\n\n1. **Add NIF** (native/ecto_libsql/src/replication.rs):\n ```rust\n /// Get the maximum replication index that has been written.\n ///\n /// # Returns\n /// - {:ok, frame_number} - Success\n /// - {:error, reason} - Failure\n #[rustler::nif(schedule = \"DirtyIo\")]\n pub fn max_write_replication_index(conn_id: \u0026str) -\u003e NifResult\u003cu64\u003e {\n let conn_map = safe_lock(\u0026CONNECTION_REGISTRY, \"max_write_replication_index\")?;\n let conn_arc = conn_map\n .get(conn_id)\n .ok_or_else(|| rustler::Error::Term(Box::new(\"Connection not found\")))?\n .clone();\n drop(conn_map);\n\n let result = TOKIO_RUNTIME.block_on(async {\n let conn_guard = safe_lock_arc(\u0026conn_arc, \"max_write_replication_index conn\")\n .map_err(|e| format!(\"{:?}\", e))?;\n \n conn_guard\n .db\n .max_write_replication_index()\n .await\n .map_err(|e| format!(\"Failed to get max write replication index: {:?}\", e))\n })?;\n\n Ok(result)\n }\n ```\n\n2. 
**Add Elixir wrapper** (lib/ecto_libsql/native.ex):\n ```elixir\n @doc \"\"\"\n Get the maximum replication index that has been written.\n \n Returns the highest frame number that has been written to the database.\n Useful for tracking write progress and coordinating replica sync.\n \n ## Examples\n \n {:ok, max_frame} = EctoLibSql.Native.max_write_replication_index(state)\n :ok = EctoLibSql.Native.sync_until(replica_state, max_frame)\n \"\"\"\n def max_write_replication_index(_conn_id), do: :erlang.nif_error(:nif_not_loaded)\n \n def max_write_replication_index_safe(%EctoLibSql.State{conn_id: conn_id}) do\n case max_write_replication_index(conn_id) do\n {:ok, frame} -\u003e {:ok, frame}\n {:error, reason} -\u003e {:error, reason}\n end\n end\n ```\n\n3. **Add tests** (test/replication_integration_test.exs):\n ```elixir\n test \"max_write_replication_index tracks writes\" do\n {:ok, state} = connect()\n \n # Initial max write frame\n {:ok, initial_frame} = EctoLibSql.Native.max_write_replication_index(state)\n \n # Perform write\n {:ok, _, _, state} = EctoLibSql.handle_execute(\n \"INSERT INTO test (data) VALUES (?)\",\n [\"test\"], [], state\n )\n \n # Max write frame should increase\n {:ok, new_frame} = EctoLibSql.Native.max_write_replication_index(state)\n assert new_frame \u003e initial_frame\n end\n ```\n\n**Files**:\n- native/ecto_libsql/src/replication.rs (add NIF)\n- lib/ecto_libsql/native.ex (add wrapper)\n- test/replication_integration_test.exs (add tests)\n- AGENTS.md (update API docs)\n\n**Acceptance Criteria**:\n- [ ] max_write_replication_index() NIF implemented\n- [ ] Safe wrapper in Native module\n- [ ] Tests verify frame number increases on writes\n- [ ] Tests verify frame number coordination\n- [ ] Documentation updated\n- [ ] API added to AGENTS.md\n\n**Dependencies**:\n- Related to el-g5l (Replication Integration Tests)\n- Should be tested together\n\n**References**:\n- LIBSQL_FEATURE_MATRIX_FINAL.md section 5 (line 167)\n- libsql API: db.max_write_replication_index()\n\n**Priority**: P1 - Important for replication monitoring\n**Effort**: 0.5-1 day (straightforward NIF addition)","status":"open","priority":1,"issue_type":"task","created_at":"2025-12-30T17:45:41.941413+11:00","created_by":"drew","updated_at":"2025-12-30T17:45:41.941413+11:00"} +{"id":"el-djv","title":"Implement max_write_replication_index() NIF","description":"Add max_write_replication_index() NIF to track maximum write frame for replication monitoring.\n\n**Context**: The libsql API provides max_write_replication_index() for tracking the highest frame number that has been written. This is useful for monitoring replication lag and coordinating replica sync.\n\n**Current Status**: \n- ⚠️ LibSQL 0.9.29 provides the API\n- ⚠️ Not yet wrapped in ecto_libsql\n- Identified in LIBSQL_FEATURE_MATRIX_FINAL.md section 5\n\n**Use Case**:\n```elixir\n# Primary writes data\n{:ok, _} = Repo.query(\"INSERT INTO users (name) VALUES ('Alice')\")\n\n# Track max write frame on primary\n{:ok, max_write_frame} = EctoLibSql.Native.max_write_replication_index(primary_state)\n\n# Sync replica to that frame\n:ok = EctoLibSql.Native.sync_until(replica_state, max_write_frame)\n\n# Now replica is caught up to primary's writes\n```\n\n**Benefits**:\n- Monitor replication lag accurately\n- Coordinate multi-replica sync\n- Ensure read-after-write consistency\n- Track write progress for analytics\n\n**Implementation Required**:\n\n1. 
**Add NIF** (native/ecto_libsql/src/replication.rs):\n ```rust\n /// Get the maximum replication index that has been written.\n ///\n /// # Returns\n /// - {:ok, frame_number} - Success\n /// - {:error, reason} - Failure\n #[rustler::nif(schedule = \"DirtyIo\")]\n pub fn max_write_replication_index(conn_id: \u0026str) -\u003e NifResult\u003cu64\u003e {\n let conn_map = safe_lock(\u0026CONNECTION_REGISTRY, \"max_write_replication_index\")?;\n let conn_arc = conn_map\n .get(conn_id)\n .ok_or_else(|| rustler::Error::Term(Box::new(\"Connection not found\")))?\n .clone();\n drop(conn_map);\n\n let result = TOKIO_RUNTIME.block_on(async {\n let conn_guard = safe_lock_arc(\u0026conn_arc, \"max_write_replication_index conn\")\n .map_err(|e| format!(\"{:?}\", e))?;\n \n conn_guard\n .db\n .max_write_replication_index()\n .await\n .map_err(|e| format!(\"Failed to get max write replication index: {:?}\", e))\n })?;\n\n Ok(result)\n }\n ```\n\n2. **Add Elixir wrapper** (lib/ecto_libsql/native.ex):\n ```elixir\n @doc \"\"\"\n Get the maximum replication index that has been written.\n \n Returns the highest frame number that has been written to the database.\n Useful for tracking write progress and coordinating replica sync.\n \n ## Examples\n \n {:ok, max_frame} = EctoLibSql.Native.max_write_replication_index(state)\n :ok = EctoLibSql.Native.sync_until(replica_state, max_frame)\n \"\"\"\n def max_write_replication_index(_conn_id), do: :erlang.nif_error(:nif_not_loaded)\n \n def max_write_replication_index_safe(%EctoLibSql.State{conn_id: conn_id}) do\n case max_write_replication_index(conn_id) do\n {:ok, frame} -\u003e {:ok, frame}\n {:error, reason} -\u003e {:error, reason}\n end\n end\n ```\n\n3. **Add tests** (test/replication_integration_test.exs):\n ```elixir\n test \"max_write_replication_index tracks writes\" do\n {:ok, state} = connect()\n \n # Initial max write frame\n {:ok, initial_frame} = EctoLibSql.Native.max_write_replication_index(state)\n \n # Perform write\n {:ok, _, _, state} = EctoLibSql.handle_execute(\n \"INSERT INTO test (data) VALUES (?)\",\n [\"test\"], [], state\n )\n \n # Max write frame should increase\n {:ok, new_frame} = EctoLibSql.Native.max_write_replication_index(state)\n assert new_frame \u003e initial_frame\n end\n ```\n\n**Files**:\n- native/ecto_libsql/src/replication.rs (add NIF)\n- lib/ecto_libsql/native.ex (add wrapper)\n- test/replication_integration_test.exs (add tests)\n- AGENTS.md (update API docs)\n\n**Acceptance Criteria**:\n- [ ] max_write_replication_index() NIF implemented\n- [ ] Safe wrapper in Native module\n- [ ] Tests verify frame number increases on writes\n- [ ] Tests verify frame number coordination\n- [ ] Documentation updated\n- [ ] API added to AGENTS.md\n\n**Dependencies**:\n- Related to el-g5l (Replication Integration Tests)\n- Should be tested together\n\n**References**:\n- LIBSQL_FEATURE_MATRIX_FINAL.md section 5 (line 167)\n- libsql API: db.max_write_replication_index()\n\n**Priority**: P1 - Important for replication monitoring\n**Effort**: 0.5-1 day (straightforward NIF addition)","status":"closed","priority":1,"issue_type":"task","created_at":"2025-12-30T17:45:41.941413+11:00","created_by":"drew","updated_at":"2025-12-31T10:36:43.881304+11:00","closed_at":"2025-12-31T10:36:43.881304+11:00","close_reason":"max_write_replication_index NIF already implemented in native/ecto_libsql/src/replication.rs and wrapped in lib/ecto_libsql/native.ex"} {"id":"el-e42","title":"Add Performance Benchmark Tests","description":"Create comprehensive 
performance benchmarks to track ecto_libsql performance and identify bottlenecks.\n\n**Context**: No performance benchmarks exist. Need to establish baselines and track performance across versions. Critical for validating performance improvements (like statement reset fix).\n\n**Benchmark Categories**:\n\n**1. Prepared Statement Performance**:\n```elixir\n# Measure impact of statement re-preparation bug\nbenchmark \"prepared statement execution\" do\n stmt = prepare(\"INSERT INTO bench VALUES (?, ?)\")\n \n # Before fix: ~30-50% slower\n # After fix: baseline\n Benchee.run(%{\n \"100 executions\" =\u003e fn -\u003e \n for i \u003c- 1..100, do: execute(stmt, [i, \"data\"])\n end\n })\nend\n```\n\n**2. Cursor Streaming Memory**:\n```elixir\nbenchmark \"cursor memory usage\" do\n # Current: Loads all into memory\n # After streaming fix: Constant memory\n \n cursor = declare_cursor(\"SELECT * FROM large_table\")\n \n :erlang.garbage_collect()\n {memory_before, _} = :erlang.process_info(self(), :memory)\n \n Enum.take(cursor, 100)\n \n {memory_after, _} = :erlang.process_info(self(), :memory)\n memory_used = memory_after - memory_before\n \n # Assert memory \u003c 10MB for 1M row table\n assert memory_used \u003c 10_000_000\nend\n```\n\n**3. Concurrent Connections**:\n```elixir\nbenchmark \"concurrent connections\" do\n Benchee.run(%{\n \"10 connections\" =\u003e fn -\u003e parallel_queries(10) end,\n \"50 connections\" =\u003e fn -\u003e parallel_queries(50) end,\n \"100 connections\" =\u003e fn -\u003e parallel_queries(100) end,\n })\nend\n```\n\n**4. Transaction Throughput**:\n```elixir\nbenchmark \"transaction throughput\" do\n Benchee.run(%{\n \"1000 transactions/sec\" =\u003e fn -\u003e\n for i \u003c- 1..1000 do\n Repo.transaction(fn -\u003e\n Repo.query(\"INSERT INTO bench VALUES (?)\", [i])\n end)\n end\n end\n })\nend\n```\n\n**5. Batch Operations**:\n```elixir\nbenchmark \"batch operations\" do\n queries = for i \u003c- 1..1000, do: \"INSERT INTO bench VALUES (\\#{i})\"\n \n Benchee.run(%{\n \"manual batch\" =\u003e fn -\u003e execute_batch(queries) end,\n \"native batch\" =\u003e fn -\u003e execute_batch_native(queries) end,\n \"transactional batch\" =\u003e fn -\u003e execute_transactional_batch(queries) end,\n })\nend\n```\n\n**6. Statement Cache Performance**:\n```elixir\nbenchmark \"statement cache\" do\n Benchee.run(%{\n \"1000 unique statements\" =\u003e fn -\u003e\n for i \u003c- 1..1000 do\n prepare(\"SELECT * FROM bench WHERE id = \\#{i}\")\n end\n end\n })\nend\n```\n\n**7. Replication Sync Performance**:\n```elixir\nbenchmark \"replica sync\" do\n # Write to primary\n for i \u003c- 1..10000, do: insert_on_primary(i)\n \n # Measure sync time\n Benchee.run(%{\n \"sync 10K changes\" =\u003e fn -\u003e \n sync(replica)\n end\n })\nend\n```\n\n**Implementation**:\n\n1. **Add benchee dependency** (mix.exs):\n ```elixir\n {:benchee, \"~\u003e 1.3\", only: :dev}\n {:benchee_html, \"~\u003e 1.0\", only: :dev}\n ```\n\n2. **Create benchmark files**:\n - benchmarks/prepared_statements_bench.exs\n - benchmarks/cursor_streaming_bench.exs\n - benchmarks/concurrent_connections_bench.exs\n - benchmarks/transactions_bench.exs\n - benchmarks/batch_operations_bench.exs\n - benchmarks/statement_cache_bench.exs\n - benchmarks/replication_bench.exs\n\n3. **Add benchmark runner** (mix.exs):\n ```elixir\n def cli do\n [\n aliases: [\n bench: \"run benchmarks/**/*_bench.exs\"\n ]\n ]\n end\n ```\n\n4. 
**CI Integration**:\n - Run benchmarks on PRs\n - Track performance over time\n - Alert on regression \u003e 20%\n\n**Baseline Targets** (to establish):\n- Prepared statement execution: X ops/sec\n- Cursor streaming: Y MB memory for Z rows\n- Transaction throughput: 1000+ txn/sec\n- Concurrent connections: 100 connections\n- Batch operations: Native 20-30% faster than manual\n\n**Files**:\n- mix.exs (add benchee dependency)\n- benchmarks/*.exs (benchmark files)\n- .github/workflows/benchmarks.yml (CI integration)\n- PERFORMANCE.md (document baselines and results)\n\n**Acceptance Criteria**:\n- [ ] Benchee dependency added\n- [ ] 7 benchmark categories implemented\n- [ ] Benchmarks run via mix bench\n- [ ] HTML reports generated\n- [ ] Baselines documented in PERFORMANCE.md\n- [ ] CI runs benchmarks on PRs\n- [ ] Regression alerts configured\n\n**Test Requirements**:\n```bash\n# Run all benchmarks\nmix bench\n\n# Run specific benchmark\nmix run benchmarks/prepared_statements_bench.exs\n\n# Generate HTML report\nmix run benchmarks/prepared_statements_bench.exs --format html\n```\n\n**Benefits**:\n- Track performance across versions\n- Validate performance improvements\n- Identify bottlenecks\n- Catch regressions early\n- Document performance characteristics\n\n**References**:\n- FEATURE_CHECKLIST.md section \"Test Coverage Priorities\" item 6\n- LIBSQL_FEATURE_COMPARISON.md section \"Performance and Stress Tests\"\n\n**Dependencies**:\n- Validates fixes for el-2ry (statement performance bug)\n- Validates fixes for el-aob (streaming cursors)\n\n**Priority**: P3 - Nice to have, tracks quality over time\n**Effort**: 2-3 days","status":"open","priority":3,"issue_type":"task","created_at":"2025-12-30T17:46:14.715332+11:00","created_by":"drew","updated_at":"2025-12-30T17:46:14.715332+11:00"} {"id":"el-ffc","title":"EXPLAIN Query Support","description":"Not implemented in ecto_libsql. libSQL 3.45.1 fully supports EXPLAIN and EXPLAIN QUERY PLAN for query optimiser insight.\n\nDesired API:\n query = from u in User, where: u.age \u003e 18\n {:ok, plan} = Repo.explain(query)\n # Or: Ecto.Adapters.SQL.explain(Repo, :all, query)\n\nPRIORITY: Recommended as #3 in implementation order - quick win for debugging.\n\nEffort: 2-3 days.","status":"open","priority":2,"issue_type":"feature","created_at":"2025-12-30T17:35:52.299542+11:00","created_by":"drew","updated_at":"2025-12-30T17:43:32.763016+11:00"} {"id":"el-fpi","title":"Fix binary data round-trip property test failure for single null byte","description":"## Problem\n\nThe property test for binary data handling is failing when the generated binary is a single null byte ().\n\n## Failure Details\n\n\n\n**File**: test/fuzz_test.exs:736\n**Test**: property binary data handling round-trips binary data correctly\n\n## Root Cause\n\nWhen a single null byte () is stored in the database as a BLOB and retrieved, it's being returned as an empty string () instead of the original binary.\n\nThis suggests a potential issue with:\n1. Binary encoding/decoding in the Rust NIF layer (decode.rs)\n2. Type conversion in the Elixir loaders/dumpers\n3. Handling of edge case binaries (single null byte, empty blobs)\n\n## Impact\n\n- Property-based test failures indicate the binary data handling isn't robust for all valid binary inputs\n- Applications storing binary data with null bytes may experience data corruption\n- Affects blob storage reliability\n\n## Reproduction\n\n\n\n## Investigation Areas\n\n1. **native/ecto_libsql/src/decode.rs** - Check Value::Blob conversion\n2. 
**lib/ecto/adapters/libsql.ex** - Check binary loaders/dumpers\n3. **native/ecto_libsql/src/query.rs** - Verify blob retrieval logic\n4. **Test edge cases**: , , , \n\n## Expected Behavior\n\nAll binaries (including single null byte) should round-trip correctly:\n- Store → Retrieve \n- Store → Retrieve \n- Store → Retrieve \n\n## Related Code\n\n- test/fuzz_test.exs:736-753\n- native/ecto_libsql/src/decode.rs (blob handling)\n- lib/ecto/adapters/libsql.ex (type loaders/dumpers)","status":"in_progress","priority":1,"issue_type":"bug","created_at":"2025-12-30T18:05:52.838065+11:00","created_by":"drew","updated_at":"2025-12-30T21:59:49.842445+11:00"}

From af2ce1a88bdd7602613b104ec712c9c657aee826 Mon Sep 17 00:00:00 2001
From: Drew Robinson
Date: Wed, 31 Dec 2025 14:58:20 +1100
Subject: [PATCH 3/4] chore: Enable beads-sync branch in config

---
 .beads/config.yaml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/.beads/config.yaml b/.beads/config.yaml
index f242785..1de3590 100644
--- a/.beads/config.yaml
+++ b/.beads/config.yaml
@@ -42,7 +42,7 @@
 # This setting persists across clones (unlike database config which is gitignored).
 # Can also use BEADS_SYNC_BRANCH env var for local override.
 # If not set, bd sync will require you to run 'bd config set sync.branch <branch>'.
-# sync-branch: "beads-sync"
+sync-branch: "beads-sync"
 
 # Multi-repo configuration (experimental - bd-307)
 # Allows hydrating from multiple repositories and routing writes to the correct JSONL
@@ -59,4 +59,4 @@
 # - linear.url
 # - linear.api-key
 # - github.org
-# - github.repo
+# - github.repo
\ No newline at end of file

From 86140169c49ac106dd153111af567dc22d2c9954 Mon Sep 17 00:00:00 2001
From: Drew Robinson
Date: Wed, 31 Dec 2025 15:24:50 +1100
Subject: [PATCH 4/4] tests: Correct formatting, update Claude with new instructions for branching

---
 CLAUDE.md                         | 51 ++++++++++++++++++++++++++++---
 test/connection_features_test.exs |  2 +-
 test/statement_features_test.exs  |  8 ++---
 3 files changed, 51 insertions(+), 10 deletions(-)

diff --git a/CLAUDE.md b/CLAUDE.md
index 41c62ff..f1b886c 100644
--- a/CLAUDE.md
+++ b/CLAUDE.md
@@ -203,11 +203,52 @@ cd native/ecto_libsql && cargo test
 
 ### Development Cycle
 
-1. Make changes to Elixir or Rust code
-2. Format: `mix format && cargo fmt`
-3. Run tests: `mix test && cargo test`
-4. Check formatting: `mix format --check-formatted`
-5. Commit with descriptive message
+#### Branch Strategy
+
+**ALWAYS branch from `main`** for new work:
+
+```bash
+git checkout main
+git pull origin main
+git checkout -b feature-descriptive-name  # or bugfix-descriptive-name
+```
+
+**Branch naming**:
+- Features: `feature-<description>`
+- Bug fixes: `bugfix-<description>`
+
+**⚠️ CRITICAL: Preserving Untracked Files**
+
+The repository often has untracked/uncommitted files (docs, notes, etc.) that must NOT be lost when switching branches. Git preserves untracked files across branch switches automatically, but:
+- **NEVER run `git clean`** without explicit user approval
+- **NEVER run `git checkout .`** or `git restore .` on the whole repo
+- **NEVER run `git reset --hard`** without explicit user approval
+- When switching branches, untracked files stay in place - this is expected
+
+#### Development Steps
+
+1. Create feature/bugfix branch from `main` (see above)
+2. Make changes to Elixir or Rust code
+3. ALWAYS format: `mix format && cargo fmt`
+4. Run tests: `mix test && cargo test`
+5. Fix any issues from formatting or tests
+6. **Commit ONLY files you touched** - other untracked files stay as-is
+
+#### PR Workflow
+
+All changes go through PRs to `main` (for review bot checks):
+
+```bash
+git push -u origin feature-descriptive-name
+gh pr create --base main --title "feat: description" --body "..."
+```
+
+After PR is merged, clean up:
+```bash
+git checkout main
+git pull origin main
+git branch -d feature-descriptive-name  # Delete local branch
+```
 
 ### Adding a New NIF Function
 
diff --git a/test/connection_features_test.exs b/test/connection_features_test.exs
index 0b6295b..b5c25c9 100644
--- a/test/connection_features_test.exs
+++ b/test/connection_features_test.exs
@@ -219,7 +219,7 @@ defmodule EctoLibSql.ConnectionFeaturesTest do
   test "reset allows connection reuse in pooled scenario", %{database: database} do
     # Simulate connection pool behaviour
-    connections = 
+    connections =
       Enum.map(1..3, fn _ ->
         {:ok, state} = EctoLibSql.connect(database: database)
         state
diff --git a/test/statement_features_test.exs b/test/statement_features_test.exs
index 36dbd4b..95a724d 100644
--- a/test/statement_features_test.exs
+++ b/test/statement_features_test.exs
@@ -813,11 +813,11 @@ defmodule EctoLibSql.StatementFeaturesTest do
       assert {:ok, 2} = EctoLibSql.Native.stmt_column_count(state, stmt_id)
 
       names = get_column_names(state, stmt_id, 2)
-    assert names == ["id", "age_group"]
+      assert names == ["id", "age_group"]
 
-    EctoLibSql.Native.close_stmt(stmt_id)
-  end
-  end
+      EctoLibSql.Native.close_stmt(stmt_id)
+    end
+  end
 
   # ============================================================================
   # Helper Functions
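
Both test hunks above lean on a `get_column_names/3` helper defined in the `# Helper Functions` section, which the diff truncates before reaching. For readers following along, here is a minimal sketch of the shape such a helper presumably takes; the `stmt_column_name/3` NIF used below is an assumption mirroring the `stmt_column_count/2` call shown in the hunks, not an API confirmed by this patch series:

```elixir
# Hypothetical reconstruction of the truncated test helper; the real
# definition lives in statement_features_test.exs under "Helper Functions".
defp get_column_names(state, stmt_id, count) do
  # Collect each column name by zero-based index, in declaration order.
  for index <- 0..(count - 1) do
    # Assumed NIF: stmt_column_name(state, stmt_id, index) -> {:ok, name}
    {:ok, name} = EctoLibSql.Native.stmt_column_name(state, stmt_id, index)
    name
  end
end
```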