
Conversation

@EmilyMatt

Which issue does this PR close?

Rationale for this change

Allows for proper file splitting within an asynchronous context.

What changes are included in this PR?

The raw implementation, allowing for file splitting, starting mid-block (reading until a sync marker is found), and further reading until the end of a block is found.
This reader currently requires that a reader_schema be provided if type promotion, schema evolution, or projection are desired.
This is because #8928 currently blocks proper parsing from an ArrowSchema.
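
As a rough illustration of the mid-block start (a minimal sketch, not this PR's implementation; the function name and signature are invented for the example), a split reader scans forward from `range.start` for the file's 16-byte sync marker and begins decoding at the block that follows:

    // Scan forward for the file's 16-byte sync marker and return the offset
    // just past it, i.e. where the next block begins. A real reader must also
    // handle a marker that straddles two fetched chunks.
    fn find_block_start(buf: &[u8], sync_marker: &[u8; 16]) -> Option<usize> {
        buf.windows(16)
            .position(|window| window == sync_marker.as_slice())
            .map(|pos| pos + 16)
    }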

Are these changes tested?

Yes

Are there any user-facing changes?

Only the addition.
Other changes are internal to the crate (namely the way Decoder is created from parts)

Contributor

@jecsand838 jecsand838 left a comment


Flushing a partial review with some high level thoughts.

I'll wait for you to finish before resuming.

@EmilyMatt
Author

> Flushing a partial review with some high level thoughts.
>
> I'll wait for you to finish before resuming.

Honestly, I think my main blocker is the schema thing here. I don't want to commit to the constructor before it is resolved, as it's a public API and I don't want it to be volatile

@jecsand838
Contributor

jecsand838 commented Nov 26, 2025

> > Flushing a partial review with some high level thoughts.
> > I'll wait for you to finish before resuming.
>
> Honestly, I think my main blocker is the schema thing here. I don't want to commit to the constructor before it is resolved, as it's a public API and I don't want it to be volatile

100%. I'm working on that right now and won't stop until I have a PR. That was a solid catch.

The schema logic is an area of the code I've been meaning to fully refactor (or would welcome a full refactor of). I knew it would eventually come back.

@EmilyMatt
Author

Sorry, I haven't dropped it, I just found myself in a really busy week! The generic reader support does not seem too hard to implement from the dabbling I've done, and I still need to get to the builder pattern change.

…, separate object store file reader into a feature-gated struct and use a generic async file reader trait
@EmilyMatt
Author

@jecsand838 I believe this is now ready for a proper review^

Contributor

@jecsand838 jecsand838 left a comment


@EmilyMatt Thank you so much for getting these changes up!

I left a few comments. Let me know what you think.

EDIT: Should have mentioned that this is looking really good overall and I'm very excited for the AsyncReader!

@alamb
Contributor

alamb commented Jan 10, 2026

@jecsand838 and @EmilyMatt -- how is this PR looking?

@EmilyMatt
Author

> @jecsand838 and @EmilyMatt -- how is this PR looking?

I had actually just returned to work on it 2 days ago. I'm still having some issues with the schema now being provided, due to the problems I've described; @jecsand838 suggested removing the Arrow schema, and I'm starting to think that is the only viable way for now.
Making the fetch API a bit closer to the one Parquet uses is the smaller issue. I do wish to keep the separate semantics for the original fetch and the extra fetch (for Parquet, for example, those would be the row group ranges and the footer range); I will try a couple of ways to do this.
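
To illustrate the distinction (a hypothetical sketch only; the trait and method names are invented here, this is not the PR's API, and a real version would be async and fallible):

    use std::ops::Range;

    // Hypothetical sketch of keeping the two fetch kinds separate.
    trait SplitFetch {
        /// Fetch the byte range this reader was assigned (the original fetch).
        fn fetch_assigned(&mut self, range: Range<u64>) -> Vec<u8>;
        /// Fetch bytes outside the assigned range (the extra fetch), e.g. the
        /// file header or the tail of a block the range cuts in half; for
        /// Parquet the analogues would be the row group ranges and the footer.
        fn fetch_extra(&mut self, range: Range<u64>) -> Vec<u8>;
    }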

@EmilyMatt
Author

Hope to push another version today and address some of the things above

@mzabaluev
Contributor

mzabaluev commented Jan 19, 2026

I get a ParseError("bad varint") on this test:

    fn get_int_array_schema() -> SchemaRef {
        let schema = Schema::new(vec![Field::new(
            "int_array",
            DataType::List(Arc::new(Field::new("element", DataType::Int32, true))),
            true,
        )])
        .with_metadata(HashMap::from([("avro.name".into(), "table".into())]));
        Arc::new(schema)
    }

    #[tokio::test]
    async fn test_bad_varint_bug() {
        let file = arrow_test_data("avro/bad-varint-bug.avro");

        let schema = get_int_array_schema();
        let batches = read_async_file(&file, 1024, None, schema).await.unwrap();
        let _batch = &batches[0];
    }

The Avro file, readable by Spark: bad-varint-bug.avro.gz

@mzabaluev-flarion
Contributor

> The Avro file, readable by Spark: bad-varint-bug.avro.gz

I have checked that the Avro file is readable with Python avro 1.12.1:

>>> from avro.datafile import DataFileReader
>>> from avro.io import DatumReader
>>> reader = DataFileReader(open("testing/data/avro/bad-varint-bug.avro", "rb"), DatumReader())
>>> for rec in reader:
...     print(rec)
... 
{'int_array': [1, 2]}

@EmilyMatt
Author

EmilyMatt commented Jan 20, 2026

> > The Avro file, readable by Spark: bad-varint-bug.avro.gz
>
> I have checked that the Avro file is readable with Python avro 1.12.1:
>
> >>> from avro.datafile import DataFileReader
> >>> from avro.io import DatumReader
> >>> reader = DataFileReader(open("testing/data/avro/bad-varint-bug.avro", "rb"), DatumReader())
> >>> for rec in reader:
> ...     print(rec)
> ...
> {'int_array': [1, 2]}

I don't think this is a bug in the async reader.
You are using a testing infrastructure built around Arrow schemas which have the reader schema in the metadata, but you did not provide the schema in yours.

I can confirm the following test passes:

#[tokio::test]
async fn test_bad_varint_bug() {
    let store: Arc<dyn ObjectStore> = Arc::new(LocalFileSystem::new());
    let location =
        Path::from_filesystem_path("/home/emily/Downloads/bad-varint-bug.avro").unwrap();

    let file_size = store.head(&location).await.unwrap().size;

    let file_reader = AvroObjectReader::new(store, location);
    let reader = AsyncAvroFileReader::builder(file_reader, file_size, 1024)
        .try_build()
        .await
        .unwrap();

    let batches: Vec<RecordBatch> = reader.try_collect().await.unwrap();
    let batch = &batches[0];
    let int_list_col = batch.column(0).as_list::<i32>();

    let first_list = int_list_col.value(0);
    let expected_result = Arc::new(Int32Array::from_iter_values(vec![1i32, 2])) as _;
    assert_eq!(first_list, expected_result)
}

The issue is probably in AvroSchema::from; it has various bugs I've also encountered.

@mzabaluev
Contributor

> I don't think this is a bug in the async reader. You are using a testing infrastructure built around Arrow schemas which have the reader schema in the metadata, but you did not provide the schema in yours.

My test provides the Arrow reader schema and the top-level Avro record name in the metadata, which should be sufficient.
The problem was a schema mismatch: in the file, the array elements are not nullable.

@EmilyMatt
Author

> > I don't think this is a bug in the async reader. You are using a testing infrastructure built around Arrow schemas which have the reader schema in the metadata, but you did not provide the schema in yours.
>
> My test provides the Arrow reader schema and the top-level Avro record name in the metadata, which should be sufficient. The problem was a schema mismatch: in the file, the array elements are not nullable.

It is not necessarily sufficient. See #8928.

But you should open a bug for this, since a reader schema with nullables and a writer schema with non-nullables should be compatible.
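
For reference, the Avro specification's schema resolution rules do allow this direction: when the reader's type is a union and the writer's is not, the writer's schema is matched against the first compatible branch of the reader's union. An illustrative pair (not taken from the PR):

    // Illustrative Avro schema fragments only. Per the Avro spec's resolution
    // rules, the nullable reader items can read the non-nullable writer items,
    // because "int" matches a branch of the ["null", "int"] union.
    let writer_items = r#"{"type": "array", "items": "int"}"#; // non-nullable elements
    let reader_items = r#"{"type": "array", "items": ["null", "int"]}"#; // nullable elements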

@jecsand838
Contributor

jecsand838 commented Jan 21, 2026

@mzabaluev

> > I don't think this is a bug in the async reader. You are using a testing infrastructure built around Arrow schemas which have the reader schema in the metadata, but you did not provide the schema in yours.
>
> My test provides the Arrow reader schema and the top-level Avro record name in the metadata, which should be sufficient. The problem was a schema mismatch: in the file, the array elements are not nullable.

I think this is a schema resolution bug, based on a quick glance over the details you provided.

That being said, there are limitations with using AvroSchema::try_from to create a reader schema. For now, my recommendation for creating a reader schema (especially a more complicated one) is to either:

  1. Modify the writer schema's JSON.
  2. Manually craft the JSON for an AvroSchema.
  3. Use AvroSchema::try_from, but sanitize the output and embed it into a pre-defined JSON wrapper if needed.

Originally, the AvroSchema::try_from method was built for the Writer, so that a correct AvroSchema could be inferred from an Arrow Schema in the absence of a provided one.

The biggest challenge to overcome relates to the lossy behavior inherent to Arrow -> Avro schema conversion, i.e. Arrow not having the concepts of named types, etc.
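
As an illustration of option 2 above, a hand-crafted reader schema for the file discussed in this thread might look like the following. This is a sketch only: the record and field names are taken from the test posted earlier, and the AvroSchema::new constructor taking a JSON string follows the builder code quoted later in this review.

    // Sketch of hand-crafting the reader schema JSON (option 2 above). The
    // record/field names follow the test earlier in this thread; keeping the
    // array items non-nullable matches the writer and avoids the mismatch.
    let reader_schema = AvroSchema::new(
        r#"{
            "type": "record",
            "name": "table",
            "fields": [
                {"name": "int_array", "type": {"type": "array", "items": "int"}}
            ]
        }"#
        .to_string(),
    );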

@EmilyMatt

> The issue is probably in AvroSchema::from; it has various bugs I've also encountered.

100%. It's absolutely not related to this PR. Sorry about not jumping in sooner to call that out.


As an aside, I just created #9233 which proposes an approach for modularizing schema.rs, adding an ArrowToAvroSchemaBuilder, and enhancing the overall AvroSchema conversion functionality. I'd love to get some feedback if either of you get an opportunity!

Contributor

@jecsand838 jecsand838 left a comment


@EmilyMatt This looks good! I left some final feedback and recommendations, but I think it's at a place to re-run the CI/CD jobs if you want to follow up on these. Once the pipelines pass, I'll approve.

CC: @alamb

Comment on lines 141 to 165
// If a projection exists, project the reader schema;
// if no reader schema is provided, parse it from the header (the raw writer schema)
// and project that. The projected schema will be the schema used for reading.
let projected_reader_schema = self
    .projection
    .as_deref()
    .map(|projection| {
        let base_schema = if let Some(reader_schema) = &self.reader_schema {
            reader_schema.clone()
        } else {
            let raw = header.get(SCHEMA_METADATA_KEY).ok_or_else(|| {
                ArrowError::ParseError("No Avro schema present in file header".to_string())
            })?;
            let json_string = std::str::from_utf8(raw)
                .map_err(|e| {
                    ArrowError::ParseError(format!("Invalid UTF-8 in Avro schema header: {e}"))
                })?
                .to_string();
            AvroSchema::new(json_string)
        };
        base_schema.project(projection)
    })
    .transpose()?;
Contributor


We should probably add more test coverage to the arrow-avro/src/reader/async_reader/builder.rs file.

(Screenshot, 2026-01-20, omitted.)

Author


Added tests. Not all the error cases in the builder are covered, but it looks better now.

github-actions bot removed the parquet (Changes to the parquet crate) label on Jan 25, 2026
@EmilyMatt
Author

@jecsand838 I've removed the parquet changes and synced with main. I believe this is ready for final reviews and test runs before merging.

CC: @alamb

@alamb
Contributor

alamb commented Jan 25, 2026

I started the tests

@EmilyMatt
Author

> I started the tests

Thx, most failures were technicalities; I believe I've fixed all of them.

Contributor

@jecsand838 jecsand838 left a comment


@EmilyMatt LGTM!

At this point I'm fine with anything else that comes up being a follow-up issue, if you are as well, @mzabaluev (unless it's major).

I just left a few final comments related to improving the docs for this PR.

@alamb

//! is enabled, [`AvroObjectReader`] provides integration with object storage services
//! such as S3 via the [object_store] crate.
//!
//! ```ignore
Contributor


Let's make this runnable

Suggested change:
- //! ```ignore
+ //! ```

Author

@EmilyMatt EmilyMatt Jan 27, 2026


I don't know how to do it without failing the tests, because all the code here is feature-gated, and the doctests also run without the features enabled.

Comment on lines 78 to 88
/// An asynchronous Avro file reader that implements `Stream<Item = Result<RecordBatch, ArrowError>>`.
/// This uses an [`AsyncFileReader`] to fetch data ranges as needed, starting with the header,
/// then reading all the blocks in the provided range:
/// 1. Reads and decodes data until the header is fully decoded.
/// 2. Searches from `range.start` for the first sync marker and starts with the block that follows.
///    (If `range.start` is less than the header length, we start at the header length minus the sync marker bytes.)
/// 3. Reads blocks sequentially, decoding them into RecordBatches.
/// 4. If a block is incomplete (because the range ends mid-block), fetches the remaining bytes from the [`AsyncFileReader`].
/// 5. If no range was originally provided, reads the full file.
/// 6. If the range is 0, `file_size` is 0, or `range.end` is less than the header length, finishes immediately.
pub struct AsyncAvroFileReader<R> {
Contributor


I'd recommend adding a good runnable example for AsyncAvroFileReader here.

Author


Added; also moved everything to use AvroError.
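
For reference, a doc example could plausibly follow the shape of the test posted earlier in this thread (the type names and builder call are as shown there; the file path is illustrative and error handling is elided):

    // Pieced together from the test earlier in this thread; the file path is
    // illustrative and errors are propagated with `?`.
    let store: Arc<dyn ObjectStore> = Arc::new(LocalFileSystem::new());
    let location = Path::from("data/example.avro");
    let file_size = store.head(&location).await?.size;

    let file_reader = AvroObjectReader::new(store, location);
    let reader = AsyncAvroFileReader::builder(file_reader, file_size, 1024)
        .try_build()
        .await?;

    // The reader is a Stream of RecordBatch results.
    let batches: Vec<RecordBatch> = reader.try_collect().await?;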

# Conflicts:
#	arrow-avro/Cargo.toml
#	arrow-avro/src/reader/mod.rs
const DEFAULT_HEADER_SIZE_HINT: u64 = 16 * 1024; // 16 KB

/// Builder for an asynchronous Avro file reader.
pub struct AsyncAvroFileReaderBuilder<R> {
Contributor


This name is unwieldy, though it does not need to be imported.

I'd rather do the idiomatic Rust thing: expose the module as public and export the builder with a terse name under the module path: crate::reader::async_reader::ReaderBuilder.

This is also because I want to add another builder typestate to this API in a follow-up PR, and I don't want its name to be even longer.

Author


This is already more or less what's happening.
I've renamed the builder to ReaderBuilder.
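
The pattern being settled on is roughly the following (a sketch of the convention, not the crate's actual module contents):

    // Expose the submodule publicly and give the builder a terse name under
    // the module path, so callers use a qualified path rather than importing
    // a long type name (sketch only; fields elided).
    pub mod async_reader {
        pub struct ReaderBuilder<R>(std::marker::PhantomData<R>);
    }

    // Referenced as: reader::async_reader::ReaderBuilder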

let current_data = self.reader.get_bytes(range_to_fetch.clone()).await.map_err(|err| {
    AvroError::General(format!(
-       "Error fetching Avro header from object store: {err}"
+       "Error fetching Avro header from object store(range: {range_to_fetch:?}): {err}"
Contributor


This looks like a debugging artefact; add a space after "store" at least?
