Core, Parquet: Allow for Writing Parquet/Avro Manifests in V4 #15634
RussellSpitzer wants to merge 5 commits into apache:main
Conversation
I have missed some deprecations and test rewrites; going to split those off to another branch to resolve first so as not to confuse this PR.
anoopj left a comment
Code looks great to me. A couple of minor comments.
  if (reuse != null) {
-   this.lastList = reuse;
+   // reuse containers may come from a different reader (e.g. Avro) with incompatible types
+   this.lastList = reuse instanceof ArrayList ? reuse : null;
Could you add a brief comment to explain why this looks for ArrayList? (was not super obvious to me). Same for the LinkedHashMap on line 980
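The reuse guard in the diff above can be sketched in isolation. The class below is a hypothetical stand-in (not Iceberg's actual ParquetValueReaders), showing why reuse is only accepted when the container is the concrete type this reader allocates:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for a Parquet list reader; not Iceberg's actual class.
class ListReader {
  private List<String> lastList;

  // Accept a reuse container only if it is the concrete type this reader
  // produces. A container handed back from a different reader (e.g. the Avro
  // path) may be an incompatible List implementation, so we allocate a fresh
  // ArrayList instead of mutating the foreign container in place.
  List<String> prepare(List<String> reuse) {
    this.lastList = reuse instanceof ArrayList ? reuse : null;
    if (this.lastList == null) {
      this.lastList = new ArrayList<>();
    } else {
      this.lastList.clear();
    }
    return this.lastList;
  }
}
```

A compatible ArrayList is cleared and reused; anything else is left untouched and a new container is returned.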
FileFormat manifestFormat = FileFormat.fromFileName(inputFile.location());
Preconditions.checkArgument(
    manifestFormat == FileFormat.AVRO,
    "Reading manifest metadata is only supported for Avro manifests: %s",
I assume read support will come in a separate PR? Not sure if it would be useful, but #14577 is a quick and dirty prototype I did last year.
I'll look for the old discussions, but we basically came to a consensus that we just weren't going to read metadata out of manifests in this code path. All the deprecation work I've been doing is to remove that dependency, so we won't ever have a read-metadata pathway for internal data readers that aren't Avro.
Now that I fixed all the tests, there is no code inside the Iceberg project that relies on this code path.
Could you please check out #15656? It has about 1k more net changes to handle deprecations I missed. After that I'll come back to this one and finish it up. Thanks for checking on this PR as well!
Force-pushed a5cb9ac to 41d2c09 (compare)
Force-pushed e42de44 to 5877a84 (compare)
@anoopj Ok! Everything is good for review now. Please take a look if you have a moment.
@@ -18,23 +18,15 @@
 */
package org.apache.iceberg;
I split off the Compression Benchmark into its own file.
// For older versions where the empty struct is present, making it optional is harmless.
List<Types.NestedField> fields = Lists.newArrayList();
fields.addAll(projection.asStruct().fields());
for (Types.NestedField field : projection.asStruct().fields()) {
As the large comment above says, this is where we handle the fact that Parquet doesn't like empty structs. We just inject an "empty, optional" Parquet field so we can get nulls in that slot.
As mentioned above, we can clean this up after v4 data structures get added, since we are folding partition into content_stats.
Yes, I'm very excited for that. Not sure if you want to get that in first and then this, or vice versa.
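The read-side fixup being discussed can be sketched with a simplified field model (a hedged stand-in; the real code operates on Types.NestedField from Iceberg's schema classes). The idea is to mark the empty partition struct optional so the projection still resolves and the reader simply produces null in that slot:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for a projected schema field; not Iceberg's NestedField.
record Field(String name, boolean optional, boolean emptyStruct) {
  Field asOptional() {
    return new Field(name, true, emptyStruct);
  }
}

class ProjectionFixup {
  // Parquet cannot represent an empty struct, so the writer omits the
  // partition column for unpartitioned tables. On read, marking the empty
  // struct optional keeps its position in the projection while letting the
  // reader fill the slot with null. For older versions where the empty
  // struct is present, making it optional is harmless.
  static List<Field> fixEmptyStructs(List<Field> projection) {
    List<Field> fields = new ArrayList<>();
    for (Field field : projection) {
      fields.add(field.emptyStruct() ? field.asOptional() : field);
    }
    return fields;
  }
}
```

Keeping the field in the projection (rather than dropping it) preserves positional access for the other columns.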
}
fields.add(MetadataColumns.ROW_POSITION);

CloseableIterable<ManifestEntry<F>> reader =
Looking back, we don't need to change this at all now; I'll clean that up.
fields.add(DataFile.CONTENT.asRequired());
fields.add(DataFile.FILE_PATH);
fields.add(DataFile.FILE_FORMAT);
if (!partitionType.fields().isEmpty()) {
We need to omit this field for unpartitioned tables if we are using Parquet, because Parquet doesn't support the empty struct.
In v4, we are folding partition into content_stats, so this problem goes away when we merge the v4 TrackedFile code in.
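The write-side counterpart can be sketched like this, using plain strings in place of the DataFile field constants from the diff (a hypothetical simplification, not the actual manifest schema code):

```java
import java.util.ArrayList;
import java.util.List;

class ManifestSchemaBuilder {
  // Build the writer-side field list. The partition struct is skipped
  // entirely for unpartitioned tables because Parquet rejects empty structs;
  // the field names mirror the DataFile constants used in the diff above.
  static List<String> writeFields(int partitionFieldCount) {
    List<String> fields = new ArrayList<>(List.of("content", "file_path", "file_format"));
    if (partitionFieldCount > 0) {
      fields.add("partition");
    }
    return fields;
  }
}
```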
}

@SuppressWarnings("checkstyle:HiddenField")
void validateSnapshot(
We had to make some changes here; they weren't actually related to V4 but were exposed by the "missing partition spec" field.
Basically we had some tests, like TestReplacePartitions, that would validate against the wrong table. That happened to work because Avro didn't care if the "SPEC" information was incorrect; it would just go ahead and read the manifests. With Parquet there is now a missing field if the table is unpartitioned, so you get an actual read error when you validate with the wrong table.
assertThat(TestTables.metadataVersion("unpartitioned")).isEqualTo(0);

commit(table, unpartitioned.newAppend().appendFile(FILE_A), branch);
commit(unpartitioned, unpartitioned.newAppend().appendFile(FILE_A), branch);
Here are tests that were previously validating against the wrong table.
fields.add(DataFile.CONTENT.asRequired());
fields.add(DataFile.FILE_PATH);
fields.add(DataFile.FILE_FORMAT);
if (!partitionType.fields().isEmpty()) {
In v4, we are folding partition into content_stats, so this problem goes away when we merge the v4 TrackedFile code in.
Long firstRowId,
Map<String, String> writerProperties) {
this.file = file.encryptingOutputFile();
this.format = FileFormat.fromFileName(file.encryptingOutputFile().location());
V4 will always have Parquet, right? So the decision to use the right extension is made somewhere upstream?
@rdblue and I were debating whether or not we want to let engines still choose whether they want to use Avro. So for the moment I'm leaving it open; we can always close it down later when we actually change the spec.
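The extension-driven choice can be sketched with a small enum (a hypothetical stand-in for Iceberg's FileFormat.fromFileName, which covers more formats): whoever names the output file upstream effectively picks the manifest format, which is what leaves the Avro-vs-Parquet decision open to engines.

```java
import java.util.Locale;

// Hypothetical stand-in for FileFormat; Iceberg's real enum covers more formats.
enum ManifestFormat {
  AVRO, PARQUET;

  // Derive the manifest format from the file extension, mirroring the
  // FileFormat.fromFileName(...) call in the diff above.
  static ManifestFormat fromFileName(String location) {
    String lower = location.toLowerCase(Locale.ROOT);
    if (lower.endsWith(".parquet")) {
      return PARQUET;
    } else if (lower.endsWith(".avro")) {
      return AVRO;
    }
    throw new IllegalArgumentException("Cannot determine manifest format: " + location);
  }
}
```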
// For older versions where the empty struct is present, making it optional is harmless.
List<Types.NestedField> fields = Lists.newArrayList();
fields.addAll(projection.asStruct().fields());
for (Types.NestedField field : projection.asStruct().fields()) {
As mentioned above, we can clean this up after v4 data structures get added, since we are folding partition into content_stats.
anoopj left a comment
A big thank you for doing this, BTW. The benchmarking etc. is so thorough.
…et by Default

Extends V4 Manifest writer to allow it to write manifests in either Parquet or Avro based on the file extension. A default is also added to do Parquet Manifests in the SDK when the Version is 4. This could be parameterized later, but that will require parameterizing the test suites, so I decided on a single format (Parquet) for now.

There are a few other required changes here outside of testing:
1. Handling of splitOffsets in Parquet needs to be changed since BaseFile returns an immutable view which Parquet was attempting to re-use by clearing.
2. Unpartitioned tables need special care since Parquet cannot store empty structs in the schema. This means reading from Parquet manifests requires skipping the Parquet field and then changing read offsets if the partition is not defined. The read code is shared between all versions at this time, so this change affects older Avro readers as well.
3. Some of the test code for TestReplacePartitions assumed that you could validate against a slightly different version of the table. This is a problem if the table you make is partitioned and the validation table is unpartitioned. It used to work accidentally, I think, because we would make unpartitioned operations committed to a partitioned table.
- ManifestReader: Mark partition field optional for unpartitioned tables instead of removing it from the projection, preserving positional access and avoiding ClassCastException from shifted ordinals
- BaseFile: Deep copy ByteBuffer values in copyByteBufferMap to prevent Parquet container reuse from corrupting bounds in copied files, which caused equality deletes to fail stats-based overlap checks
- BaseFile: Guard against null partition value in internalSet
- TestRewriteTablePathsAction: Simplify manifest file predicate to use name patterns instead of file extensions
- Collapse broken builder chain in ManifestReader.open() into a single fluent expression
- Extract manifest format determination in SnapshotProducer into a private field computed once in the constructor
- Replace magic format version 4 with TableMetadata.MIN_FORMAT_VERSION_PARQUET_MANIFESTS in tests
- Parameterize TestManifestFileUtil across all format versions
- Fix TestJdbcCatalog.manifestFiles to use exclusion filter instead of allowlisting file extensions
- Improve ParquetValueReaders container reuse comments to reference specific BaseFile fields
Replace instanceof-then-cast with Java 16+ pattern matching to eliminate redundant casts in outputFile() and keyMetadataBuffer().
Force pushed after rebasing on the test replace partitions fix #15798
Force-pushed 1d1a626 to 49fe6e2 (compare)

Extends V4 Manifest writer to allow it to write manifests in either Parquet or Avro based on the file extension. A default is also added to do Parquet Manifests in the SDK when the Version is 4. This could be parameterized later but that will require parameterizing the test suites so I decided on a single format (parquet) for now.
There are a few other required changes here outside of testing
Handling of splitOffsets in Parquet needs to be changed since BaseFile returns an immutable view which Parquet was attempting to re-use by clearing.
Unpartitioned tables need special care since Parquet cannot store empty structs in the schema. This means reading from Parquet manifests requires skipping the Parquet field and then changing read offsets if the partition is not defined. The read code is shared between all versions at this time, so this change affects older Avro readers as well.
Some of the test code for TestReplacePartitions assumed that you could validate against a slightly different version of the table. This is a problem if the table you make is partitioned and the validation table is unpartitioned. It used to work accidentally, I think, because we would make unpartitioned operations committed to a partitioned table.
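The splitOffsets change above can be illustrated with plain collections (a sketch, not the actual BaseFile code): the file exposes an unmodifiable view, so a writer that tries to reuse the container by clearing it must copy first.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch of the immutable-view problem; not the actual BaseFile implementation.
class FileWithOffsets {
  private final List<Long> offsets = new ArrayList<>(List.of(4L, 1048576L));

  // Callers get an unmodifiable view, so clearing it for container reuse
  // throws UnsupportedOperationException on the writer side.
  List<Long> splitOffsets() {
    return Collections.unmodifiableList(offsets);
  }
}

class OffsetWriter {
  // Defensive copy into a container the writer owns and may clear and refill.
  static List<Long> reusableCopy(List<Long> maybeImmutable) {
    return new ArrayList<>(maybeImmutable);
  }
}
```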
--- Some Benchmarks
Note this is all done with full reads. While we expect writes to be slower, reads should be faster when we actually do column-specific projection. Since in this code the Avro and Parquet read paths are both doing full scans, we don't expect them to be materially different.
I also deleted the old Manifest benchmarks, which were specific to V1 and V1/V2 respectively, and replaced them with a new benchmark that can be used on any version.