
PARQUET-2474: Add FIXED_SIZE_LIST logical type #241

Open · wants to merge 6 commits into base: master
Conversation

@rok (Member) commented May 15, 2024

As proposed in apache/arrow#34510 and on ML, PARQUET-2474.

Arrow recently introduced FixedShapeTensor and VariableShapeTensor canonical extension types that use FixedSizeList and StructArray(List, FixedSizeList) as storage, respectively. These are targeted at machine learning and scientific applications that deal with large datasets and would benefit from using Parquet as on-disk storage.

However, FixedSizeList is currently stored as List in Parquet, which adds significant conversion overhead when reading and writing, as discussed here. It would therefore be beneficial to introduce a FIXED_SIZE_LIST logical type to Parquet.
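For context, a fixed-size list today has to be written with the standard three-level `LIST` encoding, so every element carries repetition/definition levels even though the length is known. A sketch of the current layout (element type and field names are illustrative):

```
message current_layout {
  optional group vec (LIST) {
    repeated group list {
      required int32 element;
    }
  }
}
```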

@rok (Member Author) commented May 15, 2024

@etseidl (Contributor) left a comment:

Interesting way to get lists without repetition.

LogicalTypes.md Outdated
### FIXED_SIZE_LIST

The `FIXED_SIZE_LIST` annotation represents a fixed-size list of elements
of a primitive data type. It must annotate a `binary` primitive type.
Contributor:

"binary" means either fixed or variable length, right? I always get confused 😅.

Member:

Could you please provide a concrete example of how the list is structured? What about its definition & repetition levels? Intuitively, I would not limit it to the binary type. For example, it would be possible to support something like int[N] or double[N], and even a multi-dimensional list like int[M][N].

Contributor:

Perhaps use byte_array in this PR (see #251).

Member Author:

Will do, thanks!

@rok (Member Author), Jun 5, 2024:

> Could you please provide a concrete example of how the list is structured? What about its definition & repetition levels? Intuitively, I would not limit it to the binary type. For example, it would be possible to support something like int[N] or double[N], and even a multi-dimensional list like int[M][N].

I would represent the fixed sized list as a non-nested FIXED_LEN_BYTE_ARRAY + type + num_values. Multidimensional lists/arrays bring much more complexity that I'm not sure makes sense to store as a logical type (see FixedShapeTensor in Arrow). Also see #241 (comment).
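The proposed representation (a non-nested `FIXED_LEN_BYTE_ARRAY` plus an element type and `num_values`) can be sketched as plain byte packing. This is a minimal illustration, not an implementation; the function names are hypothetical and INT32 elements are assumed:

```python
import struct

# Hypothetical sketch of the proposed layout: a FIXED_SIZE_LIST of
# num_values INT32 elements stored as one FIXED_LEN_BYTE_ARRAY value of
# width num_values * 4, with elements laid out little-endian as in
# Parquet's PLAIN encoding.

NUM_VALUES = 3
ELEMENT_WIDTH = 4  # INT32
FLBA_WIDTH = NUM_VALUES * ELEMENT_WIDTH  # fixed_len_byte_array(12)

def pack_fixed_size_list(values):
    """Encode one list as a single FLBA value."""
    assert len(values) == NUM_VALUES
    return struct.pack("<%di" % NUM_VALUES, *values)

def unpack_fixed_size_list(flba):
    """Decode one FLBA value back into its elements."""
    assert len(flba) == FLBA_WIDTH
    return list(struct.unpack("<%di" % NUM_VALUES, flba))

row = pack_fixed_size_list([1, 2, 3])
assert len(row) == FLBA_WIDTH
assert unpack_fixed_size_list(row) == [1, 2, 3]
```

Because the value width is fixed, a reader can slice elements out of the FLBA bytes without consulting repetition levels at all.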

> Perhaps use byte_array in this PR (see #251).

Done.

@tustvold (Contributor) commented May 15, 2024

One thing to perhaps give thought to is how this might represent nested lists: say you wanted to encode an m by n matrix, would you just encode this as an m * n list, or do we want to support this as a first-class concept?

I had perhaps been anticipating that fixed size list would be a variant of "REPEATED" as opposed to a physical type, one that just avoids incrementing the max_def_level and max_rep_level. This would make it significantly more flexible, I think, although I concede it would make it harder to implement.

@wgtmac (Member) commented May 16, 2024

cc @JFinis

```
struct EnumType {}          // allowed for BINARY, must be encoded with UTF-8
struct DateType {}          // allowed for INT32
struct Float16Type {}       // allowed for FIXED[2], must be encoded with raw FLOAT16 bytes
struct FixedSizeListType {} // see LogicalTypes.md
```
Contributor:

Something is missing here. Shouldn't this type contain the element type? And the length of the list? The length of the list could be deduced from the size of the underlying fixed_len_byte_array, but at least the element type would be necessary then.

Member Author:

Changed to:

```
struct FixedSizeListType {        // allowed for FIXED_LEN_BYTE_ARRAY[num_values * width of type],
    1: required Type type;        // see LogicalTypes.md
    2: required i32 num_values;
}
struct VariableSizeListType {     // allowed for BYTE_ARRAY, see LogicalTypes.md
    1: required Type type;
}
```

@@ -255,6 +255,16 @@ The primitive type is a 2-byte fixed length binary.

The sort order for `FLOAT16` is signed (with special handling of NANs and signed zeros); it uses the same [logic](https://github.com/apache/parquet-format#sort-order) as `FLOAT` and `DOUBLE`.

### FIXED_SIZE_LIST
@JFinis (Contributor), May 16, 2024:

Interesting choice to annotate a binary primitive field instead of a repeated group field. I see pros and cons with this design:

PROs:

  • Guarantees zero-copy, as the layout is defined to be just bytes. In contrast, if this annotated a group, a writer could decide to use a fancy per-value encoding (e.g., dictionary) and thus create a list that first has to be "decoded" before it can be used.
  • Guarantees that a list is always contained on one page instead of being split over multiple pages. Again, this helps in keeping decoders easy and guaranteeing zero copy.
  • This solves the problem of redundant R-Levels. Since it's just a primitive column, no r-level considerations have to be taken into account.

CONs:

  • Cannot create fixed size lists of nested types (e.g., list of structs). I see that this isn't necessary for tensors or embedding vectors, but shouldn't the feature be extensible for other scenarios as well? This limits the composability of the feature. I can now create a struct of fixed size lists, but not a fixed size list of structs.
  • Cannot have null elements in fixed size lists. This might not be desired for all lists, but there can be use cases where having null values in them is preferable.
  • Parquet has a concept for (non-fixed size) lists. It is conceptually weird that fixed size lists are totally different from (non-fixed size) lists.

I think the PROs outweigh the CONs here, so this is fine with me. I just want everyone to be aware of the ramifications.

Contributor:

cc @tustvold, as you also brought up this point. I agree that having a new property of a repeated group would be more flexible, but it also comes at some cost, as outlined above. Also, it couldn't be just a logical type in this case, as a logical type cannot change the handling of R-Levels.

Member:

I'm now feeling that maybe wrapping a Vector[PrimitiveType, Size] is also ok, but currently representing this is a bit weird in the model. May I ask whether a Vector could hold data like the below?

1. [1, 1, 1], [null, 1, 1] <-- data with null
2. null, [1, 1, 1] <-- null vector

And could a vector contain a "nested" vector?

Member Author:

  • This solves the problem of redundant R-Levels. Since it's just a primitive column, no r-level considerations have to be taken into account.

This is the main reason I'd like to propose this type, see apache/arrow#34510.

  • Cannot create fixed size lists of nested types (e.g., list of structs). I see that this isn't necessary for tensors or embedding vectors, but shouldn't the feature be extensible for other scenarios as well? This limits the composability of the feature. I can now create a struct of fixed size lists, but not a fixed size list of structs.

Lack of composability is a downside, but I think it's still worth the compromise. I've not seen need for fixed_size_list(struct) in tensor computing, but that's probably just because it's not available.

  • Cannot have null elements in fixed size lists. This might not be desired for all lists, but there can be use cases where having null values in them is preferable.

In tensor computation this is usually addressed with bitmasks, which can be stored as a fixed_size_list(binary, num_values).
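The bitmask approach mentioned above can be sketched as follows. This is an illustrative companion-column encoding only, not part of the proposal's text; the function names are hypothetical and Arrow's LSB-first bit order is assumed:

```python
# Hypothetical sketch: since FIXED_SIZE_LIST elements cannot be null,
# element validity can be carried in a separate bitmask column, itself
# storable as a fixed-size list of bytes (one bit per element,
# least-significant bit first, as in Arrow validity buffers).

def pack_validity(valid_flags):
    """Pack booleans into a bitmask, least-significant bit first."""
    nbytes = (len(valid_flags) + 7) // 8
    buf = bytearray(nbytes)
    for i, ok in enumerate(valid_flags):
        if ok:
            buf[i // 8] |= 1 << (i % 8)
    return bytes(buf)

def is_valid(bitmask, i):
    """Test whether element i is valid (non-null)."""
    return bool(bitmask[i // 8] & (1 << (i % 8)))

mask = pack_validity([False, True, True])  # i.e. [null, 1, 1]
assert [is_valid(mask, i) for i in range(3)] == [False, True, True]
```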

  • Parquet has a concept for (non-fixed size) lists. It is conceptually weird that fixed size lists are totally different from (non-fixed size) lists.

Perhaps we should call this type FixedSizeArray to disambiguate?

> I'm now feeling that maybe wrapping a Vector[PrimitiveType, Size] is also ok, but currently representing this is a bit weird in the model. May I ask whether a Vector could hold data like the below?
>
> 1. [1, 1, 1], [null, 1, 1] <-- data with null
> 2. null, [1, 1, 1] <-- null vector
>
> And could a vector contain a "nested" vector?

I think case 2. is ok, but case 1. should be expressed with a separate null bitmask that's not part of the type.

@rok (Member Author) commented Jun 5, 2024

Apologies for taking a while to reply.

I've split this into two cases, FixedSizeListType (length is constant) and VariableSizeListType (length differs per row), for the sake of discussion. I would move VariableSizeListType into a separate PR if we decide it is even needed alongside ListType.

> One thing to perhaps give thought to is how this might represent nested lists: say you wanted to encode an m by n matrix, would you just encode this as an m * n list, or do we want to support this as a first-class concept?

We could start with a more general multidimensional array definition and have a list be a 1-dimensional array. The additional metadata required would not be that bad. I'm just a bit scared of validation and striding logic bleeding into Parquet implementations. Do we have any other inputs / opinions?
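The striding concern is easy to illustrate: flattening an m x n matrix into an m*n fixed-size list pushes an index computation onto every access, and a general multidimensional type would need readers to validate shapes and strides. A minimal row-major sketch (illustrative only, not part of the proposal):

```python
# Row-major flattening: element (i, j) of an m x n matrix lives at
# offset i * n + j in the flattened m*n fixed-size list.

def row_major_offset(i, j, n):
    """Offset of element (i, j) in a row-major flattened n-column matrix."""
    return i * n + j

m, n = 2, 3
flat = [1, 2, 3, 4, 5, 6]  # a 2x3 matrix stored as a 6-element list
assert flat[row_major_offset(1, 2, n)] == 6
```

With only a 1-dimensional fixed-size list in the format, this bookkeeping stays in the application (or in an extension type such as Arrow's FixedShapeTensor) rather than in every Parquet implementation.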

> I had perhaps been anticipating that fixed size list would be a variant of "REPEATED" as opposed to a physical type, one that just avoids incrementing the max_def_level and max_rep_level. This would make it significantly more flexible, I think, although I concede it would make it harder to implement.

That's interesting. What would you expect performance-wise with this approach?

@rok rok requested review from tustvold, mapleFU and wgtmac June 12, 2024 15:43
@rok rok marked this pull request as ready for review June 19, 2024 23:15
@rok rok requested review from etseidl and JFinis June 19, 2024 23:16
@etseidl (Contributor) left a comment:

Looking good to me. Just a few questions/comments. Thanks!

LogicalTypes.md Outdated
Comment on lines 264 to 265
The `FIXED_LEN_BYTE_ARRAY` data is interpreted as a fixed size sequence of
elements of the same primitive data type.
Contributor:

Should the encoding be defined as well? For instance, that the elements of the array are encoded in the same manner as PLAIN encoding?

Member Author:

Yes, that seems like a thing to specify. Changed to:

The `FIXED_LEN_BYTE_ARRAY` data is interpreted as a fixed size sequence of
elements of the same primitive data type encoded with plain encoding.

LogicalTypes.md Outdated
### FIXED_SIZE_LIST

The `FIXED_SIZE_LIST` annotation represents a fixed-size list of elements
of a primitive data type. It must annotate a `FIXED_LEN_BYTE_ARRAY` primitive type.
Contributor:

As written, the elements can themselves be arrays. Is this intended? Or should it be "non-array primitive data type"?

Member Author:

I didn't really consider the possibility of elements being arrays and I think non-array limitation makes sense. Changed to:

The `FIXED_SIZE_LIST` annotation represents a fixed-size list of elements
of a non-array primitive data type. It must annotate a `FIXED_LEN_BYTE_ARRAY` primitive type.
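Putting the pieces together, a schema using the annotation might look like the following. This is a hypothetical sketch based on the thrift struct proposed above (the annotation's parameter syntax and field name are assumptions, in the style of LogicalTypes.md examples):

```
// a fixed-size list of 3 INT32 values: 3 * 4 = 12 bytes per value
message example_schema {
  optional fixed_len_byte_array(12) vec (FIXED_SIZE_LIST(INT32, 3));
}
```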

@rok rok requested a review from etseidl June 24, 2024 17:19
@rok (Member Author) commented Jun 24, 2024

Thanks for the review, @etseidl! I've updated this with your suggestions.

@alippai commented Jun 24, 2024

@ritchie46 would this be useful for your new polars Array type?

@alippai commented Oct 18, 2024

@rok is there anything I can help with?

@mapleFU I saw your questions above. Are you satisfied with the answers?

@coastalwhite I see you are familiar with Parquet and Array in Polars. Do you think this proposal is useful for your project?

@coastalwhite commented Oct 21, 2024

I like the general idea of moving FixedSizeList partially away from List and towards FixedSizeBinary, but I doubt it would lead to serious speedups or simplification possibilities.

The List-based deserializer most of the time already batches decoding similarly to what this would allow, although this would allow skipping many checks that happen before the actual deserialization takes place. We would also still need to support the old path for a long time, since a lot of people write Parquet files using old versions of the specification and generally use old Parquet files.

The one potentially large upside I can imagine is getting dictionary encoding for arrays, but I am not sure how common that will be in real-world scenarios.

In general, I would say I am in favor, although I am not 100% convinced yet that the added complexity will result in significant performance, file-size, or other benefits.

@alippai commented Oct 21, 2024

@coastalwhite there is a 10x penalty in Polars 1.9.0 Parquet reading as well, using this snippet: apache/arrow#34510 (comment)

@rok (Member Author) commented Oct 21, 2024

> @rok is there anything I can help with?

@alippai thanks for pinging. I was advised on the Parquet sync call to re-open a ML discussion on this, but I need a couple of weeks to get to it. If you'd like, you can start it now; here's the existing thread: https://lists.apache.org/thread/xot5f3ghhtc82n1bf0wdl9zqwlrzqks3
I suppose it'd be useful to report on the pros and cons discussed here and propose we move forward.

@coastalwhite:

> @coastalwhite there is a 10x penalty in Polars 1.9.0 parquet reading as well using this snippet: apache/arrow#34510 (comment)

Thank you for bringing that to my attention. Still, I feel like that is more of a bug than an inherent performance problem in the Parquet file format. However, it is probably easier to optimize for what is proposed in this PR.

@alippai commented Oct 21, 2024

@rok based on the ML discussion, we should add the fast path in Polars, Arrow, and arrow-rs for the cases where we already know the fixed size (from the schema stored in the metadata, or if it's provided by the consumer). This is more fragile and less universal, but maybe a good first step forward.

@rok (Member Author) commented Oct 22, 2024

> @rok based on the ML discussion we should add the fast path in the cases of polars, arrow and arrow-rs where we know the fixed size already (from schema stored in the metadata or if it's provided by the consumer). This is more fragile and less universal, but maybe a good first step forward

@alippai are you sure we have a strong enough consensus yet to start implementing fast paths? I would really like to have some more discussion before committing.

@alippai commented Oct 22, 2024

@rok Sorry, wrong phrasing. I meant that was the recommendation to explore on the ML and by @coastalwhite.

I didn't see objections to adding this feature to the Parquet format, or commitments to adding the fast path in any of the libraries (Arrow C++ actually noted it's a non-trivial part of the codebase).

@rok (Member Author) commented Oct 22, 2024

Sorry for my abundance of caution @alippai. I'll try to summarize this thread to the ML and ask for some more input ASAP. It would be nice to actually start some work on this.

@tustvold (Contributor) commented Oct 22, 2024

Some points in no particular order:

  • The Parquet schema is authoritative, with any other schema information merely a hint; this makes the notion of using the Arrow schema, or something else, to drive decoding a little dubious
  • The record shredding logic for lists is the single most complex, confusing and subtle aspect of any parquet reader, which:
    • Limits the pool of people who can implement / review such changes
    • Sets a very high bar for including such changes
  • Even some optimal record shredding setup will never perform better than an implementation that can simply skip it entirely
  • Both arrow-rs and polars exploit that the hybrid RLE is effectively a bitmask if the max definition level is only 1, this allows for very efficient decode. This isn't possible when there are repetition levels
  • Performant record skipping, e.g. for predicate/index pushdown or late materialization, is not really possible against data with repetition levels¹
  • Many readers have quirky support for repetition levels and lists in general, especially w.r.t areas where the specification has been ambiguous in the past (and some where it still is), finding ways for people to avoid these pain points seems valuable

That's all to say that providing a way to encode fixed size lists seems like a very useful capability. That being said, it does seem a bit of a hack to make this a logical type, and it will potentially limit the options for encodings, statistics, sort orders, etc. In particular, the lack of dictionary encoding could be a non-trivial sacrifice.

¹ In fact, I think arrow-rs may be one of the few readers that actually implements it.
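The bitmask observation above (with max_definition_level == 1, the hybrid RLE/bit-packed definition levels are effectively a validity bitmask) can be sketched with a minimal decoder for the two run kinds in Parquet's hybrid encoding. The function name is hypothetical and the run-header varint is assumed already parsed:

```python
# With bit width 1, a bit-packed run's payload bytes ARE the validity
# bitmask; an RLE run is a repeated single level. Header LSB selects
# the run kind, per the Parquet RLE/bit-packed hybrid encoding.

def decode_rle_run(header_value, data):
    """Decode one run of bit-width-1 definition levels."""
    if header_value & 1:
        # bit-packed run: header >> 1 = number of 8-value groups,
        # one byte per group at bit width 1, LSB first
        ngroups = header_value >> 1
        bits = []
        for byte in data[:ngroups]:
            bits.extend((byte >> k) & 1 for k in range(8))
        return bits
    else:
        # RLE run: header >> 1 = repeat count, value in one byte
        count = header_value >> 1
        return [data[0] & 1] * count

# a bit-packed run of one group: byte 0b00000110 -> levels 0,1,1,0,0,0,0,0
assert decode_rle_run((1 << 1) | 1, bytes([0b00000110]))[:3] == [0, 1, 1]
# an RLE run: 5 copies of level 1 (5 consecutive non-null values)
assert decode_rle_run(5 << 1, bytes([1])) == [1, 1, 1, 1, 1]
```

This is why the max-def-level-1 case decodes so cheaply, and why adding repetition levels (as lists do today) forfeits that shortcut.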
