1 - Geospatial Definitions
This document contains the specification of geospatial types and statistics.
Background
The Geometry and Geography class hierarchy and its Well-Known Text (WKT) and
Well-Known Binary (WKB) serializations (ISO variant supporting XY, XYZ, XYM,
XYZM) are defined by OpenGIS Implementation Specification for Geographic
information - Simple feature access - Part 1: Common architecture,
from the OGC (Open Geospatial Consortium).
The version of the OGC standard first used here is 1.2.1, but future versions
may also be used if the WKB representation remains wire-compatible.
Coordinate Reference System
Coordinate Reference System (CRS) is a mapping of how coordinates refer to
locations on Earth.
The default CRS OGC:CRS84 means that the geospatial features must be stored
in longitude/latitude order based on the WGS84 datum.
A custom CRS can be specified by a string value. It is recommended to use an
identifier-based approach, such as a spatial reference identifier (SRID).
For geographic CRS, longitudes are bound by [-180, 180] and latitudes are bound
by [-90, 90].
Edge Interpolation Algorithm
An algorithm for interpolating edges; it is one of the following values:
SPHERICAL, VINCENTY, THOMAS, ANDOYER, or KARNEY.
Logical Types
Two geospatial logical type annotations are supported:
- GEOMETRY: geospatial features in the WKB format with linear/planar edges interpolation. See Geometry.
- GEOGRAPHY: geospatial features in the WKB format with an explicit (non-linear/non-planar) edges interpolation algorithm. See Geography.
Statistics
GeospatialStatistics is a struct specific for GEOMETRY and GEOGRAPHY
logical types to store statistics of a column chunk. It is an optional field in
the ColumnMetaData and contains Bounding Box and Geospatial
Types that are described below in detail.
Bounding Box
A geospatial instance has at least two coordinate dimensions: X and Y for 2D
coordinates of each point. Please note that X is longitude/easting and Y is
latitude/northing. A geospatial instance can optionally have Z and/or M values
associated with each point.
The Z values introduce the third dimension coordinate. Usually they are used to
indicate the height, or elevation.
M values are an opportunity for a geospatial instance to track a value in a
fourth dimension. These values can be used as a linear reference value (e.g.,
highway milepost value), a timestamp, or some other value as defined by the CRS.
The bounding box is defined as the thrift struct below, representing the
min/max value pair of coordinates for each axis. Note that X and Y values are
always present. Z and M are omitted for 2D geospatial instances.
When calculating a bounding box, null or NaN values in a coordinate
dimension are skipped. For example, POINT (1 NaN) contributes a value to X
but no values to Y, Z, or M dimension of the bounding box. If a dimension has
only null or NaN values, that dimension is omitted from the bounding box. If
either the X or Y dimension is missing, then the bounding box itself is not
produced.
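For illustration, a minimal sketch of this per-dimension rule (the helper name dimension_range is hypothetical, not part of this specification):

import math

def dimension_range(values):
    # Min/max for one coordinate dimension, skipping null and NaN values.
    # Returns None when the dimension has no finite values, in which case
    # it is omitted from the bounding box.
    finite = [v for v in values if v is not None and not math.isnan(v)]
    if not finite:
        return None
    return (min(finite), max(finite))

# POINT (1 NaN) contributes a value to X but none to Y.
assert dimension_range([1.0]) == (1.0, 1.0)
assert dimension_range([float("nan")]) is None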
For the X values only, xmin may be greater than xmax. In this case, an object
in this bounding box may match if it contains an X such that x >= xmin OR
x <= xmax. This wraparound occurs only when the corresponding bounding box
crosses the antimeridian line. In geographic terminology, the concepts of xmin,
xmax, ymin, and ymax are also known as westernmost, easternmost,
southernmost and northernmost, respectively.
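The wraparound rule translates into a simple containment test; the sketch below is illustrative, not normative:

def x_in_bbox(x, xmin, xmax):
    # Ordinary case: a contiguous X range.
    if xmin <= xmax:
        return xmin <= x <= xmax
    # Wraparound case: the bounding box crosses the antimeridian.
    return x >= xmin or x <= xmax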
For GEOGRAPHY types, X and Y values are restricted to the canonical ranges of
[-180, 180] for X and [-90, 90] for Y.
struct BoundingBox {
1: required double xmin;
2: required double xmax;
3: required double ymin;
4: required double ymax;
5: optional double zmin;
6: optional double zmax;
7: optional double mmin;
8: optional double mmax;
}
Geospatial Types
A list of geospatial types from all instances in the GEOMETRY or GEOGRAPHY
column, or an empty list if they are not known.
This is borrowed from geometry_types of GeoParquet except that
values in the list are WKB (ISO-variant) integer codes.
The table below shows the most common geospatial types and their codes:
| Type | XY | XYZ | XYM | XYZM |
|---|---|---|---|---|
| Point | 0001 | 1001 | 2001 | 3001 |
| LineString | 0002 | 1002 | 2002 | 3002 |
| Polygon | 0003 | 1003 | 2003 | 3003 |
| MultiPoint | 0004 | 1004 | 2004 | 3004 |
| MultiLineString | 0005 | 1005 | 2005 | 3005 |
| MultiPolygon | 0006 | 1006 | 2006 | 3006 |
| GeometryCollection | 0007 | 1007 | 2007 | 3007 |
In addition, the following rules are applied:
- A list of multiple values indicates that multiple geospatial types are present (e.g. [0003, 0006]).
- An empty array explicitly signals that the geospatial types are not known.
- The geospatial types in the list must be unique (e.g. [0001, 0001] is not valid).
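The codes follow the ISO WKB convention of adding a dimension offset to the XY base code: 1000 for XYZ, 2000 for XYM, and 3000 for XYZM. A small sketch (names are illustrative):

BASE_TYPES = {"Point": 1, "LineString": 2, "Polygon": 3, "MultiPoint": 4,
              "MultiLineString": 5, "MultiPolygon": 6, "GeometryCollection": 7}

def iso_wkb_code(base, has_z=False, has_m=False):
    # XYZ adds 1000, XYM adds 2000, XYZM adds 3000
    return BASE_TYPES[base] + (1000 if has_z else 0) + (2000 if has_m else 0)

assert iso_wkb_code("Polygon") == 3
assert iso_wkb_code("Point", has_z=True, has_m=True) == 3001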
CRS Customization
CRS is represented as a string value. Writer and reader implementations are
responsible for serializing and deserializing the CRS, respectively.
As a convention to maximize interoperability, custom CRS values can be
specified by a string of the format type:identifier, where type is one of
the following values:
- srid: Spatial reference identifier, identifier is the SRID itself.
- projjson: PROJJSON, identifier is the name of a table property or a file property where the projjson string is stored.
Coordinate axis order
The axis order of the coordinates in WKB and bounding box stored in Parquet
follows the de facto standard for axis order in WKB and is therefore always
(x, y) where x is easting or longitude and y is northing or latitude. This
ordering explicitly overrides the axis order as specified in the CRS.
2 - Parquet Logical Type Definitions
Logical types are used to extend the types that parquet can be used to store,
by specifying how the primitive types should be interpreted. This keeps the set
of primitive types to a minimum and reuses parquet’s efficient encodings. For
example, strings are stored with the primitive type BYTE_ARRAY with a STRING
annotation.
This file contains the specification for all logical types.
The parquet format’s LogicalType stores the type annotation. The annotation
may require additional metadata fields, as well as rules for those fields.
There is an older representation of the logical type annotations called ConvertedType.
To support backward compatibility with old files, readers should interpret LogicalTypes
in the same way as ConvertedType, and writers should populate ConvertedType in the metadata
according to well defined conversion rules.
Compatibility
The Thrift definition of the metadata has two fields for logical types: ConvertedType and LogicalType.
ConvertedType is an enum of all available annotations. Since Thrift enums can’t have additional type parameters,
it is cumbersome to define parameters such as decimal scale and precision
(which are additional 32-bit integer fields on SchemaElement, and are relevant only for decimals) or the time unit
and UTC adjustment flag for Timestamp types. To overcome this problem, a new logical type representation, LogicalType,
was introduced into the metadata to replace ConvertedType. The new representation is a union of structs of logical types,
allowing a more flexible API in which logical types can have type parameters.
ConvertedType is deprecated. However, to maintain compatibility with old writers,
Parquet readers should be able to read and interpret ConvertedType annotations
in case LogicalType annotations are not present. Parquet writers must always write
LogicalType annotations where applicable, but must also write the corresponding
ConvertedType annotations (if any) to maintain compatibility with old readers.
Compatibility considerations are mentioned for each annotation in the corresponding section.
String Types
STRING
STRING may only be used to annotate the BYTE_ARRAY primitive type and indicates
that the byte array should be interpreted as a UTF-8 encoded character string.
The sort order used for STRING strings is unsigned byte-wise comparison.
Compatibility
STRING corresponds to UTF8 ConvertedType.
ENUM
ENUM annotates the BYTE_ARRAY primitive type and indicates that the value
was converted from an enumerated type in another data model (e.g. Thrift, Avro, Protobuf).
Applications using a data model lacking a native enum type should interpret ENUM
annotated field as a UTF-8 encoded string.
The sort order used for ENUM values is unsigned byte-wise comparison.
UUID
UUID annotates a 16-byte FIXED_LEN_BYTE_ARRAY primitive type. The value is
encoded using big-endian, so that 00112233-4455-6677-8899-aabbccddeeff is encoded
as the bytes 00 11 22 33 44 55 66 77 88 99 aa bb cc dd ee ff
(This example is from Wikipedia’s UUID page).
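As a quick check of the byte order, Python's uuid module exposes exactly this big-endian form:

import uuid

u = uuid.UUID("00112233-4455-6677-8899-aabbccddeeff")
# .bytes is the big-endian encoding described above
assert u.bytes.hex() == "00112233445566778899aabbccddeeff"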
The sort order used for UUID values is unsigned byte-wise comparison.
Numeric Types
Signed Integers
INT annotation can be used to specify the maximum number of bits in the stored value.
The annotation has two parameters: bit width and sign.
Allowed bit width values are 8, 16, 32, 64, and sign can be true or false.
For signed integers, the second parameter should be true;
for example, a signed integer with a bit width of 8 is defined as INT(8, true).
Implementations may use these annotations to produce smaller
in-memory representations when reading data.
If a stored value is larger than the maximum allowed by the annotation, the
behavior is not defined and can be determined by the implementation.
Implementations must not write values that are larger than the annotation
allows.
INT(8, true), INT(16, true), and INT(32, true) must annotate an int32 primitive type and
INT(64, true) must annotate an int64 primitive type. INT(32, true) and INT(64, true) are
implied by the int32 and int64 primitive types if no other annotation is
present and should be considered optional.
The sort order used for signed integer types is signed.
Unsigned Integers
INT annotation can be used to specify unsigned integer types,
along with a maximum number of bits in the stored value.
The annotation has two parameters: bit width and sign.
Allowed bit width values are 8, 16, 32, 64, and sign can be true or false.
For unsigned integers, the second parameter should be false;
for example, an unsigned integer with a bit width of 8 is defined as INT(8, false).
Implementations may use these annotations to produce smaller
in-memory representations when reading data.
If a stored value is larger than the maximum allowed by the annotation, the
behavior is not defined and can be determined by the implementation.
Implementations must not write values that are larger than the annotation
allows.
INT(8, false), INT(16, false), and INT(32, false) must annotate an int32 primitive type and
INT(64, false) must annotate an int64 primitive type.
The sort order used for unsigned integer types is unsigned.
Deprecated integer ConvertedType
INT_8, INT_16, INT_32, and INT_64 annotations can be also used to specify
signed integers with 8, 16, 32, or 64 bit width.
INT_8, INT_16, and INT_32 must annotate an int32 primitive type and
INT_64 must annotate an int64 primitive type. INT_32 and INT_64 are
implied by the int32 and int64 primitive types if no other annotation is
present and should be considered optional.
UINT_8, UINT_16, UINT_32, and UINT_64 annotations can be also used to specify
unsigned integers with 8, 16, 32, or 64 bit width.
UINT_8, UINT_16, and UINT_32 must annotate an int32 primitive type and
UINT_64 must annotate an int64 primitive type.
Backward compatibility:
| ConvertedType | LogicalType |
|---|---|
| INT_8 | IntType (bitWidth = 8, isSigned = true) |
| INT_16 | IntType (bitWidth = 16, isSigned = true) |
| INT_32 | IntType (bitWidth = 32, isSigned = true) |
| INT_64 | IntType (bitWidth = 64, isSigned = true) |
| UINT_8 | IntType (bitWidth = 8, isSigned = false) |
| UINT_16 | IntType (bitWidth = 16, isSigned = false) |
| UINT_32 | IntType (bitWidth = 32, isSigned = false) |
| UINT_64 | IntType (bitWidth = 64, isSigned = false) |
Forward compatibility:
| LogicalType | Parameters | ConvertedType |
|---|---|---|
| IntType | isSigned, bitWidth = 8 | INT_8 |
| IntType | isSigned, bitWidth = 16 | INT_16 |
| IntType | isSigned, bitWidth = 32 | INT_32 |
| IntType | isSigned, bitWidth = 64 | INT_64 |
| IntType | !isSigned, bitWidth = 8 | UINT_8 |
| IntType | !isSigned, bitWidth = 16 | UINT_16 |
| IntType | !isSigned, bitWidth = 32 | UINT_32 |
| IntType | !isSigned, bitWidth = 64 | UINT_64 |
DECIMAL
DECIMAL annotation represents arbitrary-precision signed decimal numbers of
the form unscaledValue * 10^(-scale).
The primitive type stores an unscaled integer value. For BYTE_ARRAY and
FIXED_LEN_BYTE_ARRAY, the unscaled number must be encoded as two’s complement using
big-endian byte order (the most significant byte is the zeroth element). The
scale stores the number of digits of that value that are to the right of the
decimal point, and the precision stores the maximum number of digits supported
in the unscaled value.
If not specified, the scale is 0. Scale must be zero or a positive integer less
than or equal to the precision. Precision is required and must be a non-zero positive
integer. A precision too large for the underlying type (see below) is an error.
DECIMAL can be used to annotate the following types:
- int32: for 1 <= precision <= 9
- int64: for 1 <= precision <= 18; precision < 10 will produce a warning
- fixed_len_byte_array: precision is limited by the array size. Length n can store <= floor(log_10(2^(8*n - 1) - 1)) base-10 digits
- byte_array: precision is not limited, but is required. The minimum number of bytes to store the unscaled value should be used.
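The fixed_len_byte_array limit can be checked directly against the formula above (a sketch; the helper name is illustrative):

import math

def max_precision(n_bytes):
    # Largest base-10 precision a big-endian two's-complement value of
    # n_bytes bytes can hold: floor(log_10(2^(8*n - 1) - 1))
    return math.floor(math.log10(2 ** (8 * n_bytes - 1) - 1))

assert max_precision(4) == 9    # matches the int32 limit
assert max_precision(8) == 18   # matches the int64 limit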
The sort order used for DECIMAL values is signed comparison of the represented
value.
If the column uses int32 or int64 physical types, then signed comparison of
the integer values produces the correct ordering. If the physical type is
fixed, then the correct ordering can be produced by flipping the
most-significant bit in the first byte and then using unsigned byte-wise
comparison.
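A sketch of that most-significant-bit flip for fixed-length values (the helper name is illustrative):

def sortable_key(raw: bytes) -> bytes:
    # Flip the sign bit so that unsigned byte-wise comparison of the keys
    # matches signed comparison of the two's-complement values.
    return bytes([raw[0] ^ 0x80]) + raw[1:]

# -1 (0xFF) sorts below +1 (0x01) after the flip
assert sortable_key(b"\xff") < sortable_key(b"\x01")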
Compatibility
To support compatibility with older readers, implementations of parquet-format should
write DecimalType precision and scale into the corresponding SchemaElement field in metadata.
FLOAT16
The FLOAT16 annotation represents half-precision floating-point numbers in the 2-byte IEEE little-endian format.
Used in contexts where precision is traded off for smaller footprint and potentially better performance.
The primitive type is a 2-byte FIXED_LEN_BYTE_ARRAY.
The sort order for FLOAT16 is signed (with special handling of NaNs and signed zeros); it uses the same logic as FLOAT and DOUBLE.
Temporal Types
DATE
DATE is used for a logical date type, without a time of day. It must
annotate an int32 that stores the number of days from the Unix epoch, 1
January 1970.
The sort order used for DATE is signed.
TIME
TIME is used for a logical time type without a date, with millisecond,
microsecond, or nanosecond precision.
The type has two type parameters: UTC adjustment (true or false)
and unit (MILLIS, MICROS, or NANOS).
TIME with unit MILLIS is used for millisecond precision.
It must annotate an int32 that stores the number of
milliseconds after midnight.
TIME with unit MICROS is used for microsecond precision.
It must annotate an int64 that stores the number of
microseconds after midnight.
TIME with unit NANOS is used for nanosecond precision.
It must annotate an int64 that stores the number of
nanoseconds after midnight.
The sort order used for TIME is signed.
Deprecated time ConvertedType
TIME_MILLIS is the deprecated ConvertedType counterpart of a TIME logical
type that is UTC normalized and has MILLIS precision. Like the logical type
counterpart, it must annotate an int32.
TIME_MICROS is the deprecated ConvertedType counterpart of a TIME logical
type that is UTC normalized and has MICROS precision. Like the logical type
counterpart, it must annotate an int64.
Although there is no exact corresponding ConvertedType for local time semantics,
in order to support forward compatibility with libraries that annotated
their local times with the legacy TIME_MICROS and TIME_MILLIS annotations,
Parquet writer implementations must annotate local times with the legacy
annotations too, as shown below.
Backward compatibility:
| ConvertedType | LogicalType |
|---|---|
| TIME_MILLIS | TimeType (isAdjustedToUTC = true, unit = MILLIS) |
| TIME_MICROS | TimeType (isAdjustedToUTC = true, unit = MICROS) |
Forward compatibility:
| LogicalType | Parameters | ConvertedType |
|---|---|---|
| TimeType | isAdjustedToUTC = true, unit = MILLIS | TIME_MILLIS |
| TimeType | isAdjustedToUTC = true, unit = MICROS | TIME_MICROS |
| TimeType | isAdjustedToUTC = true, unit = NANOS | - |
| TimeType | isAdjustedToUTC = false, unit = MILLIS | TIME_MILLIS |
| TimeType | isAdjustedToUTC = false, unit = MICROS | TIME_MICROS |
| TimeType | isAdjustedToUTC = false, unit = NANOS | - |
TIMESTAMP
In data annotated with the TIMESTAMP logical type, each value is a single
int64 number that can be decoded into year, month, day, hour, minute, second
and subsecond fields using calculations detailed below. Please note that a value
defined this way does not necessarily correspond to a single instant on the
time-line and such interpretations are allowed on purpose.
The TIMESTAMP type has two type parameters:
- isAdjustedToUTC must be either true or false.
- unit must be one of MILLIS, MICROS or NANOS. This list is subject to potential expansion in the future. Upon reading, unknown units must be handled as unsupported features (rather than as errors in the data files).
Instant semantics (timestamps normalized to UTC)
A TIMESTAMP with isAdjustedToUTC=true is defined as the number of
milliseconds, microseconds or nanoseconds (depending on the unit
parameter being MILLIS, MICROS or NANOS, respectively) elapsed since the
Unix epoch, 1970-01-01 00:00:00 UTC. Each such value unambiguously identifies a
single instant on the time-line.
For example, in a TIMESTAMP(isAdjustedToUTC=true, unit=MILLIS), the
number 172800000 corresponds to 1970-01-03 00:00:00 UTC, because it is equal to
2 * 24 * 60 * 60 * 1000, therefore it is exactly two days from the reference
point, the Unix epoch. In Java, this calculation can be achieved by calling
Instant.ofEpochMilli(172800000).
As a slightly more complicated example, if one wants to store 1970-01-03
00:00:00 (UTC+01:00) as a TIMESTAMP(isAdjustedToUTC=true, unit=MILLIS),
first the time zone offset has to be dealt with. By normalizing the timestamp to
UTC, we calculate what time in UTC corresponds to the same instant: 1970-01-02
23:00:00 UTC. This is 1 day and 23 hours after the epoch, therefore it can be
encoded as the number (24 + 23) * 60 * 60 * 1000 = 169200000.
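The same arithmetic can be verified in a few lines of Python (the examples in this section use Java's Instant API; this sketch is merely illustrative):

from datetime import datetime, timezone, timedelta

# 1970-01-03 00:00:00 at UTC+01:00 is the instant 1970-01-02 23:00:00 UTC
local = datetime(1970, 1, 3, tzinfo=timezone(timedelta(hours=1)))
assert int(local.timestamp() * 1000) == (24 + 23) * 60 * 60 * 1000  # 169200000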
Please note that time zone information gets lost in this process. Upon reading a
value back, we can only reconstruct the instant, but not the original
representation. In practice, such timestamps are typically displayed to users in
their local time zones, therefore they may be displayed differently depending on
the execution environment.
Local semantics (timestamps not normalized to UTC)
A TIMESTAMP with isAdjustedToUTC=false represents year, month, day, hour,
minute, second and subsecond fields in a local timezone, regardless of what
specific time zone is considered local. This means that such timestamps should
always be displayed the same way, regardless of the local time zone in effect.
On the other hand, without additional information such as an offset or
time-zone, these values do not identify instants on the time-line unambiguously,
because the corresponding instants would depend on the local time zone.
Using a single number to represent a local timestamp is a lot less intuitive
than for instants. One must use a local timestamp as the reference point and
calculate the elapsed time between the actual timestamp and the reference point.
The problem is that the result may depend on the local time zone, for example
because there may have been a daylight saving time change between the two
timestamps.
The solution to this problem is to use a simplification that makes the result
easy to calculate and independent of the timezone. Treating every day as
consisting of exactly 86400 seconds and ignoring DST changes altogether allows
us to unambiguously represent a local timestamp as a difference from a reference
local timestamp. We define the reference local timestamp to be 1970-01-01
00:00:00 (note the lack of UTC at the end, as this is not an instant). This way
the encoding of local timestamp values becomes very similar to the encoding of
instant values. For example, in a TIMESTAMP(isAdjustedToUTC=false, unit=MILLIS), the number 172800000 corresponds to 1970-01-03 00:00:00
(note the lack of UTC at the end), because it is exactly two days from the
reference point (172800000 = 2 * 24 * 60 * 60 * 1000).
Another way to get to the same definition is to treat the local timestamp value
as if it were in UTC and store it as an instant. For example, if we treat the
local timestamp 1970-01-03 00:00:00 as if it were the instant 1970-01-03
00:00:00 UTC, we can store it as 172800000. When reading 172800000 back, we can
reconstruct the instant 1970-01-03 00:00:00 UTC and convert it to a local
timestamp as if we were in the UTC time zone, resulting in 1970-01-03
00:00:00. In Java, this can be achieved by calling
LocalDateTime.ofEpochSecond(172800, 0, ZoneOffset.UTC).
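An equivalent check in Python (illustrative only):

from datetime import datetime, timezone

# Treat 172800000 ms as if it were UTC, then drop the zone to recover
# the local timestamp 1970-01-03 00:00:00.
local = datetime.fromtimestamp(172800000 / 1000, tz=timezone.utc).replace(tzinfo=None)
assert local == datetime(1970, 1, 3)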
Please note that while from a practical point of view this second definition is
equivalent to the first one, from a theoretical point of view only the first
definition can be considered correct, the second one just “incidentally” leads
to the same results. Nevertheless, this second definition is worth mentioning as
well, because it is relatively widespread and it can lead to confusion,
especially due to its usage of UTC in the calculations. One can stumble upon
code, comments and specifications ambiguously stating that a timestamp “is
stored in UTC”. In some contexts, it means that it is normalized to UTC and
acts as an instant. In some other contexts though, it means the exact opposite,
namely that the timestamp is stored as if it were in UTC and acts as a
local timestamp in reality.
Common considerations
Every possible int64 number represents a valid timestamp, but depending on the
precision, the corresponding year may be outside of the practical everyday
limits and implementations may choose to only support a limited range.
On the other hand, not every combination of year, month, day, hour, minute,
second and subsecond values can be encoded into an int64. Most notably:
- An arbitrary combination of timestamp fields can not be encoded as a single
number if the values for some of the fields are outside of their normal range
(where the “normal range” corresponds to everyday usage). For example, none
of the following can be represented in a timestamp:
- hour = -1
- hour = 25
- minute = 61
- month = 13
- day = 29, month = 2, year = any non-leap year
- Due to the range of the
int64 type, timestamps using the NANOS unit
can only represent values between 1677-09-21 00:12:43 and 2262-04-11 23:47:16.
Values outside of this range can not be represented with the NANOS
unit. (Other precisions have similar limits but those are outside of the
domain for practical everyday usage.)
The sort order used for TIMESTAMP is signed.
Deprecated timestamp ConvertedType
TIMESTAMP_MILLIS is the deprecated ConvertedType counterpart of a TIMESTAMP
logical type that is UTC normalized and has MILLIS precision. Like the logical
type counterpart, it must annotate an int64.
TIMESTAMP_MICROS is the deprecated ConvertedType counterpart of a TIMESTAMP
logical type that is UTC normalized and has MICROS precision. Like the logical
type counterpart, it must annotate an int64.
Although there is no exact corresponding ConvertedType for local timestamp semantics,
in order to support forward compatibility with libraries that annotated
their local timestamps with the legacy TIMESTAMP_MICROS and TIMESTAMP_MILLIS annotations,
Parquet writer implementations must annotate local timestamps with the legacy
annotations too, as shown below.
Backward compatibility:
| ConvertedType | LogicalType |
|---|---|
| TIMESTAMP_MILLIS | TimestampType (isAdjustedToUTC = true, unit = MILLIS) |
| TIMESTAMP_MICROS | TimestampType (isAdjustedToUTC = true, unit = MICROS) |
Forward compatibility:
| LogicalType | Parameters | ConvertedType |
|---|---|---|
| TimestampType | isAdjustedToUTC = true, unit = MILLIS | TIMESTAMP_MILLIS |
| TimestampType | isAdjustedToUTC = true, unit = MICROS | TIMESTAMP_MICROS |
| TimestampType | isAdjustedToUTC = true, unit = NANOS | - |
| TimestampType | isAdjustedToUTC = false, unit = MILLIS | TIMESTAMP_MILLIS |
| TimestampType | isAdjustedToUTC = false, unit = MICROS | TIMESTAMP_MICROS |
| TimestampType | isAdjustedToUTC = false, unit = NANOS | - |
INTERVAL
INTERVAL is used for an interval of time. It must annotate a
fixed_len_byte_array of length 12. This array stores three little-endian
unsigned integers that represent durations at different granularities of time.
The first stores a number in months, the second stores a number in days, and
the third stores a number in milliseconds. This representation is independent
of any particular timezone or date.
Each component in this representation is independent of the others. For
example, there is no requirement that a large number of days should be
expressed as a mix of months and days because there is not a constant
conversion from days to months.
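A sketch of the 12-byte layout using Python's struct module (the helper name is illustrative):

import struct

def encode_interval(months, days, millis):
    # Three unsigned little-endian 32-bit integers: months, days, milliseconds
    return struct.pack("<III", months, days, millis)

assert len(encode_interval(1, 2, 3000)) == 12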
The sort order used for INTERVAL is undefined. When writing data, no min/max
statistics should be saved for this type and if such non-compliant statistics
are found during reading, they must be ignored.
Embedded Types
Embedded types do not have type-specific orderings.
JSON
JSON is used for an embedded JSON document. It must annotate a BYTE_ARRAY
primitive type. The BYTE_ARRAY data is interpreted as a UTF-8 encoded character
string of valid JSON, as defined by the JSON specification.
The sort order used for JSON is unsigned byte-wise comparison.
BSON
BSON is used for an embedded BSON document. It must annotate a BYTE_ARRAY
primitive type. The BYTE_ARRAY data is interpreted as an encoded BSON document as
defined by the BSON specification.
The sort order used for BSON is unsigned byte-wise comparison.
VARIANT
VARIANT is used for a Variant value. It must annotate a group. The group must
contain a field named metadata and a field named value. Both fields must have
type binary, which is also called BYTE_ARRAY in the Parquet thrift definition.
The VARIANT annotated group can be used to store either an unshredded Variant
value, or a shredded Variant value.
- The Variant group must be annotated with the VARIANT logical type.
- Both fields value and metadata must be of type binary (called BYTE_ARRAY in the Parquet thrift definition).
- The metadata field is required and must be a valid Variant metadata component, as defined by the Variant binary encoding specification.
- When present, the value field must be a valid Variant value component, as defined by the Variant binary encoding specification.
- The value field is required for unshredded Variant values.
- The value field is optional and may be null only when parts of the Variant value are shredded according to the Variant shredding specification.
This is the expected representation of an unshredded Variant in Parquet:
optional group variant_unshredded (VARIANT) {
required binary metadata;
required binary value;
}
This is an example representation of a shredded Variant in Parquet:
optional group variant_shredded (VARIANT) {
required binary metadata;
optional binary value;
optional int64 typed_value;
}
GEOMETRY
GEOMETRY is used for geospatial features in the Well-Known Binary (WKB) format
with linear/planar edges interpolation. It must annotate a BYTE_ARRAY
primitive type. See Geospatial.md for more detail.
The type has only one type parameter:
- crs: An optional string value for CRS. If unset, the CRS defaults to "OGC:CRS84", which means that the geometries must be stored in longitude, latitude based on the WGS84 datum.
The sort order used for GEOMETRY is undefined. When writing data, no min/max
statistics should be saved for this type and if such non-compliant statistics
are found during reading, they must be ignored.
GEOGRAPHY
GEOGRAPHY is used for geospatial features in the WKB format with an explicit
(non-linear/non-planar) edges interpolation algorithm. It must annotate a
BYTE_ARRAY primitive type. See Geospatial.md for more detail.
The type has two type parameters:
- crs: An optional string value for CRS. It must be a geographic CRS, where longitudes are bound by [-180, 180] and latitudes are bound by [-90, 90]. If unset, the CRS defaults to "OGC:CRS84".
- algorithm: An optional enum value that describes the edge interpolation algorithm. Supported values are: SPHERICAL, VINCENTY, THOMAS, ANDOYER, KARNEY. If unset, the algorithm defaults to SPHERICAL.
The sort order used for GEOGRAPHY is undefined. When writing data, no min/max
statistics should be saved for this type and if such non-compliant statistics
are found during reading, they must be ignored.
Nested Types
This section specifies how LIST and MAP can be used to encode nested types
by adding group levels around repeated fields that are not present in the data.
This does not affect repeated fields that are not annotated: A repeated field
that is neither contained by a LIST- or MAP-annotated group nor annotated
by LIST or MAP should be interpreted as a required list of required
elements where the element type is the type of the field.
WARNING: writers should not produce list types like these examples! They are
just for the purpose of reading existing data for backward-compatibility.
// List<Integer> (non-null list, non-null elements)
repeated int32 num;
// List<Tuple<Integer, String>> (non-null list, non-null elements)
repeated group my_list {
required int32 num;
optional binary str (STRING);
}
For all fields in the schema, implementations should use either LIST and
MAP annotations or unannotated repeated fields, but not both. When using
the annotations, no unannotated repeated types are allowed.
Lists
LIST is used to annotate types that should be interpreted as lists.
LIST must always annotate a 3-level structure:
<list-repetition> group <name> (LIST) {
repeated group list {
<element-repetition> <element-type> element;
}
}
- The outer-most level must be a group annotated with LIST that contains a single field named list. The repetition of this level must be either optional or required and determines whether the list is nullable.
- The middle level, named list, must be a repeated group with a single field named element.
- The element field encodes the list’s element type and repetition. Element repetition must be required or optional.
The following examples demonstrate two of the possible lists of string values.
// List<String> (list non-null, elements nullable)
required group my_list (LIST) {
repeated group list {
optional binary element (STRING);
}
}
// List<String> (list nullable, elements non-null)
optional group my_list (LIST) {
repeated group list {
required binary element (STRING);
}
}
Element types can be nested structures. For example, a list of lists:
// List<List<Integer>>
optional group array_of_arrays (LIST) {
repeated group list {
required group element (LIST) {
repeated group list {
required int32 element;
}
}
}
}
Backward-compatibility rules
New writer implementations should always produce the 3-level LIST structure shown
above. However, historically data files have been produced that use different
structures to represent list-like data, and readers may include compatibility
measures to interpret them as intended.
It is required that the repeated group of elements is named list and that
its element field is named element. However, these names may not be used in
existing data and should not be enforced as errors when reading. For example,
the following field schema should produce a nullable list of non-null strings,
even though the repeated group is named element.
optional group my_list (LIST) {
repeated group element {
required binary str (STRING);
};
}
Some existing data does not include the inner element layer, resulting in a
LIST that annotates a 2-level structure. Unlike the 3-level structure, the
repetition of a 2-level structure can be optional, required, or repeated.
When it is repeated, the LIST-annotated 2-level structure can only serve as
an element within another LIST-annotated 2-level structure.
For backward-compatibility, the type of elements in LIST-annotated structures
should always be determined by the following rules:
- If the repeated field is not a group, then its type is the element type and elements are required.
- If the repeated field is a group with multiple fields, then its type is the element type and elements are required.
- If the repeated field is a group with one field with repeated repetition, then its type is the element type and elements are required.
- If the repeated field is a group with one field and is named either array or uses the LIST-annotated group’s name with _tuple appended, then the repeated type is the element type and elements are required.
- Otherwise, the repeated field’s type is the element type with the repeated field’s repetition.
Examples that can be interpreted using these rules:
WARNING: writers should not produce list types like these examples! They are
just for the purpose of reading existing data for backward-compatibility.
// Rule 1: List<Integer> (nullable list, non-null elements)
optional group my_list (LIST) {
repeated int32 element;
}
// Rule 2: List<Tuple<String, Integer>> (nullable list, non-null elements)
optional group my_list (LIST) {
repeated group element {
required binary str (STRING);
required int32 num;
};
}
// Rule 3: List<List<Integer>> (nullable outer list, non-null elements)
optional group my_list (LIST) {
repeated group array (LIST) {
repeated int32 array;
};
}
// Rule 4: List<OneTuple<String>> (nullable list, non-null elements)
optional group my_list (LIST) {
repeated group array {
required binary str (STRING);
};
}
// Rule 4: List<OneTuple<String>> (nullable list, non-null elements)
optional group my_list (LIST) {
repeated group my_list_tuple {
required binary str (STRING);
};
}
// Rule 5: List<String> (nullable list, nullable elements)
optional group my_list (LIST) {
repeated group element {
optional binary str (STRING);
};
}
Maps
MAP is used to annotate types that should be interpreted as a map from keys
to values. MAP must annotate a 3-level structure:
<map-repetition> group <name> (MAP) {
repeated group key_value {
required <key-type> key;
<value-repetition> <value-type> value;
}
}
- The outer-most level must be a group annotated with MAP that contains a single field named key_value. The repetition of this level must be either optional or required and determines whether the map is nullable.
- The middle level, named key_value, must be a repeated group with a key field for map keys and, optionally, a value field for map values. It must not contain any other fields.
- The key field encodes the map’s key type. This field must have repetition required and must always be present. It must always be the first field of the repeated key_value group.
- The value field encodes the map’s value type and repetition. This field can be required, optional, or omitted. If present, it must always be the second field of the repeated key_value group. When omitted, the data can be interpreted as a map with all-null values or as a set of keys.
The following example demonstrates the type for a non-null map from strings to
nullable integers:
// Map<String, Integer>
required group my_map (MAP) {
repeated group key_value {
required binary key (STRING);
optional int32 value;
}
}
If there are multiple key-value pairs for the same key, then the final value
for that key must be the last value. Other values may be ignored or may be
added with replacement to the map container in the order that they are encoded.
The MAP annotation should not be used to encode multi-maps using duplicate
keys.
Backward-compatibility rules
It is required that the repeated group of key-value pairs is named key_value
and that its fields are named key and value. However, these names may not
be used in existing data and should not be enforced as errors when reading.
(key and value can be identified by their position in case of misnaming.)
Some existing data incorrectly used MAP_KEY_VALUE in place of MAP. For
backward-compatibility, a group annotated with MAP_KEY_VALUE that is not
contained by a MAP-annotated group should be handled as a MAP-annotated
group.
Examples that can be interpreted using these rules:
// Map<String, Integer> (nullable map, non-null values)
optional group my_map (MAP) {
repeated group map {
required binary str (STRING);
required int32 num;
}
}
// Map<String, Integer> (nullable map, nullable values)
optional group my_map (MAP_KEY_VALUE) {
repeated group map {
required binary key (STRING);
optional int32 value;
}
}
UNKNOWN (always null)
Sometimes, when discovering the schema of existing data, values are always null
and there’s no type information.
The UNKNOWN type can be used to annotate a column that is always null.
(Similar to Null type in Avro and Arrow)
3 - Variant Shredding
The Variant type is designed to store and process semi-structured data efficiently, even with heterogeneous values.
Query engines encode each Variant value in a self-describing format, and store it as a group containing value and metadata binary fields in Parquet.
Since data is often partially homogeneous, it can be beneficial to extract certain fields into separate Parquet columns to further improve performance.
This process is called shredding.
Shredding enables the use of Parquet’s columnar representation for more compact data encoding, column statistics for data skipping, and partial projections.
For example, the query SELECT variant_get(event, '$.event_ts', 'timestamp') FROM tbl only needs to load field event_ts, and if that column is shredded, it can be read by columnar projection without reading or deserializing the rest of the event Variant.
Similarly, for the query SELECT * FROM tbl WHERE variant_get(event, '$.event_type', 'string') = 'signup', the event_type shredded column metadata can be used for skipping and to lazily load the rest of the Variant.
Variant metadata is stored in the top-level Variant group in a binary metadata column regardless of whether the Variant value is shredded.
All value columns within the Variant must use the same metadata.
All field names of a Variant, whether shredded or not, must be present in the metadata.
Value Shredding
Variant values are stored in Parquet fields named value.
Each value field may have an associated shredded field named typed_value that stores the value when it matches a specific type.
When typed_value is present, readers must reconstruct shredded values according to this specification.
For example, a Variant field, measurement, may be shredded as long values by adding a typed_value field with type int64:
required group measurement (VARIANT) {
required binary metadata;
optional binary value;
optional int64 typed_value;
}
The Parquet columns used to store variant metadata and values must be accessed by name, not by position.
The series of measurements 34, null, "n/a", 100 would be stored as:
| Value | metadata | value | typed_value |
|---|---|---|---|
| 34 | 01 00 v1/empty | null | 34 |
| null | 01 00 v1/empty | 00 (null) | null |
| “n/a” | 01 00 v1/empty | 13 6E 2F 61 (n/a) | null |
| 100 | 01 00 v1/empty | null | 100 |
Both value and typed_value are optional fields used together to encode a single value.
Values in the two fields must be interpreted according to the following table:
| value | typed_value | Meaning |
|---|---|---|
| null | null | The value is missing; only valid for shredded object fields |
| non-null | null | The value is present and may be any type, including null |
| null | non-null | The value is present and is the shredded type |
| non-null | non-null | The value is present and is a partially shredded object |
An object is partially shredded when the value is an object and the typed_value is a shredded object.
Writers must not produce data where both value and typed_value are non-null, unless the Variant value is an object.
If a Variant is missing in a context where a value is required, readers must return a Variant null (00): basic type 0 (primitive) and physical type 0 (null).
For example, if a Variant is required (like measurement above) and both value and typed_value are null, the returned value must be 00 (Variant null).
Shredded Value Types
Shredded values must use the following Parquet types:
| Variant Type | Parquet Physical Type | Parquet Logical Type |
|---|---|---|
| boolean | BOOLEAN | |
| int8 | INT32 | INT(8, signed=true) |
| int16 | INT32 | INT(16, signed=true) |
| int32 | INT32 | |
| int64 | INT64 | |
| float | FLOAT | |
| double | DOUBLE | |
| decimal4 | INT32 | DECIMAL(P, S) |
| decimal8 | INT64 | DECIMAL(P, S) |
| decimal16 | BYTE_ARRAY / FIXED_LEN_BYTE_ARRAY | DECIMAL(P, S) |
| date | INT32 | DATE |
| time | INT64 | TIME(false, MICROS) |
| timestamptz(6) | INT64 | TIMESTAMP(true, MICROS) |
| timestamptz(9) | INT64 | TIMESTAMP(true, NANOS) |
| timestampntz(6) | INT64 | TIMESTAMP(false, MICROS) |
| timestampntz(9) | INT64 | TIMESTAMP(false, NANOS) |
| binary | BINARY | |
| string | BINARY | STRING |
| uuid | FIXED_LEN_BYTE_ARRAY[len=16] | UUID |
| array | GROUP; see Arrays below | LIST |
| object | GROUP; see Objects below | |
Primitive Types
Primitive values can be shredded using the equivalent Parquet primitive type from the table above for typed_value.
Unless the value is shredded as an object (see Objects), typed_value or value (but not both) must be non-null.
Arrays
Arrays can be shredded by using a 3-level Parquet list for typed_value.
If the value is not an array, typed_value must be null.
If the value is an array, value must be null.
The list element must be a required group.
The element group can contain value and typed_value fields.
The element’s value field stores the element as Variant-encoded binary when the typed_value is not present or cannot represent it.
The typed_value field may be omitted when not shredding elements as a specific type.
The value field may be omitted when shredding elements as a specific type.
However, at least one of the two fields must be present.
For example, a tags Variant may be shredded as a list of strings using the following definition:
optional group tags (VARIANT) {
required binary metadata;
optional binary value;
optional group typed_value (LIST) { # must be optional to allow a null list
repeated group list {
required group element { # shredded element
optional binary value;
optional binary typed_value (STRING);
}
}
}
}
All elements of an array must be present (not missing) because the array Variant encoding does not allow missing elements.
That is, either typed_value or value (but not both) must be non-null.
Null elements must be encoded in value as Variant null: basic type 0 (primitive) and physical type 0 (null).
The series of tags arrays ["comedy", "drama"], ["horror", null], ["comedy", "drama", "romance"], null would be stored as:
| Array | value | typed_value | typed_value...value | typed_value...typed_value |
|---|---|---|---|---|
| ["comedy", "drama"] | null | non-null | [null, null] | [comedy, drama] |
| ["horror", null] | null | non-null | [null, 00] | [horror, null] |
| ["comedy", "drama", "romance"] | null | non-null | [null, null, null] | [comedy, drama, romance] |
| null | 00 (null) | null | | |
Objects
Fields of an object can be shredded using a Parquet group for typed_value that contains shredded fields.
If the value is an object, typed_value must be non-null.
If the value is not an object, typed_value must be null.
Readers can assume that a value is not an object if typed_value is null and that typed_value field values are correct; that is, readers do not need to read the value column if typed_value fields satisfy the required fields.
Each shredded field in the typed_value group is represented as a required group that contains optional value and typed_value fields.
The value field stores the value as Variant-encoded binary when the typed_value cannot represent the field.
This layout enables readers to skip data based on the field statistics for value and typed_value.
The typed_value field may be omitted when not shredding fields as a specific type.
The value column of a partially shredded object must never contain fields represented by the Parquet columns in typed_value (shredded fields).
Readers may always assume that data is written correctly and that shredded fields in typed_value are not present in value.
As a result, reads when a field is defined in both value and a typed_value shredded field may be inconsistent.
For example, a Variant event field may shred event_type (string) and event_ts (timestamp) columns using the following definition:
optional group event (VARIANT) {
required binary metadata;
optional binary value; # a variant, expected to be an object
optional group typed_value { # shredded fields for the variant object
required group event_type { # shredded field for event_type
optional binary value;
optional binary typed_value (STRING);
}
required group event_ts { # shredded field for event_ts
optional binary value;
optional int64 typed_value (TIMESTAMP(true, MICROS));
}
}
}
The group for each named field must use repetition level required.
A field’s value and typed_value are set to null (missing) to indicate that the field does not exist in the variant.
To encode a field that is present with a null value, the value must contain a Variant null: basic type 0 (primitive) and physical type 0 (null).
When both value and typed_value for a field are non-null, engines should fail.
If engines choose to read in such cases, then the typed_value column must be used.
Readers may always assume that data is written correctly and that only value or typed_value is defined.
As a result, reads when both value and typed_value are defined may be inconsistent with optimized reads that require only one of the columns.
The table below shows how the series of objects in the first column would be stored:
| Event object | value | typed_value | typed_value.event_type.value | typed_value.event_type.typed_value | typed_value.event_ts.value | typed_value.event_ts.typed_value | Notes |
|---|---|---|---|---|---|---|---|
| {"event_type": "noop", "event_ts": 1729794114937} | null | non-null | null | noop | null | 1729794114937 | Fully shredded object |
| {"event_type": "login", "event_ts": 1729794146402, "email": "user@example.com"} | {"email": "user@example.com"} | non-null | null | login | null | 1729794146402 | Partially shredded object |
| {"error_msg": "malformed: ..."} | {"error_msg": "malformed: ..."} | non-null | null | null | null | null | Object with all shredded fields missing |
| "malformed: not an object" | malformed: not an object | null | | | | | Not an object (stored as Variant string) |
| {"event_ts": 1729794240241, "click": "_button"} | {"click": "_button"} | non-null | null | null | null | 1729794240241 | Field event_type is missing |
| {"event_type": null, "event_ts": 1729794954163} | null | non-null | 00 (field exists, is null) | null | null | 1729794954163 | Field event_type is present and is null |
| {"event_type": "noop", "event_ts": "2024-10-24"} | null | non-null | null | noop | "2024-10-24" | null | Field event_ts is present but not a timestamp |
| { } | null | non-null | null | null | null | null | Object is present but empty |
| null | 00 (null) | null | | | | | Object/value is null |
| missing | null | null | | | | | Object/value is missing |
| INVALID: {"event_type": "login", "event_ts": 1729795057774} | {"event_type": "login"} | non-null | null | login | null | 1729795057774 | INVALID: Shredded field is present in value |
| INVALID: {"event_type": "login"} | {"event_type": "login"} | null | | | | | INVALID: Shredded field is present in value, while typed_value is null |
| INVALID: "a" | "a" | non-null | null | null | null | null | INVALID: typed_value is present and value is not an object |
| INVALID: {} | 02 00 (object with 0 fields) | null | | | | | INVALID: typed_value is null for object |
Invalid cases in the table above must not be produced by writers.
When typed_value is non-null, readers must return an object containing the shredded fields.
Nesting
The typed_value associated with any Variant value field can be any shredded type, as shown in the sections above.
For example, the event object above may also shred sub-fields as object (location) or array (tags).
optional group event (VARIANT) {
required binary metadata;
optional binary value;
optional group typed_value {
required group event_type {
optional binary value;
optional binary typed_value (STRING);
}
required group event_ts {
optional binary value;
optional int64 typed_value (TIMESTAMP(true, MICROS));
}
required group location {
optional binary value;
optional group typed_value {
required group latitude {
optional binary value;
optional double typed_value;
}
required group longitude {
optional binary value;
optional double typed_value;
}
}
}
required group tags {
optional binary value;
optional group typed_value (LIST) {
repeated group list {
required group element {
optional binary value;
optional binary typed_value (STRING);
}
}
}
}
}
}
Data Skipping
Statistics for typed_value columns can be used for file, row group, or page skipping when value is always null (missing).
When the corresponding value column is all nulls, all values must be the shredded typed_value field’s type.
Because the type is known, comparisons with values of that type are valid.
IS NULL/IS NOT NULL and IS NAN/IS NOT NAN filter results are also valid.
Comparisons with values of other types are not necessarily valid and data should not be skipped.
Casting behavior for Variant is delegated to processing engines.
For example, the interpretation of a string as a timestamp may depend on the engine’s SQL session time zone.
Reconstructing a Shredded Variant
It is possible to recover an unshredded Variant value using a recursive algorithm, where the initial call is to construct_variant with the top-level Variant group fields.
def construct_variant(metadata: Metadata, value: Variant, typed_value: Any) -> Variant:
    """Constructs a Variant from value and typed_value"""
    if typed_value is not None:
        if isinstance(typed_value, dict):
            # this is a shredded object
            object_fields = {
                name: construct_variant(metadata, field.value, field.typed_value)
                for (name, field) in typed_value.items()
            }
            if value is not None:
                # this is a partially shredded object
                assert isinstance(value, VariantObject), "partially shredded value must be an object"
                assert typed_value.keys().isdisjoint(value.keys()), "object keys must be disjoint"
                # union the shredded fields and non-shredded fields
                return VariantObject(metadata, object_fields).union(VariantObject(metadata, value))
            else:
                return VariantObject(metadata, object_fields)
        elif isinstance(typed_value, list):
            # this is a shredded array
            assert value is None, "shredded array must not conflict with variant value"
            elements = [
                construct_variant(metadata, elem.value, elem.typed_value)
                for elem in typed_value
            ]
            return VariantArray(metadata, elements)
        else:
            # this is a shredded primitive
            assert value is None, "shredded primitive must not conflict with variant value"
            return primitive_to_variant(typed_value)
    elif value is not None:
        return Variant(metadata, value)
    else:
        # value is missing
        return None

def primitive_to_variant(typed_value: Any) -> Variant:
    if isinstance(typed_value, int):
        return VariantInteger(typed_value)
    elif isinstance(typed_value, str):
        return VariantString(typed_value)
    ...
Backward and forward compatibility
Shredding is an optional feature of Variant, and readers must continue to be able to read a group containing only value and metadata fields.
Engines that do not write shredded values must be able to read shredded values according to this spec or must fail.
Different files may contain conflicting shredding schemas.
That is, files may contain different typed_value columns for the same Variant with incompatible types.
It may not be possible to infer or specify a single shredded schema that would allow all Parquet files for a table to be read without reconstructing the value as a Variant.
4 - Variant Binary Encoding
A Variant represents a type that contains one of:
- Primitive: A type and corresponding value (e.g. INT, STRING)
- Array: An ordered list of Variant values
- Object: An unordered collection of string/Variant pairs (i.e. key/value pairs). An object may not contain duplicate keys.
A Variant is encoded with 2 binary values, the value and the metadata.
There are a fixed number of allowed primitive types, provided in the table below.
These represent a commonly supported subset of the logical types allowed by the Parquet format.
The Variant Binary Encoding allows representation of semi-structured data (e.g. JSON) in a form that can be efficiently queried by path.
The design is intended to allow efficient access to nested data even in the presence of very wide or deep structures.
Another motivation for the representation is that (aside from metadata) each nested Variant value is contiguous and self-contained.
For example, in a Variant containing an Array of Variant values, the representation of an inner Variant value, when paired with the metadata of the full variant, is itself a valid Variant.
This document describes the Variant Binary Encoding scheme.
Variant fields can also be shredded.
Shredding refers to extracting some elements of the variant into separate columns for more efficient extraction/filter pushdown.
The Variant Shredding specification describes the details of shredding Variant values as typed Parquet columns.
Variant in Parquet
A Variant value in Parquet is represented by a group with 2 fields, named value and metadata.
- The Variant group must be annotated with the
VARIANT logical type. - Both fields
value and metadata must be of type binary (called BYTE_ARRAY in the Parquet thrift definition). - The
metadata field is required and must be a valid Variant metadata, as defined below. - The
value field must be annotated as required for unshredded Variant values, or optional if parts of the value are shredded as typed Parquet columns. - When present, the
value field must be a valid Variant value, as defined below.
This is the expected unshredded representation in Parquet:
optional group variant_name (VARIANT) {
required binary metadata;
required binary value;
}
This is an example representation of a shredded Variant in Parquet:
optional group shredded_variant_name (VARIANT) {
required binary metadata;
optional binary value;
optional int64 typed_value;
}
The VARIANT annotation places no additional restrictions on the repetition of Variant groups, but repetition may be restricted by containing types (such as MAP and LIST).
The Variant group name is the name of the Variant column.
The encoded metadata always starts with a header byte.
7 6 5 4 3 0
+-------+---+---+---------------+
header | | | | version |
+-------+---+---+---------------+
^ ^
| +-- sorted_strings
+-- offset_size_minus_one
The version is a 4-bit value that must always contain the value 1.
sorted_strings is a 1-bit value indicating whether dictionary strings are sorted and unique.
offset_size_minus_one is a 2-bit value providing the number of bytes per dictionary size and offset field.
The actual number of bytes, offset_size, is offset_size_minus_one + 1.
The entire metadata is encoded as the following diagram shows:
7 0
+-----------------------+
metadata | header |
+-----------------------+
| |
: dictionary_size : <-- unsigned little-endian, `offset_size` bytes
| |
+-----------------------+
| |
: offset : <-- unsigned little-endian, `offset_size` bytes
| |
+-----------------------+
:
+-----------------------+
| |
: offset : <-- unsigned little-endian, `offset_size` bytes
| | (`dictionary_size + 1` offsets)
+-----------------------+
| |
: bytes :
| |
+-----------------------+
The metadata is encoded first with the header byte, then dictionary_size which is an unsigned little-endian value of offset_size bytes, and represents the number of string values in the dictionary.
Next is an offset list, which contains dictionary_size + 1 values.
Each offset is an unsigned little-endian value of offset_size bytes and represents the starting byte offset of the i-th string in bytes.
The first offset value will always be 0, and the last offset value will always be the total length of bytes.
The last part of the metadata is bytes, which stores all the string values in the dictionary.
All string values must be UTF-8 encoded strings.
The grammar for encoded metadata is as follows:
metadata: <header> <dictionary_size> <dictionary>
header: 1 byte (<version> | <sorted_strings> << 4 | (<offset_size_minus_one> << 6))
version: a 4-bit version ID. Currently, must always contain the value 1
sorted_strings: a 1-bit value indicating whether metadata strings are sorted
offset_size_minus_one: 2-bit value providing the number of bytes per dictionary size and offset field.
dictionary_size: `offset_size` bytes. unsigned little-endian value indicating the number of strings in the dictionary
dictionary: <offset>* <bytes>
offset: `offset_size` bytes. unsigned little-endian value indicating the starting position of the ith string in `bytes`. The list should contain `dictionary_size + 1` values, where the last value is the total length of `bytes`.
bytes: UTF-8 encoded dictionary string values
Notes:
- Offsets are relative to the start of the bytes array.
- The length of the ith string can be computed as offset[i+1] - offset[i].
- The offset of the first string is always equal to 0 and is therefore redundant. It is included in the spec to simplify in-memory processing.
- offset_size_minus_one indicates the number of bytes per dictionary_size and offset entry. That is, a value of 0 indicates 1-byte offsets, 1 indicates 2-byte offsets, 2 indicates 3-byte offsets, and 3 indicates 4-byte offsets.
- If sorted_strings is set to 1, strings in the dictionary must be unique and sorted in lexicographic order. If the value is set to 0, readers may not make any assumptions about string order or uniqueness.
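Putting these pieces together, a minimal Python sketch of a metadata reader (the function name is hypothetical, and bounds checking is omitted):

```python
def parse_metadata(buf: bytes) -> list[str]:
    """Decode a Variant metadata buffer into its dictionary strings (a sketch)."""
    header = buf[0]
    assert header & 0x0F == 1, "unsupported metadata version"
    offset_size = ((header >> 6) & 0x03) + 1

    def read_uint(pos: int) -> int:
        return int.from_bytes(buf[pos:pos + offset_size], "little")

    dictionary_size = read_uint(1)
    # `dictionary_size + 1` offsets follow the size field.
    offsets = [read_uint(1 + offset_size * (i + 1)) for i in range(dictionary_size + 1)]
    # The string bytes start immediately after the last offset.
    start = 1 + offset_size * (dictionary_size + 2)
    return [buf[start + offsets[i]:start + offsets[i + 1]].decode("utf-8")
            for i in range(dictionary_size)]

# Metadata for the dictionary ["a", "b", "c"]: header 0x11 (version 1,
# sorted_strings 1, 1-byte offsets), size 3, offsets [0, 1, 2, 3], then "abc".
assert parse_metadata(b"\x11\x03\x00\x01\x02\x03abc") == ["a", "b", "c"]
```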
Value encoding
The entire encoded Variant value consists of the value_metadata byte, followed by 0 or more bytes of value_data.
          7                                  2 1          0
         +------------------------------------+------------+
value    |            value_header            | basic_type |  <-- value_metadata
         +------------------------------------+------------+
         |                                                 |
         :                   value_data                    :  <-- 0 or more bytes
         |                                                 |
         +-------------------------------------------------+
Basic Type
The basic_type is a 2-bit value that indicates which basic type the Variant value is.
The basic types table shows what each value represents.
The value_header is a 6-bit value that contains more information about the type, and the format depends on the basic_type.
When basic_type is 0, value_header is a 6-bit primitive_header.
The primitive types table shows what each value represents.
                5                     0
               +-----------------------+
value_header   |   primitive_header    |
               +-----------------------+
When basic_type is 1, value_header is a 6-bit short_string_header.
                5                     0
               +-----------------------+
value_header   |  short_string_header  |
               +-----------------------+
The short_string_header value is the length of the string.
When basic_type is 2, value_header is made up of field_offset_size_minus_one, field_id_size_minus_one, and is_large.
                5   4   3     2   1     0
               +---+---+-------+-------+
value_header   |   |   |       |       |
               +---+---+-------+-------+
                     ^     ^       ^
                     |     |       +-- field_offset_size_minus_one
                     |     +-- field_id_size_minus_one
                     +-- is_large
field_offset_size_minus_one and field_id_size_minus_one are 2-bit values that represent the number of bytes used to encode the field offsets and field ids.
The actual number of bytes is computed as field_offset_size_minus_one + 1 and field_id_size_minus_one + 1.
is_large is a 1-bit value that indicates how many bytes are used to encode the number of elements.
If is_large is 0, 1 byte is used, and if is_large is 1, 4 bytes are used.
When basic_type is 3, value_header is made up of field_offset_size_minus_one, and is_large.
                5         3   2   1     0
               +-----------+---+-------+
value_header   |           |   |       |
               +-----------+---+-------+
                             ^     ^
                             |     +-- field_offset_size_minus_one
                             +-- is_large
field_offset_size_minus_one is a 2-bit value that represents the number of bytes used to encode the field offset.
The actual number of bytes is computed as field_offset_size_minus_one + 1.
is_large is a 1-bit value that indicates how many bytes are used to encode the number of elements.
If is_large is 0, 1 byte is used, and if is_large is 1, 4 bytes are used.
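The header layouts above likewise reduce to a few bit operations. A minimal Python sketch (the function name and the returned dictionary keys are hypothetical):

```python
def parse_value_metadata(b: int) -> dict:
    """Unpack a value_metadata byte into its header fields (a sketch)."""
    basic_type = b & 0x03
    header = b >> 2                      # the 6-bit value_header
    if basic_type == 0:                  # primitive
        return {"basic_type": 0, "primitive_header": header}
    if basic_type == 1:                  # short string
        return {"basic_type": 1, "string_length": header}
    if basic_type == 2:                  # object
        return {"basic_type": 2,
                "field_offset_size": (header & 0x03) + 1,
                "field_id_size": ((header >> 2) & 0x03) + 1,
                "num_elements_size": 4 if header & 0x10 else 1}
    return {"basic_type": 3,             # array
            "field_offset_size": (header & 0x03) + 1,
            "num_elements_size": 4 if header & 0x04 else 1}

# 0x0C encodes a primitive int8 (primitive_header 3); 0x02 a small object.
assert parse_value_metadata(0x0C) == {"basic_type": 0, "primitive_header": 3}
assert parse_value_metadata(0x02)["num_elements_size"] == 1
```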
Value Data
The value_data encoding format depends on the type specified by value_metadata.
For some types, the value_data will be 0-bytes.
Value Data for Primitive type (basic_type=0)
When basic_type is 0, value_data depends on the primitive_header value.
The primitive types table shows the encoding format for each primitive type.
Value Data for Short string (basic_type=1)
When basic_type is 1, value_data is the sequence of UTF-8 encoded bytes that represents the string.
Value Data for Object (basic_type=2)
When basic_type is 2, value_data encodes an object.
The encoding format is shown in the following diagram:
                     7                     0
                    +-----------------------+
object value_data   |                       |
                    :      num_elements     :  <-- unsigned little-endian, 1 or 4 bytes
                    |                       |
                    +-----------------------+
                    |                       |
                    :        field_id       :  <-- unsigned little-endian, `field_id_size` bytes
                    |                       |
                    +-----------------------+
                                :
                    +-----------------------+
                    |                       |
                    :        field_id       :  <-- unsigned little-endian, `field_id_size` bytes
                    |                       |      (`num_elements` field_ids)
                    +-----------------------+
                    |                       |
                    :      field_offset     :  <-- unsigned little-endian, `field_offset_size` bytes
                    |                       |
                    +-----------------------+
                                :
                    +-----------------------+
                    |                       |
                    :      field_offset     :  <-- unsigned little-endian, `field_offset_size` bytes
                    |                       |      (`num_elements + 1` field_offsets)
                    +-----------------------+
                    |                       |
                    :         value         :
                    |                       |
                    +-----------------------+
                                :
                    +-----------------------+
                    |                       |
                    :         value         :  <-- (`num_elements` values)
                    |                       |
                    +-----------------------+
An object value_data begins with num_elements, a 1-byte or 4-byte unsigned little-endian value, representing the number of elements in the object.
The size in bytes of num_elements is indicated by is_large in the value_header.
Next is a list of field_id values.
There are num_elements entries, and each field_id is an unsigned little-endian value of field_id_size bytes.
A field_id is an index into the dictionary in the metadata.
The field_id list is followed by a field_offset list.
There are num_elements + 1 entries, and each field_offset is an unsigned little-endian value of field_offset_size bytes.
A field_offset represents the byte offset (relative to the first byte of the first value) where the i-th value starts.
The last field_offset points to the byte after the end of the last value.
The field_offset list is followed by the value list.
There are num_elements value entries, and each value is an encoded Variant value.
For the i-th key-value pair of the object, the key is the metadata dictionary entry indexed by the i-th field_id, and the value is the Variant value starting from the i-th field_offset byte offset.
The field ids and field offsets must be in lexicographical order of the corresponding field names in the metadata dictionary.
However, the actual value entries do not need to be in any particular order.
This implies that the field_offset values may not be monotonically increasing.
For example, for the following object:
{
"c": 3,
"b": 2,
"a": 1
}
The field_id list must be [<id for key "a">, <id for key "b">, <id for key "c">], in lexicographical order.
The field_offset list must be [<offset for value 1>, <offset for value 2>, <offset for value 3>, <last offset>].
The value list can be in any order.
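For concreteness, the bytes for this object under the smallest size settings (1-byte offsets and IDs, is_large = 0), with values stored in sorted order for simplicity, could look like the following sketch:

```python
# Hand-assembled encoding of the example object above. Values are stored in
# sorted order here, although any value order is allowed by the spec.
metadata = bytes([
    0x11,                    # header: version 1, sorted_strings 1, 1-byte offsets
    0x03,                    # dictionary_size = 3
    0x00, 0x01, 0x02, 0x03,  # offsets into the dictionary bytes
]) + b"abc"                  # dictionary strings "a", "b", "c"

value = bytes([
    0x02,                    # value_metadata: basic_type 2 (object), all sizes minimal
    0x03,                    # num_elements = 3
    0x00, 0x01, 0x02,        # field_ids for "a", "b", "c" (sorted by field name)
    0x00, 0x02, 0x04, 0x06,  # field_offsets: each int8 value takes 2 bytes
    0x0C, 0x01,              # "a": primitive int8 (header 0x0C), value 1
    0x0C, 0x02,              # "b": primitive int8, value 2
    0x0C, 0x03,              # "c": primitive int8, value 3
])
```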
Value Data for Array (basic_type=3)
When basic_type is 3, value_data encodes an array. The encoding format is shown in the following diagram:
                    7                     0
                   +-----------------------+
array value_data   |                       |
                   :      num_elements     :  <-- unsigned little-endian, 1 or 4 bytes
                   |                       |
                   +-----------------------+
                   |                       |
                   :      field_offset     :  <-- unsigned little-endian, `field_offset_size` bytes
                   |                       |
                   +-----------------------+
                               :
                   +-----------------------+
                   |                       |
                   :      field_offset     :  <-- unsigned little-endian, `field_offset_size` bytes
                   |                       |      (`num_elements + 1` field_offsets)
                   +-----------------------+
                   |                       |
                   :         value         :
                   |                       |
                   +-----------------------+
                               :
                   +-----------------------+
                   |                       |
                   :         value         :  <-- (`num_elements` values)
                   |                       |
                   +-----------------------+
An array value_data begins with num_elements, a 1-byte or 4-byte unsigned little-endian value, representing the number of elements in the array.
The size in bytes of num_elements is indicated by is_large in the value_header.
Next is a field_offset list.
There are num_elements + 1 entries, and each field_offset is an unsigned little-endian value of field_offset_size bytes.
A field_offset represents the byte offset (relative to the first byte of the first value) where the i-th value starts.
The last field_offset points to the byte after the last byte of the last value.
The field_offset list is followed by the value list.
There are num_elements value entries, and each value is an encoded Variant value.
For the i-th array entry, the value is the Variant value starting from the i-th field_offset byte offset.
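For example, a two-element array of the int8 values 1 and 2, using the smallest size settings, could be encoded as in this sketch:

```python
# Hand-assembled encoding of the array [1, 2] with 1-byte offsets and
# is_large = 0 (array_header = 0, so value_metadata = 0x03).
array_value = bytes([
    0x03,              # value_metadata: basic_type 3 (array), minimal header
    0x02,              # num_elements = 2
    0x00, 0x02, 0x04,  # field_offsets: num_elements + 1 entries
    0x0C, 0x01,        # element 0: primitive int8, value 1
    0x0C, 0x02,        # element 1: primitive int8, value 2
])
```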
Value encoding grammar
The grammar for an encoded value is:
value: <value_metadata> <value_data>?
value_metadata: 1 byte (<basic_type> | (<value_header> << 2))
basic_type: ID from Basic Type table. <value_header> must be a corresponding variation
value_header: <primitive_header> | <short_string_header> | <object_header> | <array_header>
primitive_header: ID from Primitive Type table. <val> must be a corresponding variation of <primitive_val>
short_string_header: unsigned string length in bytes from 0 to 63
object_header: (is_large << 4 | field_id_size_minus_one << 2 | field_offset_size_minus_one)
array_header: (is_large << 2 | field_offset_size_minus_one)
value_data: <primitive_val> | <short_string_val> | <object_val> | <array_val>
primitive_val: see table for binary representation
short_string_val: UTF-8 encoded bytes
object_val: <num_elements> <field_id>* <field_offset>* <fields>
array_val: <num_elements> <field_offset>* <fields>
num_elements: a 1 or 4 byte unsigned little-endian value (depending on is_large in <object_header>/<array_header>)
field_id: a 1, 2, 3 or 4 byte unsigned little-endian value (depending on field_id_size_minus_one in <object_header>), indexing into the dictionary
field_offset: a 1, 2, 3 or 4 byte unsigned little-endian value (depending on field_offset_size_minus_one in <object_header>/<array_header>), providing the offset in bytes within fields
fields: <value>*
Each value_data must correspond to the type defined by value_metadata. Boolean and null types do not have a corresponding value_data, since their type defines their value.
Each array_val and object_val must contain exactly num_elements + 1 values for field_offset.
The last entry is the offset that is one byte past the last field (i.e. the total size of all fields in bytes).
All offsets are relative to the first byte of the first field in the object/array.
field_id_size_minus_one and field_offset_size_minus_one indicate the number of bytes per field ID/offset.
For example, a value of 0 indicates 1-byte IDs, 1 indicates 2-byte IDs, 2 indicates 3-byte IDs, and 3 indicates 4-byte IDs.
The is_large flag for arrays and objects indicates whether the number of elements is encoded as a one-byte or four-byte value.
When more than 255 elements are present, is_large must be set to true.
It is valid for an implementation to use a larger value than necessary for any of these fields (e.g. is_large may be true for an object with less than 256 elements).
The “short string” basic type may be used as an optimization to fold string length into the type byte for strings less than 64 bytes.
It is semantically identical to the “string” primitive type.
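A writer might choose between the two string forms as in this minimal sketch (the function name is hypothetical):

```python
def encode_string(s: str) -> bytes:
    """Encode a string value, using the short string form when it fits (a sketch)."""
    data = s.encode("utf-8")
    if len(data) < 64:
        # basic_type 1 (short string), short_string_header = length in bytes
        return bytes([1 | (len(data) << 2)]) + data
    # basic_type 0, primitive_header 16 (string): 4-byte little-endian size + bytes
    return bytes([16 << 2]) + len(data).to_bytes(4, "little") + data

assert encode_string("hello") == b"\x15hello"  # 1 | (5 << 2) = 0x15
assert encode_string("x" * 100)[0] == 0x40     # 16 << 2 = 0x40
```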
The Decimal type contains a scale, but no precision. The implied precision of a decimal value is floor(log_10(val)) + 1.
Encoding types
Variant basic types
| Basic Type | ID | Description |
|---|---|---|
| Primitive | 0 | One of the primitive types |
| Short string | 1 | A string with a length less than 64 bytes |
| Object | 2 | A collection of (string-key, variant-value) pairs |
| Array | 3 | An ordered sequence of variant values |
Variant primitive types
| Equivalence Class | Variant Physical Type | Type ID | Equivalent Parquet Type | Binary format |
|---|---|---|---|---|
| NullType | null | 0 | UNKNOWN | none |
| Boolean | boolean (True) | 1 | BOOLEAN | none |
| Boolean | boolean (False) | 2 | BOOLEAN | none |
| Exact Numeric | int8 | 3 | INT(8, signed) | 1 byte |
| Exact Numeric | int16 | 4 | INT(16, signed) | 2 byte little-endian |
| Exact Numeric | int32 | 5 | INT(32, signed) | 4 byte little-endian |
| Exact Numeric | int64 | 6 | INT(64, signed) | 8 byte little-endian |
| Double | double | 7 | DOUBLE | IEEE little-endian |
| Exact Numeric | decimal4 | 8 | DECIMAL(precision, scale) | 1 byte scale in range [0, 38], followed by little-endian unscaled value (see decimal table) |
| Exact Numeric | decimal8 | 9 | DECIMAL(precision, scale) | 1 byte scale in range [0, 38], followed by little-endian unscaled value (see decimal table) |
| Exact Numeric | decimal16 | 10 | DECIMAL(precision, scale) | 1 byte scale in range [0, 38], followed by little-endian unscaled value (see decimal table) |
| Date | date | 11 | DATE | 4 byte little-endian |
| Timestamp | timestamp | 12 | TIMESTAMP(isAdjustedToUTC=true, MICROS) | 8-byte little-endian |
| TimestampNTZ | timestamp without time zone | 13 | TIMESTAMP(isAdjustedToUTC=false, MICROS) | 8-byte little-endian |
| Float | float | 14 | FLOAT | IEEE little-endian |
| Binary | binary | 15 | BINARY | 4 byte little-endian size, followed by bytes |
| String | string | 16 | STRING | 4 byte little-endian size, followed by UTF-8 encoded bytes |
| TimeNTZ | time without time zone | 17 | TIME(isAdjustedToUTC=false, MICROS) | 8-byte little-endian |
| Timestamp | timestamp with time zone | 18 | TIMESTAMP(isAdjustedToUTC=true, NANOS) | 8-byte little-endian |
| TimestampNTZ | timestamp without time zone | 19 | TIMESTAMP(isAdjustedToUTC=false, NANOS) | 8-byte little-endian |
| UUID | uuid | 20 | UUID | 16-byte big-endian |
The Equivalence Class column indicates logical equivalence of physically encoded types.
For example, a user expression operating on a string value containing “hello” should behave the same, whether it is encoded with the short string optimization, or long string encoding.
Similarly, user expressions operating on an int8 value of 1 should behave the same as a decimal16 with scale 2 and unscaled value 100.
Decimal table
| Decimal Precision | Decimal value type | Variant Physical Type |
|---|---|---|
| 1 <= precision <= 9 | int32 | decimal4 |
| 10 <= precision <= 18 | int64 | decimal8 |
| 19 <= precision <= 38 | int128 | decimal16 |
| > 38 | Not supported | |
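For example, the decimal value 12.34 has scale 2 and unscaled value 1234; the unscaled value fits in an int32, so it uses the decimal4 physical type. A minimal sketch of the resulting bytes:

```python
# decimal4 encoding of 12.34: value_metadata 0x20 is primitive_header 8
# (decimal4) << 2 | basic_type 0, then a 1-byte scale and the int32 unscaled value.
decimal_value = bytes([0x20, 0x02]) + (1234).to_bytes(4, "little", signed=True)
assert decimal_value == b"\x20\x02\xd2\x04\x00\x00"
```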
String values must be UTF-8 encoded
All strings within the Variant binary format must be UTF-8 encoded.
This includes the dictionary key string values, the “short string” values, and the “long string” values.
Object field ID order and uniqueness
For objects, field IDs and offsets must be listed in the order of the corresponding field names, sorted lexicographically (using unsigned byte ordering for UTF-8).
Note that the field values themselves are not required to follow this order.
As a result, offsets will not necessarily be listed in ascending order.
The field values are not required to be in the same order as the field IDs, to enable flexibility when constructing Variant values.
An implementation may rely on this field ID order in searching for field names.
E.g. a binary search on field IDs (combined with metadata lookups) may be used to find a field with a given name.
Field names are case-sensitive.
Field names are required to be unique for each object.
It is an error for an object to contain two fields with the same name, whether or not they have distinct dictionary IDs.
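A minimal Python sketch of such a lookup (the function name and the decoded in-memory representation are hypothetical). Note that Python string comparison matches the required unsigned UTF-8 byte ordering, because UTF-8 preserves code point order:

```python
def get_field(field_ids: list[int], values: list, dictionary: list[str], name: str):
    """Binary-search an object's fields by name (a sketch).

    Assumes the object has been decoded into parallel field_ids/values lists
    in field-name order, with `dictionary` holding the metadata strings.
    Correctness relies on the required lexicographic ordering of field IDs.
    """
    lo, hi = 0, len(field_ids)
    while lo < hi:
        mid = (lo + hi) // 2
        key = dictionary[field_ids[mid]]
        if key < name:
            lo = mid + 1
        elif key > name:
            hi = mid
        else:
            return values[mid]
    return None

assert get_field([0, 1, 2], [1, 2, 3], ["a", "b", "c"], "b") == 2
```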
Versions and extensions
An implementation is not expected to parse a Variant value whose metadata version is higher than the version supported by the implementation.
However, new types may be added to the specification without incrementing the version ID.
In such a situation, an implementation should be able to read the rest of the Variant value if desired.
Shredding
A single Variant object may have poor read performance when only a small subset of fields are needed.
A better approach is to create separate columns for individual fields, referred to as shredding or subcolumnarization.
VariantShredding.md describes the Variant shredding specification in Parquet.