Overview

All about Parquet.

Apache Parquet is an open source, column-oriented data file format designed for efficient data storage and retrieval. It provides high performance compression and encoding schemes to handle complex data in bulk and is supported in many programming languages and analytics tools.

parquet-format (Specification)

The parquet-format repository hosts the official specification of the Parquet file format, defining how data is structured and stored. This specification, along with the parquet.thrift Thrift metadata definitions, is necessary for developing software to effectively read and write Parquet files.

Note that the parquet-format repository does not contain source code for libraries to read or write Parquet files, but rather the formal definitions and documentation of the file format itself.
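
To give a concrete sense of what those Thrift metadata definitions describe, the sketch below uses parquet-java (covered in the next section) to open a file and print parts of its footer metadata, which correspond to structures such as FileMetaData and RowGroup defined in parquet.thrift. This is a minimal illustration under stated assumptions, not part of the specification itself: the users.parquet path is hypothetical, and the parquet-hadoop module plus the Hadoop client libraries are assumed to be on the classpath.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.parquet.hadoop.ParquetFileReader;
    import org.apache.parquet.hadoop.metadata.ParquetMetadata;
    import org.apache.parquet.hadoop.util.HadoopInputFile;

    public class InspectFooter {
        public static void main(String[] args) throws Exception {
            // "users.parquet" is an illustrative path; any Parquet file will do.
            HadoopInputFile in = HadoopInputFile.fromPath(
                new Path("users.parquet"), new Configuration());
            try (ParquetFileReader reader = ParquetFileReader.open(in)) {
                ParquetMetadata footer = reader.getFooter();
                // The file schema and row-group metadata printed here are read from
                // the Thrift-encoded footer described by parquet.thrift.
                System.out.println(footer.getFileMetaData().getSchema());
                footer.getBlocks().forEach(rowGroup ->
                    System.out.println("row group with " + rowGroup.getRowCount() + " rows"));
            }
        }
    }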

parquet-java

The parquet-java (formerly named parquet-mr) repository is part of the Apache Parquet project and contains:

  • Java libraries to read and write Parquet files in Java applications (see the sketch after this list).
  • Utilities and APIs for working with Parquet files, including tools for data import/export, schema management, and data conversion.
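
As a brief illustration of those read/write APIs, here is a minimal sketch of a write-then-read round trip through the parquet-avro module. The Avro schema, the users.parquet path, and the choice of Snappy compression are only examples, and the sketch assumes parquet-avro, Avro, and the Hadoop client libraries are on the classpath.

    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.parquet.avro.AvroParquetReader;
    import org.apache.parquet.avro.AvroParquetWriter;
    import org.apache.parquet.hadoop.ParquetReader;
    import org.apache.parquet.hadoop.ParquetWriter;
    import org.apache.parquet.hadoop.metadata.CompressionCodecName;
    import org.apache.parquet.hadoop.util.HadoopInputFile;
    import org.apache.parquet.hadoop.util.HadoopOutputFile;

    public class ParquetRoundTrip {
        public static void main(String[] args) throws Exception {
            Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
                + "{\"name\":\"id\",\"type\":\"long\"},"
                + "{\"name\":\"name\",\"type\":\"string\"}]}");
            Configuration conf = new Configuration();
            Path file = new Path("users.parquet");   // illustrative local path

            // Write one record, compressing column chunks with Snappy.
            try (ParquetWriter<GenericRecord> writer = AvroParquetWriter
                    .<GenericRecord>builder(HadoopOutputFile.fromPath(file, conf))
                    .withSchema(schema)
                    .withCompressionCodec(CompressionCodecName.SNAPPY)
                    .build()) {
                GenericRecord user = new GenericData.Record(schema);
                user.put("id", 1L);
                user.put("name", "alice");
                writer.write(user);
            }

            // Read the records back.
            try (ParquetReader<GenericRecord> reader = AvroParquetReader
                    .<GenericRecord>builder(HadoopInputFile.fromPath(file, conf))
                    .build()) {
                GenericRecord record;
                while ((record = reader.read()) != null) {
                    System.out.println(record);
                }
            }
        }
    }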

Note that there are a number of other implementations of the Parquet format, some of which are listed below.

Other Clients / Libraries / Tools

The Parquet ecosystem is rich and varied, encompassing a wide array of tools, libraries, and clients, each offering different levels of feature support. Not all implementations support the same features of the Parquet format, so when integrating multiple Parquet implementations in a workflow, test thoroughly to ensure compatibility and performance across platforms and tools.

You can find more information about the feature support of various Parquet implementations on the implementation status page.

Here is a non-exhaustive list of open source Parquet implementations:

Motivation

We created Parquet to make the advantages of compressed, efficient columnar data representation available to any project in the Hadoop ecosystem.

Parquet is built from the ground up with complex nested data structures in mind, and uses the record shredding and assembly algorithm described in the Dremel paper. We believe this approach is superior to simple flattening of nested namespaces.

Parquet is built to support very efficient compression and encoding schemes. Multiple projects have demonstrated the performance impact of applying the right compression and encoding scheme to the data. Parquet allows compression schemes to be specified on a per-column level, and is future-proofed to allow adding more encodings as they are invented and implemented.

Parquet is built to be used by anyone. The Hadoop ecosystem is rich with data processing frameworks, and we are not interested in playing favorites. We believe that an efficient, well-implemented columnar storage substrate should be useful to all frameworks without the cost of extensive and difficult-to-set-up dependencies.