Minor: Document `parquet_metadata` function #8852
Changes from 1 commit
@@ -191,7 +191,7 @@ DataFusion CLI v16.0.0
2 rows in set. Query took 0.007 seconds.
```

## Creating External Tables

It is also possible to create a table backed by files explicitly via
`CREATE EXTERNAL TABLE`, as shown below. File mask wildcards are supported.
@@ -425,6 +425,13 @@ Available commands inside DataFusion CLI are:
> \h function
```

## Supported SQL

In addition to the normal [SQL supported in DataFusion], `datafusion-cli` also
supports additional statements and commands:

[sql supported in datafusion]: sql/index.rst

- Show configuration options

`SHOW ALL [VERBOSE]`

@@ -467,6 +474,66 @@ Available commands inside DataFusion CLI are:
> SET datafusion.execution.batch_size to 1024;
```
- `parquet_metadata` table function

The `parquet_metadata` table function can be used to inspect detailed metadata
about a parquet file such as statistics, sizes, and other information. This can
be helpful to understand how parquet files are structured.

For example, to see information about the `"WatchID"` column in the
`hits.parquet` file, you can use:
```sql
SELECT path_in_schema, row_group_id, row_group_num_rows, stats_min, stats_max, total_compressed_size
FROM parquet_metadata('hits.parquet')
WHERE path_in_schema = '"WatchID"'
LIMIT 3;

+----------------+--------------+--------------------+---------------------+---------------------+-----------------------+
| path_in_schema | row_group_id | row_group_num_rows | stats_min           | stats_max           | total_compressed_size |
+----------------+--------------+--------------------+---------------------+---------------------+-----------------------+
| "WatchID"      | 0            | 450560             | 4611687214012840539 | 9223369186199968220 | 3883759               |
| "WatchID"      | 1            | 612174             | 4611689135232456464 | 9223371478009085789 | 5176803               |
| "WatchID"      | 2            | 344064             | 4611692774829951781 | 9223363791697310021 | 3031680               |
+----------------+--------------+--------------------+---------------------+---------------------+-----------------------+
3 rows in set. Query took 0.053 seconds.
```

The returned table has one row for each column chunk in the file, with the
following columns. Please refer to the [Parquet Documentation] for more
information.

[parquet documentation]: https://parquet.apache.org/
| column_name             | data_type | Description                                                                                            |
| ----------------------- | --------- | ------------------------------------------------------------------------------------------------------ |
| filename                | Utf8      | Name of the file                                                                                       |
| row_group_id            | Int64     | Row group index the column chunk belongs to                                                            |
| row_group_num_rows      | Int64     | Count of rows stored in the row group                                                                  |
| row_group_num_columns   | Int64     | Total number of columns in the row group (same for all row groups)                                     |
| row_group_bytes         | Int64     | Number of bytes used to store the row group (not including metadata)                                   |
| column_id               | Int64     | ID of the column                                                                                       |
| file_offset             | Int64     | Offset within the file at which this column chunk's data begins                                        |
| num_values              | Int64     | Total number of values in this column chunk                                                            |
| path_in_schema          | Utf8      | "Path" (column name) of the column chunk in the schema                                                 |
| type                    | Utf8      | Parquet data type of the column chunk                                                                  |
| stats_min               | Utf8      | The minimum value for this column chunk, if stored in the statistics, cast to a string                 |
| stats_max               | Utf8      | The maximum value for this column chunk, if stored in the statistics, cast to a string                 |
| stats_null_count        | Int64     | Number of null values in this column chunk, if stored in the statistics                                |
| stats_distinct_count    | Int64     | Number of distinct values in this column chunk, if stored in the statistics                            |
| stats_min_value         | Utf8      | Same as `stats_min`                                                                                    |
| stats_max_value         | Utf8      | Same as `stats_max`                                                                                    |
| compression             | Utf8      | Block level compression (e.g. `SNAPPY`) used for this column chunk                                     |
| encodings               | Utf8      | All block level encodings (e.g. `[PLAIN_DICTIONARY, PLAIN, RLE]`) used for this column chunk           |
| index_page_offset       | Int64     | Offset in the file of the [`page index`], if any                                                       |
| dictionary_page_offset  | Int64     | Offset in the file of the dictionary page, if any                                                      |
| data_page_offset        | Int64     | Offset in the file of the first data page, if any                                                      |
| total_compressed_size   | Int64     | Number of bytes of the column chunk's data after encoding and compression (what is stored in the file) |
| total_uncompressed_size | Int64     | Number of bytes of the column chunk's data after encoding                                              |

> **Reviewer:** wondering if these duplicated fields are needed?

> **alamb:** I don't honestly know why the seemingly duplicated columns are
> present. It was done initially to mirror duckdb which has them. Maybe we
> should investigate the reason why 🤔
>
> ```
> D create table foo as select * from parquet_metadata('./benchmarks/data/hits.parquet');
> D describe table foo;
> ┌─────────────────────────┬─────────────┬─────────┬─────────┬─────────┬─────────┐
> │       column_name       │ column_type │  null   │   key   │ default │  extra  │
> │         varchar         │   varchar   │ varchar │ varchar │ varchar │ varchar │
> ├─────────────────────────┼─────────────┼─────────┼─────────┼─────────┼─────────┤
> │ file_name               │ VARCHAR     │ YES     │         │         │         │
> │ row_group_id            │ BIGINT      │ YES     │         │         │         │
> │ row_group_num_rows      │ BIGINT      │ YES     │         │         │         │
> │ row_group_num_columns   │ BIGINT      │ YES     │         │         │         │
> │ row_group_bytes         │ BIGINT      │ YES     │         │         │         │
> │ column_id               │ BIGINT      │ YES     │         │         │         │
> │ file_offset             │ BIGINT      │ YES     │         │         │         │
> │ num_values              │ BIGINT      │ YES     │         │         │         │
> │ path_in_schema          │ VARCHAR     │ YES     │         │         │         │
> │ type                    │ VARCHAR     │ YES     │         │         │         │
> │ stats_min               │ VARCHAR     │ YES     │         │         │         │
> │ stats_max               │ VARCHAR     │ YES     │         │         │         │
> │ stats_null_count        │ BIGINT      │ YES     │         │         │         │
> │ stats_distinct_count    │ BIGINT      │ YES     │         │         │         │
> │ stats_min_value         │ VARCHAR     │ YES     │         │         │         │
> │ stats_max_value         │ VARCHAR     │ YES     │         │         │         │
> │ compression             │ VARCHAR     │ YES     │         │         │         │
> │ encodings               │ VARCHAR     │ YES     │         │         │         │
> │ index_page_offset       │ BIGINT      │ YES     │         │         │         │
> │ dictionary_page_offset  │ BIGINT      │ YES     │         │         │         │
> │ data_page_offset        │ BIGINT      │ YES     │         │         │         │
> │ total_compressed_size   │ BIGINT      │ YES     │         │         │         │
> │ total_uncompressed_size │ BIGINT      │ YES     │         │         │         │
> ├─────────────────────────┴─────────────┴─────────┴─────────┴─────────┴─────────┤
> │ 23 rows                                                             6 columns │
> └───────────────────────────────────────────────────────────────────────────────┘
> ```
[`page index`]: https://github.com/apache/parquet-format/blob/master/PageIndex.md
## Changing Configuration Options

All available configuration options can be seen using `SHOW ALL` as described above.
> **alamb:** drive by cleanup -- the other headings are capitalized so it seemed
> strange that this one was not