BigQuery: Increment version to 0.28.0 #4258
Merged
# Changelog

## v0.28.0

**v0.28.0 significantly changes the interface for this package.** For examples
of the differences between v0.28.0 and previous versions, see [Migrating to
the BigQuery Python client library
v0.28](https://cloud.google.com/bigquery/docs/python-client-migration).
These changes can be summarized as follows; a brief usage sketch follows the
list:

- Query and view operations default to the standard SQL dialect. (#4192)
- Client functions related to
  [jobs](https://cloud.google.com/bigquery/docs/jobs-overview), like running
  queries, immediately start the job.
- Functions to create, get, update, and delete datasets and tables moved to
  the client class.
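A minimal sketch of the new surface, assuming default credentials and using placeholder names (`my_dataset`, `my_table`):

```python
from google.cloud import bigquery

client = bigquery.Client()  # assumes a default project and credentials

# Queries use standard SQL by default and start running immediately.
query_job = client.query('SELECT 1 AS x')
for row in query_job.result():          # wait for the job, then iterate rows
    print(row.x)

# Dataset and table operations now hang off the client.
dataset_ref = client.dataset('my_dataset')   # a DatasetReference
table_ref = dataset_ref.table('my_table')    # a TableReference
table = client.get_table(table_ref)          # fetch the table's metadata
```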
### Fixes

- Populate timeout parameter correctly for queries (#4209)
- Automatically retry idempotent RPCs (#4148, #4178)
- Parse timestamps in query parameters using canonical format (#3945)
- Parse array parameters that contain a struct type. (#4040)
- Support sub-second datetimes in row data (#3901, #3915, #3926), h/t @page1

### Interface changes / additions

- Support external table configuration (#4182) in query jobs (#4191) and
  tables (#4193).
- New `Row` class allows for access by integer index like a tuple, string
  index like a dictionary, or attribute access like an object (see the access
  sketch after this list). (#4149)
- Add option for job ID generation with user-supplied prefix (#4198)
- Add support for update of dataset access entries (#4197)
- Add support for atomic read-modify-write of a dataset using etag (#4052)
- Add support for labels to `Dataset` (#4026)
- Add support for labels to `Table` (#4207)
- Add `Table.streaming_buffer` property (#4161)
- Add `TableReference` class (#3942)
- Add `DatasetReference` class (#3938, #3942, #3993)
- Add `ExtractJob.destination_uri_file_counts` property. (#3803)
- Add `client.create_rows_json()` to bypass conversions on streaming writes.
  (#4189)
- Add `client.get_job()` to get arbitrary jobs. (#3804, #4213)
- Add filter to `client.list_datasets()` (#4205)
- Add `QueryJob.undeclared_query_parameters` property. (#3802)
- Add `QueryJob.referenced_tables` property. (#3801)
- Add new scalar statistics properties to `QueryJob` (#3800)
- Add `QueryJob.query_plan` property. (#3799)
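As a hedged illustration of the three `Row` access styles (the dataset, table, and the `name` field are placeholders):

```python
from google.cloud import bigquery

client = bigquery.Client()
table = client.get_table(client.dataset('my_dataset').table('my_table'))

for row in client.list_rows(table, max_results=5):
    print(row[0])        # by integer index, like a tuple
    print(row['name'])   # by field name, like a dict
    print(row.name)      # by attribute, like an object
```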
### Interface changes / breaking changes

- Remove `client.run_async_query()`, use `client.query()` instead. (#4130)
- Remove `client.run_sync_query()`, use `client.query_rows()` instead (see the
  migration sketch after this list). (#4065, #4248)
- Make `QueryResults` read-only. (#4094, #4144)
- Make `get_query_results` private. Return rows for `QueryJob.result()` (#3883)
- Move `*QueryParameter` and `UDFResource` classes to the `query` module (also
  exposed in the `bigquery` module). (#4156)
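A rough migration sketch for the removed query entry points; the commented-out calls are approximate pre-0.28.0 usage and the SQL is a placeholder:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Before v0.28.0 (removed), roughly:
#     job = client.run_async_query('some-job-id', 'SELECT 1')
#     job.begin()
#     query = client.run_sync_query('SELECT 1')
#     query.run()

# v0.28.0:
query_job = client.query('SELECT 1')        # the job starts immediately
rows = list(client.query_rows('SELECT 1'))  # run a query and wait for its rows
```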
#### Changes to tables

- Remove `client` from `Table` class (#4159)
- Remove `table.exists()` (#4145)
- Move `table.list_partitions` to `client.list_partitions` (#4146)
- Move `table.upload_from_file` to `client.load_table_from_file` (#4136)
- Move `table.update()` and `table.patch()` to `client.update_table()` (#4076)
- Move `table.insert_data()` to `client.create_rows()`. Automatically
  generates row IDs if not supplied. (#4151, #4173)
- Move `table.fetch_data()` to `client.list_rows()` (#4119, #4143)
- Move `table.delete()` to `client.delete_table()` (#4066)
- Move `table.create()` to `client.create_table()` (#4038, #4043)
- Move `table.reload()` to `client.get_table()` (#4004)
- Rename `Table.name` attribute to `Table.table_id` (#3959)
- `Table` constructor takes a `TableReference` as a parameter (#3997); a
  before/after sketch follows this list.
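A hedged before/after sketch of the table operations above (dataset, table, and field names are placeholders):

```python
from google.cloud import bigquery

client = bigquery.Client()
table_ref = client.dataset('my_dataset').table('people')
schema = [bigquery.SchemaField('full_name', 'STRING', mode='REQUIRED')]

table = bigquery.Table(table_ref, schema=schema)  # constructor takes a TableReference
table = client.create_table(table)                # was table.create()
table = client.get_table(table_ref)               # was table.reload()
errors = client.create_rows(table, [('Alice',)])  # was table.insert_data(); row IDs auto-generated
rows = list(client.list_rows(table))              # was table.fetch_data()
client.delete_table(table_ref)                    # was table.delete()
```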
#### Changes to datasets

- Remove `client` from `Dataset` class (#4018)
- Remove `dataset.exists()` (#3996)
- Move `dataset.list_tables()` to `client.list_dataset_tables()` (#4013)
- Move `dataset.delete()` to `client.delete_dataset()` (#4012)
- Move `dataset.patch()` and `dataset.update()` to `client.update_dataset()` (#4003)
- Move `dataset.create()` to `client.create_dataset()` (#3982)
- Move `dataset.reload()` to `client.get_dataset()` (#3973)
- Rename `Dataset.name` attribute to `Dataset.dataset_id` (#3955)
- `client.dataset()` returns a `DatasetReference` instead of a `Dataset`. (#3944)
- Rename class: `dataset.AccessGrant -> dataset.AccessEntry`. (#3798)
- `dataset.table()` returns a `TableReference` instead of a `Table` (#4014)
- `Dataset` constructor takes a `DatasetReference` (#4036); a sketch of the
  new dataset flow follows this list.
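And a similar hedged sketch for the dataset operations (`my_dataset` is a placeholder):

```python
from google.cloud import bigquery

client = bigquery.Client()
dataset_ref = client.dataset('my_dataset')           # now a DatasetReference
dataset = bigquery.Dataset(dataset_ref)              # constructor takes a DatasetReference
dataset = client.create_dataset(dataset)             # was dataset.create()
dataset = client.get_dataset(dataset_ref)            # was dataset.reload()
tables = list(client.list_dataset_tables(dataset))   # was dataset.list_tables()
client.delete_dataset(dataset_ref)                   # was dataset.delete()
```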
#### Changes to jobs

- Make `job.begin()` method private. (#4242)
- Add `LoadJobConfig` class and modify `LoadJob` (#4103, #4137); see the load
  sketch after this list.
- Add `CopyJobConfig` class and modify `CopyJob` (#4051, #4059)
- Type of Job's and Query's `default_dataset` changed from `Dataset` to
  `DatasetReference` (#4037)
- Rename `client.load_table_from_storage()` to `client.load_table_from_uri()`
  (#4235)
- Rename `client.extract_table_to_storage` to `client.extract_table()`.
  The method starts the extract job immediately. (#3991, #4177)
- Rename `XJob.name` to `XJob.job_id`. (#3962)
- Rename job classes: `LoadTableFromStorageJob -> LoadJob` and
  `ExtractTableToStorageJob -> ExtractJob` (#3797)
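A hedged sketch of a load job under the new config-object pattern (the GCS URI, dataset, and table names are placeholders):

```python
from google.cloud import bigquery

client = bigquery.Client()
table_ref = client.dataset('my_dataset').table('my_table')

job_config = bigquery.LoadJobConfig()
job_config.source_format = 'CSV'
job_config.autodetect = True            # let BigQuery infer the schema

load_job = client.load_table_from_uri(  # was client.load_table_from_storage()
    'gs://my-bucket/data.csv', table_ref, job_config=job_config)
load_job.result()                       # block until the job finishes
print(load_job.job_id)                  # was load_job.name
```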
### Dependencies

- Updating to `google-cloud-core ~= 0.28`; in particular, the `google-api-core`
  package has been moved out of `google-cloud-core`. (#4221)

PyPI: https://pypi.org/project/google-cloud-bigquery/0.28.0/


## v0.27.0
- Remove client-side enum validation. (#3735)
- Add `Table.row_from_mapping` helper. (#3425)
- Move `google.cloud.future` to `google.api.core` (#3764)
- Fix `__eq__` and `__ne__`. (#3765)
- Move `google.cloud.iterator` to `google.api.core.page_iterator` (#3770)
- `nullMarker` support for BigQuery Load Jobs (#3777), h/t @leondealmeida
- Allow `job_id` to be explicitly specified in DB-API. (#3779)
- Add support for a custom null marker. (#3776)
- Add `SchemaField` serialization and deserialization (see the sketch after
  this list). (#3786)
- Add `get_query_results` method to the client. (#3838)
- Poll for query completion via the `getQueryResults` method. (#3844)
- Allow fetching more than the first page when `max_results` is set. (#3845)
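A hedged sketch of the `SchemaField` round trip, assuming the serialization methods are the `to_api_repr()`/`from_api_repr()` pair (the field definition is a made-up example):

```python
from google.cloud import bigquery

field = bigquery.SchemaField('full_name', 'STRING', mode='REQUIRED')

api_repr = field.to_api_repr()
# roughly {'name': 'full_name', 'type': 'STRING', 'mode': 'REQUIRED', ...}

restored = bigquery.SchemaField.from_api_repr(api_repr)
assert restored == field    # SchemaField instances compare by value
```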
PyPI: https://pypi.org/project/google-cloud-bigquery/0.27.0/

## 0.26.0
### Notable implementation changes

- Using the `requests` transport attached to a Client for resumable media
  (i.e. downloads and uploads) (#3705) (this relates to the `httplib2` to
  `requests` switch)
### Interface changes / additions

- Adding `autodetect` property on `LoadTableFromStorageJob` to enable schema
  autodetection. (#3648)
- Implementing the Python futures interface for jobs. Call `job.result()` to
  wait for jobs to complete instead of polling the job status manually.
  (#3626)
- Adding `is_nullable` property on `SchemaField`. Can be used to check whether
  a column is nullable. (#3620)
- `job_name` argument added to `Table.upload_from_file` for setting the job
  ID. (#3605)
- Adding `google.cloud.bigquery.dbapi` package, which implements the PEP 249
  DB-API specification (see the sketch after this list). (#2921)
- Adding `Table.view_use_legacy_sql` property. Can be used to create views
  with legacy or standard SQL. (#3514)
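A small, hedged sketch of the PEP 249 surface exposed by the `dbapi` package (the query is a placeholder):

```python
from google.cloud import bigquery
from google.cloud.bigquery import dbapi

connection = dbapi.connect(bigquery.Client())
cursor = connection.cursor()
cursor.execute('SELECT 1 AS x')
print(cursor.fetchall())    # e.g. [(1,)]
cursor.close()
connection.close()
```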
### Interface changes / breaking changes

- Removing `results()` method from the `QueryJob` class. Use
  `query_results()` instead. (#3661)
- `SchemaField` is now immutable. It is also hashable so that it can be used
  in sets. (#3601)

### Dependencies

- Updating to `google-cloud-core ~= 0.26`; in particular, the underlying HTTP
  transport switched from `httplib2` to `requests` (#3654, #3674)
- Adding dependency on `google-resumable-media` for loading BigQuery tables
  from local files. (#3555)

### Packaging

- Fix inclusion of `tests` (vs. `unit_tests`) in `MANIFEST.in` (#3552)
- Updating `author_email` in `setup.py` to `googleapis-publisher@google.com`.
  (#3598)

PyPI: https://pypi.org/project/google-cloud-bigquery/0.26.0/