[Documentation Issue] #1203

Closed
@raybuhr

Description

Page Name: bigquery-usage
Release: 0.8.0

Looking at the section for inserting data asynchronously, the load_from_storage method doesn't exist in the Table class. There is a load_table_from_storage method in the Client class, but it doesn't take the same arguments. Is this feature not yet released, or is it just improperly documented?

Inserting data (asynchronous)
Start a job loading data asynchronously from a set of CSV files, located on Google Cloud Storage, appending rows into an existing table. First, create the job locally:

>>> from gcloud import bigquery
>>> client = bigquery.Client()
>>> table = dataset.table(name='person_ages')
>>> job = table.load_from_storage(bucket_name='bucket-name',
...                               object_name_glob='object-prefix*',
...                               source_format='CSV',
...                               skip_leading_rows=1,
...                               write_disposition='truncate')
>>> job.job_id
'e3344fba-09df-4ae0-8337-fddee34b3840'
>>> job.type
'load'
>>> job.created
None
>>> job.state
None
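
For comparison, the Client-class method mentioned above would presumably be called along these lines. This is only a sketch of what the 0.8.0 API might look like, not verified against the release; the job name argument, the attribute-style job configuration, and the begin() call are all assumptions:

>>> from gcloud import bigquery
>>> client = bigquery.Client()
>>> dataset = client.dataset('my_dataset')  # hypothetical dataset name
>>> table = dataset.table(name='person_ages')
>>> job = client.load_table_from_storage(
...     'load-person-ages',                 # job name (assumed required)
...     table,                              # destination table
...     'gs://bucket-name/object-prefix*')  # source URI(s) on Cloud Storage
>>> job.source_format = 'CSV'               # assumed attribute-style config
>>> job.skip_leading_rows = 1
>>> job.write_disposition = 'WRITE_TRUNCATE'
>>> job.begin()                             # assumed method to start the job

Note the argument shapes differ from the documented table.load_from_storage call (a full gs:// URI and a job name instead of bucket_name/object_name_glob keywords), which is the mismatch this issue is reporting.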

Metadata

Labels

api: bigquery — Issues related to the BigQuery API.
type: bug — Error or flaw in code with unintended results or allowing sub-optimal usage patterns.
