BigQuery Connector
The BigQuery Storage API and this connector are in Beta and are subject to change.
Changes may include, but are not limited to:

- Type conversion
- Partitioning
BigQuery Storage API
The Storage API streams data in parallel directly from BigQuery via gRPC, without using Google Cloud Storage as an intermediary. It has a number of advantages over the previous export-based read flow that should generally lead to better read performance:

- Direct streaming: rows are read directly from BigQuery servers, and no temporary files are left behind in Google Cloud Storage.
- Column filtering: since BigQuery is backed by a columnar datastore, the API can efficiently stream only the columns a query actually reads.
- Dynamic sharding: the API rebalances records between readers until they all complete, so all readers finish at roughly the same time.

Follow the instructions in the BigQuery documentation to enable the Storage API for your GCP project.
Authentication
On GCE/Dataproc the authentication is taken from the machine's service account.

Outside GCE/Dataproc you have three options:

- Use a service account JSON key and set the GOOGLE_APPLICATION_CREDENTIALS environment variable to point to it, as described in the Google Cloud authentication documentation.
- Set bigquery.credentials in the catalog properties file. It should contain the contents of the JSON key file, encoded using base64.
- Set bigquery.credentials-file in the catalog properties file. It should point to the location of the JSON key file.
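For example, to use a key file on disk, add a line like the following to the catalog properties file (the path is a placeholder; adjust it for your deployment):

```
# Location of the service account JSON key (placeholder path)
bigquery.credentials-file=/path/to/service-account-key.json
```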
Configuration
To configure the BigQuery connector, create a catalog properties file in etc/catalog named, for example, bigquery.properties, to mount the BigQuery connector as the bigquery catalog. Create the file with the following contents, replacing the connection properties as appropriate for your setup:
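```
connector.name=bigquery
# GCP project to connect to (placeholder value)
bigquery.project-id=my-gcp-project
```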
The BigQuery connector can only access a single GCP project. Thus, if you have data in multiple GCP projects, you need to create several catalogs, each pointing to a different GCP project. For example, if you have two GCP projects, one for sales and one for analytics, you can create two properties files in etc/catalog named sales.properties and analytics.properties, both having connector.name=bigquery but with different project-id values. This creates two catalogs, sales and analytics respectively.
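The two catalog files might look like the following sketch, assuming the project is selected with the bigquery.project-id property (the project IDs are placeholders):

```
# etc/catalog/sales.properties
connector.name=bigquery
bigquery.project-id=sales-project
```

```
# etc/catalog/analytics.properties
connector.name=bigquery
bigquery.project-id=analytics-project
```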
Configuring Partitioning
By default the connector creates one partition per 400MB in the table being read (before filtering). This should roughly correspond to the maximum number of readers supported by the BigQuery Storage API. The number of partitions can be configured explicitly with the bigquery.parallelism property. Note that BigQuery may limit the number of partitions based on server constraints.
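To set it explicitly, add the property to the catalog file; the value below is illustrative:

```
# Request roughly 20 parallel read streams; BigQuery may provide fewer
bigquery.parallelism=20
```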
Reading From Views
The connector has preliminary support for reading from BigQuery views. Please note there are a few caveats:

- BigQuery views are not materialized by default, which means the connector needs to materialize them before it can read them. This process affects read performance.
- The materialization process can also incur additional costs to your BigQuery bill.
- By default, the materialized views are created in the same project and dataset. These can be configured with the optional bigquery.view-materialization-project and bigquery.view-materialization-dataset properties, respectively. The service account must have write permission to the project and the dataset in order to materialize the view.

Reading from views is disabled by default. In order to enable it, set the bigquery.views-enabled configuration property to true.
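For example, to enable view reading and control where the materialized tables are created (the project and dataset names are placeholders):

```
bigquery.views-enabled=true
# Optional overrides for where materialized tables are created (placeholders)
bigquery.view-materialization-project=my-materialization-project
bigquery.view-materialization-dataset=materialized_views
```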
Configuration Properties
All configuration properties are optional.
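For reference, a catalog file combining the properties described in this section might look like the following sketch (all values are placeholders):

```
connector.name=bigquery
bigquery.project-id=my-gcp-project
bigquery.parallelism=20
bigquery.views-enabled=true
bigquery.credentials-file=/path/to/service-account-key.json
```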
Data Types
With a few exceptions, all BigQuery types are mapped directly to their Presto counterparts. Here are all the mappings:
BigQuery | Presto | Notes
---|---|---
BOOLEAN | BOOLEAN |
INT64 | BIGINT |
FLOAT64 | DOUBLE |
NUMERIC | DECIMAL |
STRING | VARCHAR |
GEOGRAPHY | VARCHAR | In Well-known text (WKT) format
BYTES | VARBINARY |
DATE | DATE |
DATETIME | TIMESTAMP |
ARRAY | ARRAY |
RECORD | ROW |
TIME | TIME WITH TIME ZONE | Time zone is UTC
TIMESTAMP | TIMESTAMP WITH TIME ZONE | Time zone is UTC
FAQ
What is the pricing for the Storage API?

See the BigQuery pricing documentation.