bigquery - ActiveState ActiveGo 1.8

Package bigquery

import "cloud.google.com/go/bigquery"

Overview ▾

Package bigquery provides a client for the BigQuery service.

Note: This package is in beta. Some backwards-incompatible changes may occur.

The following assumes a basic familiarity with BigQuery concepts. See https://cloud.google.com/bigquery/docs.

Creating a Client

To start working with this package, create a client:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, projectID)
if err != nil {
    // TODO: Handle error.
}

Querying

To query existing tables, create a Query and call its Read method:

q := client.Query(`
    SELECT year, SUM(number) as num
    FROM [bigquery-public-data:usa_names.usa_1910_2013]
    WHERE name = "William"
    GROUP BY year
    ORDER BY year
`)
it, err := q.Read(ctx)
if err != nil {
    // TODO: Handle error.
}

Then iterate through the resulting rows. You can store a row using anything that implements the ValueLoader interface, or with a slice or map of bigquery.Value. A slice is simplest:

for {
    var values []bigquery.Value
    err := it.Next(&values)
    if err == iterator.Done {
        break
    }
    if err != nil {
        // TODO: Handle error.
    }
    fmt.Println(values)
}

You can also use a struct whose exported fields match the query:

type Count struct {
    Year int
    Num  int
}
for {
    var c Count
    err := it.Next(&c)
    if err == iterator.Done {
        break
    }
    if err != nil {
        // TODO: Handle error.
    }
    fmt.Println(c)
}

You can also start the query running and get the results later. Create the query as above, but call Run instead of Read. This returns a Job, which represents an asynchronous operation.

job, err := q.Run(ctx)
if err != nil {
    // TODO: Handle error.
}

Get the job's ID, a printable string. You can save this string to retrieve the results at a later time, even in another process.

jobID := job.ID()
fmt.Printf("The job ID is %s\n", jobID)

To retrieve the job's results from the ID, first look up the Job:

job, err = client.JobFromID(ctx, jobID)
if err != nil {
    // TODO: Handle error.
}

Use the Job.Read method to obtain an iterator, and loop over the rows. Query.Read is just a convenience method that combines Query.Run and Job.Read.

it, err = job.Read(ctx)
if err != nil {
    // TODO: Handle error.
}
// Proceed with iteration as above.

Datasets and Tables

You can refer to datasets in the client's project with the Dataset method, and in other projects with the DatasetInProject method:

myDataset := client.Dataset("my_dataset")
yourDataset := client.DatasetInProject("your-project-id", "your_dataset")

These methods create references to datasets, not the datasets themselves. You can have a dataset reference even if the dataset doesn't exist yet. Use Dataset.Create to create a dataset from a reference:

if err := myDataset.Create(ctx); err != nil {
    // TODO: Handle error.
}

You can refer to tables with Dataset.Table. Like bigquery.Dataset, bigquery.Table is a reference to an object in BigQuery that may or may not exist.

table := myDataset.Table("my_table")

You can create, delete and update the metadata of tables with methods on Table. Table.Create supports a few options. For instance, you could create a temporary table with:

err = myDataset.Table("temp").Create(ctx, bigquery.TableExpiration(time.Now().Add(1*time.Hour)))
if err != nil {
    // TODO: Handle error.
}

We'll see how to create a table with a schema in the next section.

Schemas

There are two ways to construct schemas with this package. You can build a schema by hand, like so:

schema1 := bigquery.Schema{
    &bigquery.FieldSchema{Name: "Name", Required: true, Type: bigquery.StringFieldType},
    &bigquery.FieldSchema{Name: "Grades", Repeated: true, Type: bigquery.IntegerFieldType},
}

Or you can infer the schema from a struct:

type student struct {
    Name   string
    Grades []int
}
schema2, err := bigquery.InferSchema(student{})
if err != nil {
    // TODO: Handle error.
}
// schema1 and schema2 are identical.

Struct inference supports tags like those of the encoding/json package, so you can change names or ignore fields:

type student2 struct {
    Name   string `bigquery:"full_name"`
    Grades []int
    Secret string `bigquery:"-"`
}
schema3, err := bigquery.InferSchema(student2{})
if err != nil {
    // TODO: Handle error.
}
// schema3 has fields "full_name" and "Grades"; Secret is omitted.

Having constructed a schema, you can pass it to Table.Create as an option:

if err := table.Create(ctx, schema1); err != nil {
    // TODO: Handle error.
}

Copying

You can copy one or more tables to another table. Begin by constructing a Copier describing the copy. Then set any desired copy options, and finally call Run to get a Job:

copier := myDataset.Table("dest").CopierFrom(myDataset.Table("src"))
copier.WriteDisposition = bigquery.WriteTruncate
job, err = copier.Run(ctx)
if err != nil {
    // TODO: Handle error.
}

You can chain the call to Run if you don't want to set options:

job, err = myDataset.Table("dest").CopierFrom(myDataset.Table("src")).Run(ctx)
if err != nil {
    // TODO: Handle error.
}

You can wait for your job to complete:

status, err := job.Wait(ctx)
if err != nil {
    // TODO: Handle error.
}

Job.Wait polls with exponential backoff. You can also poll yourself, if you wish:

for {
    status, err := job.Status(ctx)
    if err != nil {
        // TODO: Handle error.
    }
    if status.Done() {
        if status.Err() != nil {
            log.Fatalf("Job failed with error %v", status.Err())
        }
        break
    }
    time.Sleep(pollInterval)
}

Loading and Uploading

There are two ways to populate a table with this package: load the data from a Google Cloud Storage object, or upload rows directly from your program.

For loading, first create a GCSReference, configuring it if desired. Then make a Loader, optionally configure it as well, and call its Run method.

gcsRef := bigquery.NewGCSReference("gs://my-bucket/my-object")
gcsRef.AllowJaggedRows = true
loader := myDataset.Table("dest").LoaderFrom(gcsRef)
loader.CreateDisposition = bigquery.CreateNever
job, err = loader.Run(ctx)
if err != nil {
    // TODO: Handle error.
}
// Poll the job for completion if desired, as above.

To upload, first define a type that implements the ValueSaver interface, which has a single method named Save. Then create an Uploader, and call its Put method with a slice of values.

u := table.Uploader()
// Item implements the ValueSaver interface.
items := []*Item{
    {Name: "n1", Size: 32.6, Count: 7},
    {Name: "n2", Size: 4, Count: 2},
    {Name: "n3", Size: 101.5, Count: 1},
}
if err := u.Put(ctx, items); err != nil {
    // TODO: Handle error.
}

You can also upload a struct that doesn't implement ValueSaver. Use the StructSaver type to specify the schema and insert ID by hand, or just supply the struct or struct pointer directly and the schema will be inferred:

type Item2 struct {
    Name  string
    Size  float64
    Count int
}
// Item2 does not implement ValueSaver; the schema is inferred from its fields.
items2 := []*Item2{
    {Name: "n1", Size: 32.6, Count: 7},
    {Name: "n2", Size: 4, Count: 2},
    {Name: "n3", Size: 101.5, Count: 1},
}
if err := u.Put(ctx, items2); err != nil {
    // TODO: Handle error.
}

Extracting

If you've been following so far, extracting data from a BigQuery table into a Google Cloud Storage object will feel familiar. First create an Extractor, then optionally configure it, and lastly call its Run method.

extractor := table.ExtractorTo(gcsRef)
extractor.DisableHeader = true
job, err = extractor.Run(ctx)
if err != nil {
    // TODO: Handle error.
}
// Poll the job for completion if desired, as above.

Authentication

See examples of authorization and authentication at https://godoc.org/cloud.google.com/go#pkg-examples.

Index ▾

Constants
type Client
    func NewClient(ctx context.Context, projectID string, opts ...option.ClientOption) (*Client, error)
    func (c *Client) Close() error
    func (c *Client) Dataset(id string) *Dataset
    func (c *Client) DatasetInProject(projectID, datasetID string) *Dataset
    func (c *Client) Datasets(ctx context.Context) *DatasetIterator
    func (c *Client) DatasetsInProject(ctx context.Context, projectID string) *DatasetIterator
    func (c *Client) JobFromID(ctx context.Context, id string) (*Job, error)
    func (c *Client) Query(q string) *Query
type Compression
type Copier
    func (c *Copier) Run(ctx context.Context) (*Job, error)
type CopyConfig
type CreateTableOption
    func TableExpiration(exp time.Time) CreateTableOption
    func UseStandardSQL() CreateTableOption
    func ViewQuery(query string) CreateTableOption
type DataFormat
type Dataset
    func (d *Dataset) Create(ctx context.Context) error
    func (d *Dataset) Delete(ctx context.Context) error
    func (d *Dataset) Metadata(ctx context.Context) (*DatasetMetadata, error)
    func (d *Dataset) Table(tableID string) *Table
    func (d *Dataset) Tables(ctx context.Context) *TableIterator
type DatasetIterator
    func (it *DatasetIterator) Next() (*Dataset, error)
    func (it *DatasetIterator) PageInfo() *iterator.PageInfo
type DatasetMetadata
type Encoding
type Error
    func (e Error) Error() string
type ExplainQueryStage
type ExplainQueryStep
type ExternalData
type ExtractConfig
type ExtractStatistics
type Extractor
    func (e *Extractor) Run(ctx context.Context) (*Job, error)
type FieldSchema
type FieldType
type FileConfig
type GCSReference
    func NewGCSReference(uri ...string) *GCSReference
type Job
    func (j *Job) Cancel(ctx context.Context) error
    func (j *Job) ID() string
    func (j *Job) Read(ctx context.Context) (*RowIterator, error)
    func (j *Job) Status(ctx context.Context) (*JobStatus, error)
    func (j *Job) Wait(ctx context.Context) (*JobStatus, error)
type JobStatistics
type JobStatus
    func (s *JobStatus) Done() bool
    func (s *JobStatus) Err() error
type LoadConfig
type LoadSource
type LoadStatistics
type Loader
    func (l *Loader) Run(ctx context.Context) (*Job, error)
type MultiError
    func (m MultiError) Error() string
type PutMultiError
    func (pme PutMultiError) Error() string
type Query
    func (q *Query) Read(ctx context.Context) (*RowIterator, error)
    func (q *Query) Run(ctx context.Context) (*Job, error)
type QueryConfig
type QueryParameter
type QueryPriority
type QueryStatistics
type ReaderSource
    func NewReaderSource(r io.Reader) *ReaderSource
type RowInsertionError
    func (e *RowInsertionError) Error() string
type RowIterator
    func (it *RowIterator) Next(dst interface{}) error
    func (it *RowIterator) PageInfo() *iterator.PageInfo
type Schema
    func InferSchema(st interface{}) (Schema, error)
type State
type Statistics
type StreamingBuffer
type StructSaver
    func (ss *StructSaver) Save() (row map[string]Value, insertID string, err error)
type Table
    func (t *Table) CopierFrom(srcs ...*Table) *Copier
    func (t *Table) Create(ctx context.Context, options ...CreateTableOption) error
    func (t *Table) Delete(ctx context.Context) error
    func (t *Table) ExtractorTo(dst *GCSReference) *Extractor
    func (t *Table) FullyQualifiedName() string
    func (t *Table) LoaderFrom(src LoadSource) *Loader
    func (t *Table) Metadata(ctx context.Context) (*TableMetadata, error)
    func (t *Table) Read(ctx context.Context) *RowIterator
    func (t *Table) Update(ctx context.Context, tm TableMetadataToUpdate) (*TableMetadata, error)
    func (t *Table) Uploader() *Uploader
type TableCreateDisposition
type TableIterator
    func (it *TableIterator) Next() (*Table, error)
    func (it *TableIterator) PageInfo() *iterator.PageInfo
type TableMetadata
type TableMetadataToUpdate
type TableType
type TableWriteDisposition
type TimePartitioning
type Uploader
    func (u *Uploader) Put(ctx context.Context, src interface{}) error
type Value
type ValueLoader
type ValueSaver
type ValuesSaver
    func (vls *ValuesSaver) Save() (map[string]Value, string, error)

Package files

bigquery.go copy.go dataset.go doc.go error.go extract.go file.go gcs.go iterator.go job.go load.go params.go query.go schema.go service.go table.go uploader.go value.go

Constants

const Scope = "https://www.googleapis.com/auth/bigquery"

type Client

Client may be used to perform BigQuery operations.

type Client struct {
    // contains filtered or unexported fields
}

func NewClient

func NewClient(ctx context.Context, projectID string, opts ...option.ClientOption) (*Client, error)

NewClient constructs a new Client which can perform BigQuery operations. Operations performed via the client are billed to the specified GCP project.

Example

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
_ = client // TODO: Use client.

func (*Client) Close

func (c *Client) Close() error

Close closes any resources held by the client. Close should be called when the client is no longer needed. It need not be called at program exit.

func (*Client) Dataset

func (c *Client) Dataset(id string) *Dataset

Dataset creates a handle to a BigQuery dataset in the client's project.

Example

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
ds := client.Dataset("my_dataset")
fmt.Println(ds)

func (*Client) DatasetInProject

func (c *Client) DatasetInProject(projectID, datasetID string) *Dataset

DatasetInProject creates a handle to a BigQuery dataset in the specified project.

Example

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
ds := client.DatasetInProject("their-project-id", "their-dataset")
fmt.Println(ds)

func (*Client) Datasets

func (c *Client) Datasets(ctx context.Context) *DatasetIterator

Datasets returns an iterator over the datasets in the Client's project.

Example

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
it := client.Datasets(ctx)
_ = it // TODO: iterate using Next or iterator.Pager.

func (*Client) DatasetsInProject

func (c *Client) DatasetsInProject(ctx context.Context, projectID string) *DatasetIterator

DatasetsInProject returns an iterator over the datasets in the provided project.

Example

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
it := client.DatasetsInProject(ctx, "their-project-id")
_ = it // TODO: iterate using Next or iterator.Pager.

func (*Client) JobFromID

func (c *Client) JobFromID(ctx context.Context, id string) (*Job, error)

JobFromID creates a Job which refers to an existing BigQuery job. The job need not have been created by this package. For example, the job may have been created in the BigQuery console.

Example

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
jobID := getJobID() // Get a job ID using Job.ID, the console or elsewhere.
job, err := client.JobFromID(ctx, jobID)
if err != nil {
    // TODO: Handle error.
}
fmt.Println(job)

func (*Client) Query

func (c *Client) Query(q string) *Query

Query creates a query with string q. The returned Query may optionally be further configured before its Run method is called.

Example

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
q := client.Query("select name, num from t1")
q.DefaultProjectID = "project-id"
// TODO: set other options on the Query.
// TODO: Call Query.Run or Query.Read.

Example (Parameters)

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
q := client.Query("select num from t1 where name = @user")
q.Parameters = []bigquery.QueryParameter{
    {Name: "user", Value: "Elizabeth"},
}
// TODO: set other options on the Query.
// TODO: Call Query.Run or Query.Read.

type Compression

Compression is the type of compression to apply when writing data to Google Cloud Storage.

type Compression string
const (
    None Compression = "NONE"
    Gzip Compression = "GZIP"
)

type Copier

A Copier copies data into a BigQuery table from one or more BigQuery tables.

type Copier struct {
    CopyConfig
    // contains filtered or unexported fields
}

func (*Copier) Run

func (c *Copier) Run(ctx context.Context) (*Job, error)

Run initiates a copy job.

type CopyConfig

CopyConfig holds the configuration for a copy job.

type CopyConfig struct {
    // JobID is the ID to use for the copy job. If unset, a job ID will be automatically created.
    JobID string

    // Srcs are the tables from which data will be copied.
    Srcs []*Table

    // Dst is the table into which the data will be copied.
    Dst *Table

    // CreateDisposition specifies the circumstances under which the destination table will be created.
    // The default is CreateIfNeeded.
    CreateDisposition TableCreateDisposition

    // WriteDisposition specifies how existing data in the destination table is treated.
    // The default is WriteAppend.
    WriteDisposition TableWriteDisposition
}

type CreateTableOption

A CreateTableOption is an optional argument to CreateTable.

type CreateTableOption interface {
    // contains filtered or unexported methods
}

func TableExpiration

func TableExpiration(exp time.Time) CreateTableOption

TableExpiration returns a CreateTableOption that will cause the created table to be deleted after the expiration time.

func UseStandardSQL

func UseStandardSQL() CreateTableOption

UseStandardSQL returns a CreateTableOption to set the table to use standard SQL. The default setting is false (using legacy SQL).

func ViewQuery

func ViewQuery(query string) CreateTableOption

ViewQuery returns a CreateTableOption that causes the created table to be a virtual table defined by the supplied query. For more information see: https://cloud.google.com/bigquery/querying-data#views

type DataFormat

DataFormat describes the format of BigQuery table data.

type DataFormat string

Constants describing the format of BigQuery table data.

const (
    CSV             DataFormat = "CSV"
    Avro            DataFormat = "AVRO"
    JSON            DataFormat = "NEWLINE_DELIMITED_JSON"
    DatastoreBackup DataFormat = "DATASTORE_BACKUP"
)

type Dataset

Dataset is a reference to a BigQuery dataset.

type Dataset struct {
    ProjectID string
    DatasetID string
    // contains filtered or unexported fields
}

func (*Dataset) Create

func (d *Dataset) Create(ctx context.Context) error

Create creates a dataset in the BigQuery service. An error will be returned if the dataset already exists.

Example

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
if err := client.Dataset("my_dataset").Create(ctx); err != nil {
    // TODO: Handle error.
}

func (*Dataset) Delete

func (d *Dataset) Delete(ctx context.Context) error

Delete deletes the dataset.

Example

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
if err := client.Dataset("my_dataset").Delete(ctx); err != nil {
    // TODO: Handle error.
}

func (*Dataset) Metadata

func (d *Dataset) Metadata(ctx context.Context) (*DatasetMetadata, error)

Metadata fetches the metadata for the dataset.

Example

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
md, err := client.Dataset("my_dataset").Metadata(ctx)
if err != nil {
    // TODO: Handle error.
}
fmt.Println(md)

func (*Dataset) Table

func (d *Dataset) Table(tableID string) *Table

Table creates a handle to a BigQuery table in the dataset. To determine if a table exists, call Table.Metadata. If the table does not already exist, use Table.Create to create it.

Example

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
// Table creates a reference to the table. It does not create the actual
// table in BigQuery; to do so, use Table.Create.
t := client.Dataset("my_dataset").Table("my_table")
fmt.Println(t)

func (*Dataset) Tables

func (d *Dataset) Tables(ctx context.Context) *TableIterator

Tables returns an iterator over the tables in the Dataset.

Example

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
it := client.Dataset("my_dataset").Tables(ctx)
_ = it // TODO: iterate using Next or iterator.Pager.

type DatasetIterator

DatasetIterator iterates over the datasets in a project.

type DatasetIterator struct {
    // ListHidden causes hidden datasets to be listed when set to true.
    ListHidden bool

    // Filter restricts the datasets returned by label. The filter syntax is described in
    // https://cloud.google.com/bigquery/docs/labeling-datasets#filtering_datasets_using_labels
    Filter string
    // contains filtered or unexported fields
}

func (*DatasetIterator) Next

func (it *DatasetIterator) Next() (*Dataset, error)

Example

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
it := client.Datasets(ctx)
for {
    ds, err := it.Next()
    if err == iterator.Done {
        break
    }
    if err != nil {
        // TODO: Handle error.
    }
    fmt.Println(ds)
}

func (*DatasetIterator) PageInfo

func (it *DatasetIterator) PageInfo() *iterator.PageInfo

PageInfo supports pagination. See the google.golang.org/api/iterator package for details.

type DatasetMetadata

type DatasetMetadata struct {
    CreationTime           time.Time
    LastModifiedTime       time.Time // When the dataset or any of its tables were modified.
    DefaultTableExpiration time.Duration
    Description            string // The user-friendly description of this dataset.
    Name                   string // The user-friendly name for this dataset.
    ID                     string
    Location               string            // The geo location of the dataset.
    Labels                 map[string]string // User-provided labels.
}

type Encoding

Encoding specifies the character encoding of data to be loaded into BigQuery. See https://cloud.google.com/bigquery/docs/reference/v2/jobs#configuration.load.encoding for more details about how this is used.

type Encoding string
const (
    UTF_8      Encoding = "UTF-8"
    ISO_8859_1 Encoding = "ISO-8859-1"
)

type Error

An Error contains detailed information about a failed bigquery operation.

type Error struct {
    // Mirrors bq.ErrorProto, but drops DebugInfo
    Location, Message, Reason string
}

func (Error) Error

func (e Error) Error() string

type ExplainQueryStage

ExplainQueryStage describes one stage of a query.

type ExplainQueryStage struct {
    // Relative amount of the total time the average shard spent on CPU-bound tasks.
    ComputeRatioAvg float64

    // Relative amount of the total time the slowest shard spent on CPU-bound tasks.
    ComputeRatioMax float64

    // Unique ID for stage within plan.
    ID int64

    // Human-readable name for stage.
    Name string

    // Relative amount of the total time the average shard spent reading input.
    ReadRatioAvg float64

    // Relative amount of the total time the slowest shard spent reading input.
    ReadRatioMax float64

    // Number of records read into the stage.
    RecordsRead int64

    // Number of records written by the stage.
    RecordsWritten int64

    // Current status for the stage.
    Status string

    // List of operations within the stage in dependency order (approximately
    // chronological).
    Steps []*ExplainQueryStep

    // Relative amount of the total time the average shard spent waiting to be scheduled.
    WaitRatioAvg float64

    // Relative amount of the total time the slowest shard spent waiting to be scheduled.
    WaitRatioMax float64

    // Relative amount of the total time the average shard spent on writing output.
    WriteRatioAvg float64

    // Relative amount of the total time the slowest shard spent on writing output.
    WriteRatioMax float64
}

type ExplainQueryStep

ExplainQueryStep describes one step of a query stage.

type ExplainQueryStep struct {
    // Machine-readable operation type.
    Kind string

    // Human-readable descriptions of the substeps.
    Substeps []string
}

type ExternalData

ExternalData is a table which is stored outside of BigQuery. It is implemented by GCSReference.

type ExternalData interface {
    // contains filtered or unexported methods
}

type ExtractConfig

ExtractConfig holds the configuration for an extract job.

type ExtractConfig struct {
    // JobID is the ID to use for the extract job. If empty, a job ID will be automatically created.
    JobID string

    // Src is the table from which data will be extracted.
    Src *Table

    // Dst is the destination into which the data will be extracted.
    Dst *GCSReference

    // DisableHeader disables the printing of a header row in exported data.
    DisableHeader bool
}

type ExtractStatistics

ExtractStatistics contains statistics about an extract job.

type ExtractStatistics struct {
    // The number of files per destination URI or URI pattern specified in the
    // extract configuration. These values will be in the same order as the
    // URIs specified in the 'destinationUris' field.
    DestinationURIFileCounts []int64
}

type Extractor

An Extractor extracts data from a BigQuery table into Google Cloud Storage.

type Extractor struct {
    ExtractConfig
    // contains filtered or unexported fields
}

func (*Extractor) Run

func (e *Extractor) Run(ctx context.Context) (*Job, error)

Run initiates an extract job.

type FieldSchema

type FieldSchema struct {
    // The field name.
    // Must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_),
    // and must start with a letter or underscore.
    // The maximum length is 128 characters.
    Name string

    // A description of the field. The maximum length is 16,384 characters.
    Description string

    // Whether the field may contain multiple values.
    Repeated bool
    // Whether the field is required.  Ignored if Repeated is true.
    Required bool

    // The field data type.  If Type is Record, then this field contains a nested schema,
    // which is described by Schema.
    Type FieldType
    // Describes the nested schema if Type is set to Record.
    Schema Schema
}

type FieldType

type FieldType string
const (
    StringFieldType    FieldType = "STRING"
    BytesFieldType     FieldType = "BYTES"
    IntegerFieldType   FieldType = "INTEGER"
    FloatFieldType     FieldType = "FLOAT"
    BooleanFieldType   FieldType = "BOOLEAN"
    TimestampFieldType FieldType = "TIMESTAMP"
    RecordFieldType    FieldType = "RECORD"
    DateFieldType      FieldType = "DATE"
    TimeFieldType      FieldType = "TIME"
    DateTimeFieldType  FieldType = "DATETIME"
)

type FileConfig

FileConfig contains configuration options that pertain to files, typically text files that require interpretation to be used as a BigQuery table. A file may live in Google Cloud Storage (see GCSReference), or it may be loaded into a table from an io.Reader via NewReaderSource and Table.LoaderFrom.

type FileConfig struct {
    // SourceFormat is the format of the GCS data to be read.
    // Allowed values are: CSV, Avro, JSON, DatastoreBackup.  The default is CSV.
    SourceFormat DataFormat

    // FieldDelimiter is the separator for fields in a CSV file, used when
    // reading or exporting data. The default is ",".
    FieldDelimiter string

    // The number of rows at the top of a CSV file that BigQuery will skip when
    // reading data.
    SkipLeadingRows int64

    // AllowJaggedRows causes missing trailing optional columns to be tolerated
    // when reading CSV data. Missing values are treated as nulls.
    AllowJaggedRows bool

    // AllowQuotedNewlines sets whether quoted data sections containing
    // newlines are allowed when reading CSV data.
    AllowQuotedNewlines bool

    // Indicates if we should automatically infer the options and
    // schema for CSV and JSON sources.
    AutoDetect bool

    // Encoding is the character encoding of data to be read.
    Encoding Encoding

    // MaxBadRecords is the maximum number of bad records that will be ignored
    // when reading data.
    MaxBadRecords int64

    // IgnoreUnknownValues causes values not matching the schema to be
    // tolerated. Unknown values are ignored. For CSV this ignores extra values
    // at the end of a line. For JSON this ignores named values that do not
    // match any column name. If this field is not set, records containing
    // unknown values are treated as bad records. The MaxBadRecords field can
    // be used to customize how bad records are handled.
    IgnoreUnknownValues bool

    // Schema describes the data. It is required when reading CSV or JSON data,
    // unless the data is being loaded into a table that already exists.
    Schema Schema

    // Quote is the value used to quote data sections in a CSV file. The
    // default quotation character is the double quote ("), which is used if
    // both Quote and ForceZeroQuote are unset.
    // To specify that no character should be interpreted as a quotation
    // character, set ForceZeroQuote to true.
    // Only used when reading data.
    Quote          string
    ForceZeroQuote bool
}

type GCSReference

GCSReference is a reference to one or more Google Cloud Storage objects, which together constitute an input or output to a BigQuery operation.

type GCSReference struct {
    FileConfig

    // DestinationFormat is the format to use when writing exported files.
    // Allowed values are: CSV, Avro, JSON.  The default is CSV.
    // CSV is not supported for tables with nested or repeated fields.
    DestinationFormat DataFormat

    // Compression specifies the type of compression to apply when writing data
    // to Google Cloud Storage, or using this GCSReference as an ExternalData
    // source with CSV or JSON SourceFormat. Default is None.
    Compression Compression
    // contains filtered or unexported fields
}

func NewGCSReference

func NewGCSReference(uri ...string) *GCSReference

NewGCSReference constructs a reference to one or more Google Cloud Storage objects, which together constitute a data source or destination. In the simple case, a single URI in the form gs://bucket/object may refer to a single GCS object. Data may also be split into multiple files, if multiple URIs or URIs containing wildcards are provided. Each URI may contain one '*' wildcard character, which (if present) must come after the bucket name. For more information about the treatment of wildcards and multiple URIs, see https://cloud.google.com/bigquery/exporting-data-from-bigquery#exportingmultiple

Example

Code:

gcsRef := bigquery.NewGCSReference("gs://my-bucket/my-object")
fmt.Println(gcsRef)

type Job

A Job represents an operation which has been submitted to BigQuery for processing.

type Job struct {
    // contains filtered or unexported fields
}

func (*Job) Cancel

func (j *Job) Cancel(ctx context.Context) error

Cancel requests that a job be cancelled. This method returns without waiting for cancellation to take effect. To check whether the job has terminated, use Job.Status. Cancelled jobs may still incur costs.

func (*Job) ID

func (j *Job) ID() string

func (*Job) Read

func (j *Job) Read(ctx context.Context) (*RowIterator, error)

Read fetches the results of a query job. If j is not a query job, Read returns an error.

Example

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
q := client.Query("select name, num from t1")
// Call Query.Run to get a Job, then call Read on the job.
// Note: Query.Read is a shorthand for this.
job, err := q.Run(ctx)
if err != nil {
    // TODO: Handle error.
}
it, err := job.Read(ctx)
if err != nil {
    // TODO: Handle error.
}
_ = it // TODO: iterate using Next or iterator.Pager.

func (*Job) Status

func (j *Job) Status(ctx context.Context) (*JobStatus, error)

Status returns the current status of the job. It returns an error if the status could not be determined.

func (*Job) Wait

func (j *Job) Wait(ctx context.Context) (*JobStatus, error)

Wait blocks until the job or the context is done. It returns the final status of the job. If an error occurs while retrieving the status, Wait returns that error. But Wait returns nil if the status was retrieved successfully, even if status.Err() != nil. So callers must check both errors. See the example.

Example

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
ds := client.Dataset("my_dataset")
job, err := ds.Table("t1").CopierFrom(ds.Table("t2")).Run(ctx)
if err != nil {
    // TODO: Handle error.
}
status, err := job.Wait(ctx)
if err != nil {
    // TODO: Handle error.
}
if status.Err() != nil {
    // TODO: Handle error.
}

type JobStatistics

JobStatistics contains statistics about a job.

type JobStatistics struct {
    CreationTime        time.Time
    StartTime           time.Time
    EndTime             time.Time
    TotalBytesProcessed int64

    Details Statistics
}

type JobStatus

JobStatus contains the current State of a job, and errors encountered while processing that job.

type JobStatus struct {
    State State

    // All errors encountered during the running of the job.
    // Not all errors are fatal, so errors here do not necessarily mean that the job has completed or was unsuccessful.
    Errors []*Error

    // Statistics about the job.
    Statistics *JobStatistics
    // contains filtered or unexported fields
}

func (*JobStatus) Done

func (s *JobStatus) Done() bool

Done reports whether the job has completed. After Done returns true, the Err method will return an error if the job completed unsuccessfully.

func (*JobStatus) Err

func (s *JobStatus) Err() error

Err returns the error that caused the job to complete unsuccessfully (if any).

type LoadConfig

LoadConfig holds the configuration for a load job.

type LoadConfig struct {
    // JobID is the ID to use for the load job. If unset, a job ID will be automatically created.
    JobID string

    // Src is the source from which data will be loaded.
    Src LoadSource

    // Dst is the table into which the data will be loaded.
    Dst *Table

    // CreateDisposition specifies the circumstances under which the destination table will be created.
    // The default is CreateIfNeeded.
    CreateDisposition TableCreateDisposition

    // WriteDisposition specifies how existing data in the destination table is treated.
    // The default is WriteAppend.
    WriteDisposition TableWriteDisposition
}

type LoadSource

A LoadSource represents a source of data that can be loaded into a BigQuery table.

This package defines two LoadSources: GCSReference, for Google Cloud Storage objects, and ReaderSource, for data read from an io.Reader.

type LoadSource interface {
    // contains filtered or unexported methods
}

type LoadStatistics

LoadStatistics contains statistics about a load job.

type LoadStatistics struct {
    // The number of bytes of source data in a load job.
    InputFileBytes int64

    // The number of source files in a load job.
    InputFiles int64

    // Size of the loaded data in bytes. Note that while a load job is in the
    // running state, this value may change.
    OutputBytes int64

    // The number of rows imported in a load job. Note that while an import job is
    // in the running state, this value may change.
    OutputRows int64
}

type Loader

A Loader loads data from Google Cloud Storage into a BigQuery table.

type Loader struct {
    LoadConfig
    // contains filtered or unexported fields
}

func (*Loader) Run

func (l *Loader) Run(ctx context.Context) (*Job, error)

Run initiates a load job.

type MultiError

A MultiError contains multiple related errors.

type MultiError []error

func (MultiError) Error

func (m MultiError) Error() string

type PutMultiError

PutMultiError contains an error for each row which was not successfully inserted into a BigQuery table.

type PutMultiError []RowInsertionError

func (PutMultiError) Error

func (pme PutMultiError) Error() string

type Query

A Query queries data from a BigQuery table. Use Client.Query to create a Query.

type Query struct {
    QueryConfig
    // contains filtered or unexported fields
}

func (*Query) Read

func (q *Query) Read(ctx context.Context) (*RowIterator, error)

Read submits a query for execution and returns the results via a RowIterator. It is a shorthand for Query.Run followed by Job.Read.

Example

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
q := client.Query("select name, num from t1")
it, err := q.Read(ctx)
if err != nil {
    // TODO: Handle error.
}
_ = it // TODO: iterate using Next or iterator.Pager.

func (*Query) Run

func (q *Query) Run(ctx context.Context) (*Job, error)

Run initiates a query job.

type QueryConfig

QueryConfig holds the configuration for a query job.

type QueryConfig struct {
    // JobID is the ID to use for the query job. If this field is empty, a job ID
    // will be automatically created.
    JobID string

    // Dst is the table into which the results of the query will be written.
    // If this field is nil, a temporary table will be created.
    Dst *Table

    // The query to execute. See https://cloud.google.com/bigquery/query-reference for details.
    Q string

    // DefaultProjectID and DefaultDatasetID specify the dataset to use for unqualified table names in the query.
    // If DefaultProjectID is set, DefaultDatasetID must also be set.
    DefaultProjectID string
    DefaultDatasetID string

    // TableDefinitions describes data sources outside of BigQuery.
    // The map keys may be used as table names in the query string.
    TableDefinitions map[string]ExternalData

    // CreateDisposition specifies the circumstances under which the destination table will be created.
    // The default is CreateIfNeeded.
    CreateDisposition TableCreateDisposition

    // WriteDisposition specifies how existing data in the destination table is treated.
    // The default is WriteEmpty.
    WriteDisposition TableWriteDisposition

    // DisableQueryCache prevents results being fetched from the query cache.
    // If this field is false, results are fetched from the cache if they are available.
    // The query cache is a best-effort cache that is flushed whenever tables in the query are modified.
    // Cached results are only available when TableID is unspecified in the query's destination Table.
    // For more information, see https://cloud.google.com/bigquery/querying-data#querycaching
    DisableQueryCache bool

    // DisableFlattenedResults prevents results being flattened.
    // If this field is false, results from nested and repeated fields are flattened.
    // DisableFlattenedResults implies AllowLargeResults
    // For more information, see https://cloud.google.com/bigquery/docs/data#nested
    DisableFlattenedResults bool

    // AllowLargeResults allows the query to produce arbitrarily large result tables.
    // The destination must be a table.
    // When using this option, queries will take longer to execute, even if the result set is small.
    // For additional limitations, see https://cloud.google.com/bigquery/querying-data#largequeryresults
    AllowLargeResults bool

    // Priority specifies the priority with which to schedule the query.
    // The default priority is InteractivePriority.
    // For more information, see https://cloud.google.com/bigquery/querying-data#batchqueries
    Priority QueryPriority

    // MaxBillingTier sets the maximum billing tier for a Query.
    // Queries that have resource usage beyond this tier will fail (without
    // incurring a charge). If this field is zero, the project default will be used.
    MaxBillingTier int

    // MaxBytesBilled limits the number of bytes billed for
    // this job.  Queries that would exceed this limit will fail (without incurring
    // a charge).
    // If this field is less than 1, the project default will be
    // used.
    MaxBytesBilled int64

    // UseStandardSQL causes the query to use standard SQL.
    // The default is false (using legacy SQL).
    UseStandardSQL bool

    // Parameters is a list of query parameters. The presence of parameters
    // implies the use of standard SQL.
    // If the query uses positional syntax ("?"), then no parameter may have a name.
    // If the query uses named syntax ("@p"), then all parameters must have names.
    // It is illegal to mix positional and named syntax.
    Parameters []QueryParameter
}

type QueryParameter

A QueryParameter is a parameter to a query.

type QueryParameter struct {
    // Name is used for named parameter mode.
    // It must match the name in the query case-insensitively.
    Name string

    // Value is the value of the parameter.
    // The following Go types are supported, with their corresponding
    // Bigquery types:
    // int, int8, int16, int32, int64, uint8, uint16, uint32: INT64
    //   Note that uint, uint64 and uintptr are not supported, because
    //   they may contain values that cannot fit into a 64-bit signed integer.
    // float32, float64: FLOAT64
    // bool: BOOL
    // string: STRING
    // []byte: BYTES
    // time.Time: TIMESTAMP
    // Arrays and slices of the above.
    // Structs of the above. Only the exported fields are used.
    Value interface{}
}
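Named parameters are supplied through QueryConfig.Parameters, with each @name in the query matched by a QueryParameter of the same name. A sketch, assuming a client has been created and a table t1 exists:

```go
q := client.Query("SELECT name, num FROM t1 WHERE num > @min_num")
// The presence of parameters implies the use of standard SQL.
q.Parameters = []bigquery.QueryParameter{
    {Name: "min_num", Value: 10}, // int maps to the INT64 BigQuery type
}
it, err := q.Read(ctx)
if err != nil {
    // TODO: Handle error.
}
_ = it // TODO: iterate using Next or iterator.Pager.
```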

type QueryPriority

QueryPriority specifies a priority with which a query is to be executed.

type QueryPriority string
const (
    BatchPriority       QueryPriority = "BATCH"
    InteractivePriority QueryPriority = "INTERACTIVE"
)

type QueryStatistics

QueryStatistics contains statistics about a query job.

type QueryStatistics struct {
    // Billing tier for the job.
    BillingTier int64

    // Whether the query result was fetched from the query cache.
    CacheHit bool

    // The type of query statement, if valid.
    StatementType string

    // Total bytes billed for the job.
    TotalBytesBilled int64

    // Total bytes processed for the job.
    TotalBytesProcessed int64

    // Describes execution plan for the query.
    QueryPlan []*ExplainQueryStage

    // The number of rows affected by a DML statement. Present only for DML
    // statements INSERT, UPDATE or DELETE.
    NumDMLAffectedRows int64

    // ReferencedTables: [Output-only, Experimental] Referenced tables for
    // the job. Queries that reference more than 50 tables will not have a
    // complete list.
    ReferencedTables []*Table

    // The schema of the results. Present only for successful dry run of
    // non-legacy SQL queries.
    Schema Schema

    // Standard SQL: list of undeclared query parameter names detected during a
    // dry run validation.
    UndeclaredQueryParameterNames []string
}

type ReaderSource

A ReaderSource is a source for a load operation that gets data from an io.Reader.

type ReaderSource struct {
    FileConfig
    // contains filtered or unexported fields
}

func NewReaderSource

func NewReaderSource(r io.Reader) *ReaderSource

NewReaderSource creates a ReaderSource from an io.Reader. You may optionally configure properties on the ReaderSource that describe the data being read, before passing it to Table.LoaderFrom.

type RowInsertionError

RowInsertionError contains all errors that occurred when attempting to insert a row.

type RowInsertionError struct {
    InsertID string // The InsertID associated with the affected row.
    RowIndex int    // The 0-based index of the affected row in the batch of rows being inserted.
    Errors   MultiError
}

func (*RowInsertionError) Error

func (e *RowInsertionError) Error() string

type RowIterator

A RowIterator provides access to the result of a BigQuery lookup.

type RowIterator struct {

    // StartIndex can be set before the first call to Next. If PageInfo().Token
    // is also set, StartIndex is ignored.
    StartIndex uint64
    // contains filtered or unexported fields
}

func (*RowIterator) Next

func (it *RowIterator) Next(dst interface{}) error

Next loads the next row into dst. Its return value is iterator.Done if there are no more results. Once Next returns iterator.Done, all subsequent calls will return iterator.Done.

dst may implement ValueLoader, or may be a *[]Value, *map[string]Value, or struct pointer.

If dst is a *[]Value, it will be set to a new []Value whose i'th element will be populated with the i'th column of the row.

If dst is a *map[string]Value, a new map will be created if dst is nil. Then for each schema column name, the map key of that name will be set to the column's value.

If dst is a pointer to a struct, each column in the schema will be matched with an exported field of the struct that has the same name, ignoring case. Unmatched schema columns and struct fields will be ignored.

Each BigQuery column type corresponds to one or more Go types; a matching struct field must be of the correct type. The correspondences are:

STRING      string
BOOL        bool
INTEGER     int, int8, int16, int32, int64, uint8, uint16, uint32
FLOAT       float32, float64
BYTES       []byte
TIMESTAMP   time.Time
DATE        civil.Date
TIME        civil.Time
DATETIME    civil.DateTime

A repeated field corresponds to a slice or array of the element type. A RECORD type (nested schema) corresponds to a nested struct or struct pointer. All calls to Next on the same iterator must use the same struct type.

It is an error to attempt to read a BigQuery NULL value into a struct field. If your table contains NULLs, use a *[]Value or *map[string]Value.

Example

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
q := client.Query("select name, num from t1")
it, err := q.Read(ctx)
if err != nil {
    // TODO: Handle error.
}
for {
    var row []bigquery.Value
    err := it.Next(&row)
    if err == iterator.Done {
        break
    }
    if err != nil {
        // TODO: Handle error.
    }
    fmt.Println(row)
}

Example (Struct)

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}

type score struct {
    Name string
    Num  int
}

q := client.Query("select name, num from t1")
it, err := q.Read(ctx)
if err != nil {
    // TODO: Handle error.
}
for {
    var s score
    err := it.Next(&s)
    if err == iterator.Done {
        break
    }
    if err != nil {
        // TODO: Handle error.
    }
    fmt.Println(s)
}

func (*RowIterator) PageInfo

func (it *RowIterator) PageInfo() *iterator.PageInfo

PageInfo supports pagination. See the google.golang.org/api/iterator package for details.

type Schema

Schema describes the fields in a table or query result.

type Schema []*FieldSchema

func InferSchema

func InferSchema(st interface{}) (Schema, error)

InferSchema tries to derive a BigQuery schema from the supplied struct value. NOTE: All fields in the returned Schema are configured to be required, unless the corresponding field in the supplied struct is a slice or array.

It is an error if the struct (including nested structs) contains any exported fields that are pointers or one of the following types: uint, uint64, uintptr, map, interface, complex64, complex128, func, chan. Future versions may handle these cases without error.

Recursively defined structs are also disallowed.

Example

Code:

type Item struct {
    Name  string
    Size  float64
    Count int
}
schema, err := bigquery.InferSchema(Item{})
if err != nil {
    fmt.Println(err)
    // TODO: Handle error.
}
for _, fs := range schema {
    fmt.Println(fs.Name, fs.Type)
}

Output:

Name STRING
Size FLOAT
Count INTEGER

Example (Tags)

Code:

type Item struct {
    Name   string
    Size   float64
    Count  int    `bigquery:"number"`
    Secret []byte `bigquery:"-"`
}
schema, err := bigquery.InferSchema(Item{})
if err != nil {
    fmt.Println(err)
    // TODO: Handle error.
}
for _, fs := range schema {
    fmt.Println(fs.Name, fs.Type)
}

Output:

Name STRING
Size FLOAT
number INTEGER

type State

State is one of a sequence of states that a Job progresses through as it is processed.

type State int
const (
    Pending State = iota
    Running
    Done
)

type Statistics

Statistics is one of ExtractStatistics, LoadStatistics or QueryStatistics.

type Statistics interface {
    // contains filtered or unexported methods
}
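Because Statistics is a closed interface, the concrete statistics type can be recovered with a type switch on JobStatistics.Details. A sketch, assuming status was obtained from Job.Status or Job.Wait and its Statistics field is non-nil:

```go
switch s := status.Statistics.Details.(type) {
case *bigquery.QueryStatistics:
    fmt.Println("bytes billed:", s.TotalBytesBilled)
case *bigquery.LoadStatistics:
    fmt.Println("rows loaded:", s.OutputRows)
case *bigquery.ExtractStatistics:
    fmt.Println("extract statistics:", s)
}
```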

type StreamingBuffer

StreamingBuffer holds information about the streaming buffer.

type StreamingBuffer struct {
    // A lower-bound estimate of the number of bytes currently in the streaming
    // buffer.
    EstimatedBytes uint64

    // A lower-bound estimate of the number of rows currently in the streaming
    // buffer.
    EstimatedRows uint64

    // The time of the oldest entry in the streaming buffer.
    OldestEntryTime time.Time
}

type StructSaver

StructSaver implements ValueSaver for a struct. The struct is converted to a map of values by using the values of struct fields corresponding to schema fields. Additional and missing fields are ignored, as are nested struct pointers that are nil.

type StructSaver struct {
    // Schema determines what fields of the struct are uploaded. It should
    // match the table's schema.
    Schema Schema

    // If non-empty, BigQuery will use InsertID to de-duplicate insertions
    // of this row on a best-effort basis.
    InsertID string

    // Struct should be a struct or a pointer to a struct.
    Struct interface{}
}

func (*StructSaver) Save

func (ss *StructSaver) Save() (row map[string]Value, insertID string, err error)

Save implements ValueSaver.

type Table

A Table is a reference to a BigQuery table.

type Table struct {
    // ProjectID, DatasetID and TableID may be omitted if the Table is the destination for a query.
    // In this case the result will be stored in an ephemeral table.
    ProjectID string
    DatasetID string
    // TableID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_).
    // The maximum length is 1,024 characters.
    TableID string
    // contains filtered or unexported fields
}

func (*Table) CopierFrom

func (t *Table) CopierFrom(srcs ...*Table) *Copier

CopierFrom returns a Copier which can be used to copy data into a BigQuery table from one or more BigQuery tables. The returned Copier may optionally be further configured before its Run method is called.

Example

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
ds := client.Dataset("my_dataset")
c := ds.Table("combined").CopierFrom(ds.Table("t1"), ds.Table("t2"))
c.WriteDisposition = bigquery.WriteTruncate
// TODO: set other options on the Copier.
job, err := c.Run(ctx)
if err != nil {
    // TODO: Handle error.
}
status, err := job.Wait(ctx)
if err != nil {
    // TODO: Handle error.
}
if status.Err() != nil {
    // TODO: Handle error.
}

func (*Table) Create

func (t *Table) Create(ctx context.Context, options ...CreateTableOption) error

Create creates a table in the BigQuery service.

Example

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
t := client.Dataset("my_dataset").Table("new-table")
if err := t.Create(ctx); err != nil {
    // TODO: Handle error.
}

Example (Schema)

Code:

ctx := context.Background()
// Infer table schema from a Go type.
schema, err := bigquery.InferSchema(Item{})
if err != nil {
    // TODO: Handle error.
}
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
t := client.Dataset("my_dataset").Table("new-table")
if err := t.Create(ctx, schema); err != nil {
    // TODO: Handle error.
}

func (*Table) Delete

func (t *Table) Delete(ctx context.Context) error

Delete deletes the table.

Example

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
if err := client.Dataset("my_dataset").Table("my_table").Delete(ctx); err != nil {
    // TODO: Handle error.
}

func (*Table) ExtractorTo

func (t *Table) ExtractorTo(dst *GCSReference) *Extractor

ExtractorTo returns an Extractor which can be used to extract data from a BigQuery table into Google Cloud Storage. The returned Extractor may optionally be further configured before its Run method is called.

Example

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
gcsRef := bigquery.NewGCSReference("gs://my-bucket/my-object")
gcsRef.FieldDelimiter = ":"
// TODO: set other options on the GCSReference.
ds := client.Dataset("my_dataset")
extractor := ds.Table("my_table").ExtractorTo(gcsRef)
extractor.DisableHeader = true
// TODO: set other options on the Extractor.
job, err := extractor.Run(ctx)
if err != nil {
    // TODO: Handle error.
}
status, err := job.Wait(ctx)
if err != nil {
    // TODO: Handle error.
}
if status.Err() != nil {
    // TODO: Handle error.
}

func (*Table) FullyQualifiedName

func (t *Table) FullyQualifiedName() string

FullyQualifiedName returns the ID of the table in projectID:datasetID.tableID format.

func (*Table) LoaderFrom

func (t *Table) LoaderFrom(src LoadSource) *Loader

LoaderFrom returns a Loader which can be used to load data into a BigQuery table. The returned Loader may optionally be further configured before its Run method is called.

Example

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
gcsRef := bigquery.NewGCSReference("gs://my-bucket/my-object")
gcsRef.AllowJaggedRows = true
// TODO: set other options on the GCSReference.
ds := client.Dataset("my_dataset")
loader := ds.Table("my_table").LoaderFrom(gcsRef)
loader.CreateDisposition = bigquery.CreateNever
// TODO: set other options on the Loader.
job, err := loader.Run(ctx)
if err != nil {
    // TODO: Handle error.
}
status, err := job.Wait(ctx)
if err != nil {
    // TODO: Handle error.
}
if status.Err() != nil {
    // TODO: Handle error.
}

Example (Reader)

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
f, err := os.Open("data.csv")
if err != nil {
    // TODO: Handle error.
}
rs := bigquery.NewReaderSource(f)
rs.AllowJaggedRows = true
// TODO: set other options on the ReaderSource.
ds := client.Dataset("my_dataset")
loader := ds.Table("my_table").LoaderFrom(rs)
loader.CreateDisposition = bigquery.CreateNever
// TODO: set other options on the Loader.
job, err := loader.Run(ctx)
if err != nil {
    // TODO: Handle error.
}
status, err := job.Wait(ctx)
if err != nil {
    // TODO: Handle error.
}
if status.Err() != nil {
    // TODO: Handle error.
}

func (*Table) Metadata

func (t *Table) Metadata(ctx context.Context) (*TableMetadata, error)

Metadata fetches the metadata for the table.

Example

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
md, err := client.Dataset("my_dataset").Table("my_table").Metadata(ctx)
if err != nil {
    // TODO: Handle error.
}
fmt.Println(md)

func (*Table) Read

func (t *Table) Read(ctx context.Context) *RowIterator

Read fetches the contents of the table.

Example

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
it := client.Dataset("my_dataset").Table("my_table").Read(ctx)
_ = it // TODO: iterate using Next or iterator.Pager.

func (*Table) Update

func (t *Table) Update(ctx context.Context, tm TableMetadataToUpdate) (*TableMetadata, error)

Update modifies specific Table metadata fields.

Example

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
t := client.Dataset("my_dataset").Table("my_table")
tm, err := t.Update(ctx, bigquery.TableMetadataToUpdate{
    Description: "my favorite table",
})
if err != nil {
    // TODO: Handle error.
}
fmt.Println(tm)

func (*Table) Uploader

func (t *Table) Uploader() *Uploader

Uploader returns an Uploader that can be used to append rows to t. The returned Uploader may optionally be further configured before its Put method is called.

Example

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
u := client.Dataset("my_dataset").Table("my_table").Uploader()
_ = u // TODO: Use u.

Example (Options)

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
u := client.Dataset("my_dataset").Table("my_table").Uploader()
u.SkipInvalidRows = true
u.IgnoreUnknownValues = true
_ = u // TODO: Use u.

type TableCreateDisposition

TableCreateDisposition specifies the circumstances under which the destination table will be created. The default is CreateIfNeeded.

type TableCreateDisposition string
const (
    // CreateIfNeeded will create the table if it does not already exist.
    // Tables are created atomically on successful completion of a job.
    CreateIfNeeded TableCreateDisposition = "CREATE_IF_NEEDED"

    // CreateNever ensures the table must already exist and will not be
    // automatically created.
    CreateNever TableCreateDisposition = "CREATE_NEVER"
)

type TableIterator

A TableIterator is an iterator over Tables.

type TableIterator struct {
    // contains filtered or unexported fields
}

func (*TableIterator) Next

func (it *TableIterator) Next() (*Table, error)

Next returns the next result. Its second return value is Done if there are no more results. Once Next returns Done, all subsequent calls will return Done.

Example

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
it := client.Dataset("my_dataset").Tables(ctx)
for {
    t, err := it.Next()
    if err == iterator.Done {
        break
    }
    if err != nil {
        // TODO: Handle error.
    }
    fmt.Println(t)
}

func (*TableIterator) PageInfo

func (it *TableIterator) PageInfo() *iterator.PageInfo

PageInfo supports pagination. See the google.golang.org/api/iterator package for details.

type TableMetadata

TableMetadata contains information about a BigQuery table.

type TableMetadata struct {
    Description string // The user-friendly description of this table.
    Name        string // The user-friendly name for this table.
    Schema      Schema
    View        string

    ID   string // An opaque ID uniquely identifying the table.
    Type TableType

    // The time when this table expires. If not set, the table will persist
    // indefinitely. Expired tables will be deleted and their storage reclaimed.
    ExpirationTime time.Time

    CreationTime     time.Time
    LastModifiedTime time.Time

    // The size of the table in bytes.
    // This does not include data that is being buffered during a streaming insert.
    NumBytes int64

    // The number of rows of data in this table.
    // This does not include data that is being buffered during a streaming insert.
    NumRows uint64

    // The time-based partitioning settings for this table.
    TimePartitioning *TimePartitioning

    // Contains information regarding this table's streaming buffer, if one is
    // present. This field will be nil if the table is not being streamed to or if
    // there is no data in the streaming buffer.
    StreamingBuffer *StreamingBuffer
}

type TableMetadataToUpdate

TableMetadataToUpdate is used when updating a table's metadata. Only non-nil fields will be updated.

type TableMetadataToUpdate struct {
    // Description is the user-friendly description of this table.
    Description optional.String

    // Name is the user-friendly name for this table.
    Name optional.String

    // Schema is the table's schema.
    // When updating a schema, you can add columns but not remove them.
    Schema Schema
}

type TableType

TableType is the type of table.

type TableType string
const (
    RegularTable  TableType = "TABLE"
    ViewTable     TableType = "VIEW"
    ExternalTable TableType = "EXTERNAL"
)

type TableWriteDisposition

TableWriteDisposition specifies how existing data in a destination table is treated. The default is WriteAppend.

type TableWriteDisposition string
const (
    // WriteAppend will append to any existing data in the destination table.
    // Data is appended atomically on successful completion of a job.
    WriteAppend TableWriteDisposition = "WRITE_APPEND"

    // WriteTruncate overwrites the existing data in the destination table.
    // Data is overwritten atomically on successful completion of a job.
    WriteTruncate TableWriteDisposition = "WRITE_TRUNCATE"

    // WriteEmpty fails writes if the destination table already contains data.
    WriteEmpty TableWriteDisposition = "WRITE_EMPTY"
)

type TimePartitioning

TimePartitioning is a CreateTableOption that can be used to set time-based partitioning on a table. For more information see: https://cloud.google.com/bigquery/docs/creating-partitioned-tables

type TimePartitioning struct {
    // (Optional) The amount of time to keep the storage for a partition.
    // If the duration is empty (0), data in the partitions does not expire.
    Expiration time.Duration
}
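As a CreateTableOption, a TimePartitioning value is passed directly to Table.Create. A sketch, assuming a client has been created and the dataset my_dataset exists:

```go
t := client.Dataset("my_dataset").Table("partitioned_table")
err := t.Create(ctx, bigquery.TimePartitioning{
    // Partitions older than one day are eligible for deletion.
    Expiration: 24 * time.Hour,
})
if err != nil {
    // TODO: Handle error.
}
```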

type Uploader

An Uploader does streaming inserts into a BigQuery table. It is safe for concurrent use.

type Uploader struct {

    // SkipInvalidRows causes rows containing invalid data to be silently
    // ignored. The default value is false, which causes the entire request to
    // fail if there is an attempt to insert an invalid row.
    SkipInvalidRows bool

    // IgnoreUnknownValues causes values not matching the schema to be ignored.
    // The default value is false, which causes records containing such values
    // to be treated as invalid records.
    IgnoreUnknownValues bool

    // A TableTemplateSuffix allows Uploaders to create tables automatically.
    //
    // Experimental: this option is experimental and may be modified or removed in future versions,
    // regardless of any other documented package stability guarantees.
    //
    // When you specify a suffix, the table you upload data to
    // will be used as a template for creating a new table, with the same schema,
    // called <table> + <suffix>.
    //
    // More information is available at
    // https://cloud.google.com/bigquery/streaming-data-into-bigquery#template-tables
    TableTemplateSuffix string
    // contains filtered or unexported fields
}
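Setting TableTemplateSuffix before calling Put routes rows to an automatically created table based on the template. A sketch, assuming the base table my_table exists and serves as the template:

```go
u := client.Dataset("my_dataset").Table("my_table").Uploader()
// Rows are streamed to my_table_20170601, created on demand
// with the same schema as my_table.
u.TableTemplateSuffix = "_20170601"
if err := u.Put(ctx, rows); err != nil {
    // TODO: Handle error.
}
```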

func (*Uploader) Put

func (u *Uploader) Put(ctx context.Context, src interface{}) error

Put uploads one or more rows to the BigQuery service.

If src is ValueSaver, then its Save method is called to produce a row for uploading.

If src is a struct or pointer to a struct, then a schema is inferred from it and used to create a StructSaver. The InsertID of the StructSaver will be empty.

If src is a slice of ValueSavers, structs, or struct pointers, then each element of the slice is treated as above, and multiple rows are uploaded.

Put returns a PutMultiError if one or more rows failed to be uploaded. The PutMultiError contains a RowInsertionError for each failed row.

Put will retry on temporary errors (see https://cloud.google.com/bigquery/troubleshooting-errors). This can result in duplicate rows if you do not use insert IDs. Also, if the error persists, the call will run indefinitely. Pass a context with a timeout to prevent hanging calls.

Example

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
u := client.Dataset("my_dataset").Table("my_table").Uploader()
// Item implements the ValueSaver interface.
items := []*Item{
    {Name: "n1", Size: 32.6, Count: 7},
    {Name: "n2", Size: 4, Count: 2},
    {Name: "n3", Size: 101.5, Count: 1},
}
if err := u.Put(ctx, items); err != nil {
    // TODO: Handle error.
}

Example (Struct)

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
u := client.Dataset("my_dataset").Table("my_table").Uploader()

type score struct {
    Name string
    Num  int
}
scores := []score{
    {Name: "n1", Num: 12},
    {Name: "n2", Num: 31},
    {Name: "n3", Num: 7},
}
// Schema is inferred from the score type.
if err := u.Put(ctx, scores); err != nil {
    // TODO: Handle error.
}

Example (StructSaver)

Code:

ctx := context.Background()
client, err := bigquery.NewClient(ctx, "project-id")
if err != nil {
    // TODO: Handle error.
}
u := client.Dataset("my_dataset").Table("my_table").Uploader()

type score struct {
    Name string
    Num  int
}

// Assume schema holds the table's schema.
savers := []*bigquery.StructSaver{
    {Struct: score{Name: "n1", Num: 12}, Schema: schema, InsertID: "id1"},
    {Struct: score{Name: "n2", Num: 31}, Schema: schema, InsertID: "id2"},
    {Struct: score{Name: "n3", Num: 7}, Schema: schema, InsertID: "id3"},
}
if err := u.Put(ctx, savers); err != nil {
    // TODO: Handle error.
}

type Value

Value stores the contents of a single cell from a BigQuery result.

type Value interface{}

type ValueLoader

ValueLoader stores a slice of Values representing a result row from a Read operation. See RowIterator.Next for more information.

type ValueLoader interface {
    Load(v []Value, s Schema) error
}
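A minimal sketch of implementing this interface. It uses local stand-ins so it is self-contained: Value is aliased to the empty interface (as in the package), and Schema is simplified to a slice of field names, whereas the real Schema carries full field metadata.

```go
package main

import "fmt"

// Value stands in for bigquery.Value (itself an empty interface).
type Value = interface{}

// Schema is a hypothetical simplification: just the field names.
type Schema []string

// countRow follows the ValueLoader contract: Load receives one result
// row together with the schema describing its fields.
type countRow struct {
	fields map[string]Value
}

func (c *countRow) Load(v []Value, s Schema) error {
	// Copy the values into a name-keyed map owned by this row.
	c.fields = make(map[string]Value, len(v))
	for i, name := range s {
		if i >= len(v) {
			break
		}
		c.fields[name] = v[i]
	}
	return nil
}

func main() {
	var r countRow
	if err := r.Load([]Value{1910, 133}, Schema{"year", "num"}); err != nil {
		// TODO: Handle error.
	}
	fmt.Println(r.fields["year"]) // 1910
}
```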

type ValueSaver

A ValueSaver returns a row of data to be inserted into a table.

type ValueSaver interface {
    // Save returns a row to be inserted into a BigQuery table, represented
    // as a map from field name to Value.
    // If insertID is non-empty, BigQuery will use it to de-duplicate
    // insertions of this row on a best-effort basis.
    Save() (row map[string]Value, insertID string, err error)
}
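The Item type used in the first Put example above could satisfy this contract roughly as follows. This is a self-contained sketch: Value is aliased locally to the empty interface, and reusing Name as the insert ID is an illustrative choice, not a package recommendation.

```go
package main

import "fmt"

// Value stands in for bigquery.Value (itself an empty interface).
type Value = interface{}

// Item satisfies the ValueSaver contract: Save returns one row as a
// field-name-to-value map, plus an optional de-duplication insert ID.
type Item struct {
	Name  string
	Size  float64
	Count int
}

func (i *Item) Save() (map[string]Value, string, error) {
	return map[string]Value{
		"name":  i.Name,
		"size":  i.Size,
		"count": i.Count,
	}, i.Name, nil // reuse Name as the best-effort insert ID
}

func main() {
	row, id, err := (&Item{Name: "n1", Size: 32.6, Count: 7}).Save()
	if err != nil {
		// TODO: Handle error.
	}
	fmt.Println(id, row["count"]) // n1 7
}
```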

type ValuesSaver

ValuesSaver implements ValueSaver for a slice of Values.

type ValuesSaver struct {
    Schema Schema

    // If non-empty, BigQuery will use InsertID to de-duplicate insertions
    // of this row on a best-effort basis.
    InsertID string

    Row []Value
}

func (*ValuesSaver) Save

func (vls *ValuesSaver) Save() (map[string]Value, string, error)

Save implements ValueSaver.
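The essence of that implementation is zipping Row with the schema's fields to build the row map. A self-contained sketch, with the real Schema simplified to a slice of field names:

```go
package main

import "fmt"

// Value stands in for bigquery.Value (itself an empty interface).
type Value = interface{}

// valuesSaver mirrors the shape of bigquery.ValuesSaver: Save pairs
// each element of Row with the corresponding schema field name.
type valuesSaver struct {
	FieldNames []string // simplified stand-in for the Schema field
	InsertID   string
	Row        []Value
}

func (v *valuesSaver) Save() (map[string]Value, string, error) {
	if len(v.Row) != len(v.FieldNames) {
		return nil, "", fmt.Errorf("row has %d values, schema has %d fields",
			len(v.Row), len(v.FieldNames))
	}
	row := make(map[string]Value, len(v.Row))
	for i, name := range v.FieldNames {
		row[name] = v.Row[i]
	}
	return row, v.InsertID, nil
}

func main() {
	s := &valuesSaver{
		FieldNames: []string{"name", "num"},
		InsertID:   "id1",
		Row:        []Value{"n1", 12},
	}
	row, id, err := s.Save()
	if err != nil {
		// TODO: Handle error.
	}
	fmt.Println(row["name"], id) // n1 id1
}
```

Keeping Row aligned with Schema is the caller's responsibility; a length mismatch is the most common mistake when constructing a ValuesSaver by hand.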