storage - ActiveState ActiveGo 1.8

Package storage

import "cloud.google.com/go/storage"

Overview

Package storage provides an easy way to work with Google Cloud Storage. Google Cloud Storage stores data in named objects, which are grouped into buckets.

More information about Google Cloud Storage is available at https://cloud.google.com/storage/docs.

All of the methods of this package use exponential backoff to retry calls that fail with certain errors, as described in https://cloud.google.com/storage/docs/exponential-backoff.

Note: This package is in beta. Some backwards-incompatible changes may occur.

Creating a Client

To start working with this package, create a client:

ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: Handle error.
}

Buckets

A Google Cloud Storage bucket is a collection of objects. To work with a bucket, make a bucket handle:

bkt := client.Bucket(bucketName)

A handle is a reference to a bucket. You can have a handle even if the bucket doesn't exist yet. To create a bucket in Google Cloud Storage, call Create on the handle:

if err := bkt.Create(ctx, projectID, nil); err != nil {
    // TODO: Handle error.
}

Note that although buckets are associated with projects, bucket names are global across all projects.

Each bucket has associated metadata, represented in this package by BucketAttrs. The third argument to BucketHandle.Create allows you to set the initial BucketAttrs of a bucket. To retrieve a bucket's attributes, use Attrs:

attrs, err := bkt.Attrs(ctx)
if err != nil {
    // TODO: Handle error.
}
fmt.Printf("bucket %s, created at %s, is located in %s with storage class %s\n",
    attrs.Name, attrs.Created, attrs.Location, attrs.StorageClass)

Objects

An object holds arbitrary data as a sequence of bytes, like a file. You refer to objects using a handle, just as with buckets. You can use the standard Go io.Reader and io.Writer interfaces to read and write object data:

obj := bkt.Object("data")
// Write something to obj.
// w implements io.Writer.
w := obj.NewWriter(ctx)
// Write some text to obj. This will overwrite whatever is there.
if _, err := fmt.Fprintf(w, "This object contains text.\n"); err != nil {
    // TODO: Handle error.
}
// Close, just like writing a file.
if err := w.Close(); err != nil {
    // TODO: Handle error.
}

// Read it back.
r, err := obj.NewReader(ctx)
if err != nil {
    // TODO: Handle error.
}
defer r.Close()
if _, err := io.Copy(os.Stdout, r); err != nil {
    // TODO: Handle error.
}
// Prints "This object contains text."

Objects also have attributes, which you can fetch with Attrs:

objAttrs, err := obj.Attrs(ctx)
if err != nil {
    // TODO: Handle error.
}
fmt.Printf("object %s has size %d and can be read using %s\n",
    objAttrs.Name, objAttrs.Size, objAttrs.MediaLink)

ACLs

Both objects and buckets have ACLs (Access Control Lists). An ACL is a list of ACLRules, each of which specifies the role of a user, group or project. ACLs are suitable for fine-grained control, but you may prefer using IAM to control access at the project level (see https://cloud.google.com/storage/docs/access-control/iam).

To list the ACLs of a bucket or object, obtain an ACLHandle and call its List method:

acls, err := obj.ACL().List(ctx)
if err != nil {
    // TODO: Handle error.
}
for _, rule := range acls {
    fmt.Printf("%s has role %s\n", rule.Entity, rule.Role)
}

You can also set and delete ACLs.
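
For example, continuing with the object handle obj from above, a read grant can be added and later revoked with Set and Delete (a sketch; the entity and role shown are just one possible choice):

```go
// Let any authenticated user read the object.
if err := obj.ACL().Set(ctx, storage.AllAuthenticatedUsers, storage.RoleReader); err != nil {
    // TODO: Handle error.
}
// Later, revoke that grant again.
if err := obj.ACL().Delete(ctx, storage.AllAuthenticatedUsers); err != nil {
    // TODO: Handle error.
}
```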

Conditions

Every object has a generation and a metageneration. The generation changes whenever the content changes, and the metageneration changes whenever the metadata changes. Conditions let you check these values before an operation; the operation only executes if the conditions match. You can use conditions to prevent race conditions in read-modify-write operations.

For example, say you've read an object's metadata into objAttrs. Now you want to write to that object, but only if its contents haven't changed since you read it. Here is how to express that:

w = obj.If(storage.Conditions{GenerationMatch: objAttrs.Generation}).NewWriter(ctx)
// Proceed with writing as above.

Signed URLs

You can obtain a URL that lets anyone read or write an object for a limited time. You don't need to create a client to do this. See the documentation of SignedURL for details.

url, err := storage.SignedURL(bucketName, "shared-object", opts)
if err != nil {
    // TODO: Handle error.
}
fmt.Println(url)

Authentication

See examples of authorization and authentication at https://godoc.org/cloud.google.com/go#pkg-examples.

Index

Constants
Variables
func SignedURL(bucket, name string, opts *SignedURLOptions) (string, error)
type ACLEntity
type ACLHandle
    func (a *ACLHandle) Delete(ctx context.Context, entity ACLEntity) error
    func (a *ACLHandle) List(ctx context.Context) ([]ACLRule, error)
    func (a *ACLHandle) Set(ctx context.Context, entity ACLEntity, role ACLRole) error
type ACLRole
type ACLRule
type BucketAttrs
type BucketAttrsToUpdate
    func (ua *BucketAttrsToUpdate) DeleteLabel(name string)
    func (ua *BucketAttrsToUpdate) SetLabel(name, value string)
type BucketConditions
type BucketHandle
    func (b *BucketHandle) ACL() *ACLHandle
    func (b *BucketHandle) Attrs(ctx context.Context) (*BucketAttrs, error)
    func (b *BucketHandle) Create(ctx context.Context, projectID string, attrs *BucketAttrs) error
    func (b *BucketHandle) DefaultObjectACL() *ACLHandle
    func (b *BucketHandle) Delete(ctx context.Context) error
    func (b *BucketHandle) IAM() *iam.Handle
    func (b *BucketHandle) If(conds BucketConditions) *BucketHandle
    func (b *BucketHandle) Object(name string) *ObjectHandle
    func (b *BucketHandle) Objects(ctx context.Context, q *Query) *ObjectIterator
    func (b *BucketHandle) Update(ctx context.Context, uattrs BucketAttrsToUpdate) (*BucketAttrs, error)
type BucketIterator
    func (it *BucketIterator) Next() (*BucketAttrs, error)
    func (it *BucketIterator) PageInfo() *iterator.PageInfo
type Client
    func NewClient(ctx context.Context, opts ...option.ClientOption) (*Client, error)
    func (c *Client) Bucket(name string) *BucketHandle
    func (c *Client) Buckets(ctx context.Context, projectID string) *BucketIterator
    func (c *Client) Close() error
type Composer
    func (c *Composer) Run(ctx context.Context) (*ObjectAttrs, error)
type Conditions
type Copier
    func (c *Copier) Run(ctx context.Context) (*ObjectAttrs, error)
type ObjectAttrs
type ObjectAttrsToUpdate
type ObjectHandle
    func (o *ObjectHandle) ACL() *ACLHandle
    func (o *ObjectHandle) Attrs(ctx context.Context) (*ObjectAttrs, error)
    func (dst *ObjectHandle) ComposerFrom(srcs ...*ObjectHandle) *Composer
    func (dst *ObjectHandle) CopierFrom(src *ObjectHandle) *Copier
    func (o *ObjectHandle) Delete(ctx context.Context) error
    func (o *ObjectHandle) Generation(gen int64) *ObjectHandle
    func (o *ObjectHandle) If(conds Conditions) *ObjectHandle
    func (o *ObjectHandle) Key(encryptionKey []byte) *ObjectHandle
    func (o *ObjectHandle) NewRangeReader(ctx context.Context, offset, length int64) (*Reader, error)
    func (o *ObjectHandle) NewReader(ctx context.Context) (*Reader, error)
    func (o *ObjectHandle) NewWriter(ctx context.Context) *Writer
    func (o *ObjectHandle) Update(ctx context.Context, uattrs ObjectAttrsToUpdate) (*ObjectAttrs, error)
type ObjectIterator
    func (it *ObjectIterator) Next() (*ObjectAttrs, error)
    func (it *ObjectIterator) PageInfo() *iterator.PageInfo
type Query
type Reader
    func (r *Reader) Close() error
    func (r *Reader) ContentType() string
    func (r *Reader) Read(p []byte) (int, error)
    func (r *Reader) Remain() int64
    func (r *Reader) Size() int64
type SignedURLOptions
type Writer
    func (w *Writer) Attrs() *ObjectAttrs
    func (w *Writer) Close() error
    func (w *Writer) CloseWithError(err error) error
    func (w *Writer) Write(p []byte) (n int, err error)

Package files

acl.go bucket.go copy.go doc.go go17.go iam.go invoke.go reader.go storage.go writer.go

Constants

const (
    // ScopeFullControl grants permissions to manage your
    // data and permissions in Google Cloud Storage.
    ScopeFullControl = raw.DevstorageFullControlScope

    // ScopeReadOnly grants permissions to
    // view your data in Google Cloud Storage.
    ScopeReadOnly = raw.DevstorageReadOnlyScope

    // ScopeReadWrite grants permissions to manage your
    // data in Google Cloud Storage.
    ScopeReadWrite = raw.DevstorageReadWriteScope
)

Variables

var (
    ErrBucketNotExist = errors.New("storage: bucket doesn't exist")
    ErrObjectNotExist = errors.New("storage: object doesn't exist")
)

func SignedURL

func SignedURL(bucket, name string, opts *SignedURLOptions) (string, error)

SignedURL returns a URL for the specified object. Signed URLs allow users access to a restricted resource for a limited time without needing a Google account or signing in. For more information about signed URLs, see https://cloud.google.com/storage/docs/accesscontrol#Signed-URLs.

Example

Code:

pkey, err := ioutil.ReadFile("my-private-key.pem")
if err != nil {
    // TODO: handle error.
}
url, err := storage.SignedURL("my-bucket", "my-object", &storage.SignedURLOptions{
    GoogleAccessID: "xxx@developer.gserviceaccount.com",
    PrivateKey:     pkey,
    Method:         "GET",
    Expires:        time.Now().Add(48 * time.Hour),
})
if err != nil {
    // TODO: handle error.
}
fmt.Println(url)

type ACLEntity

ACLEntity refers to a user or group. They are sometimes referred to as grantees.

It could be in the form of: "user-<userId>", "user-<email>", "group-<groupId>", "group-<email>", "domain-<domain>" and "project-team-<projectId>".

Or one of the predefined constants: AllUsers, AllAuthenticatedUsers.

type ACLEntity string
const (
    AllUsers              ACLEntity = "allUsers"
    AllAuthenticatedUsers ACLEntity = "allAuthenticatedUsers"
)

type ACLHandle

ACLHandle provides operations on an access control list for a Google Cloud Storage bucket or object.

type ACLHandle struct {
    // contains filtered or unexported fields
}

func (*ACLHandle) Delete

func (a *ACLHandle) Delete(ctx context.Context, entity ACLEntity) error

Delete permanently deletes the ACL entry for the given entity.

Example

Code:

ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
// No longer grant access to the bucket to everyone on the Internet.
if err := client.Bucket("my-bucket").ACL().Delete(ctx, storage.AllUsers); err != nil {
    // TODO: handle error.
}

func (*ACLHandle) List

func (a *ACLHandle) List(ctx context.Context) ([]ACLRule, error)

List retrieves ACL entries.

Example

Code:

ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
// List the default object ACLs for my-bucket.
aclRules, err := client.Bucket("my-bucket").DefaultObjectACL().List(ctx)
if err != nil {
    // TODO: handle error.
}
fmt.Println(aclRules)

func (*ACLHandle) Set

func (a *ACLHandle) Set(ctx context.Context, entity ACLEntity, role ACLRole) error

Set sets the permission level for the given entity.

Example

Code:

ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
// Let any authenticated user read my-bucket/my-object.
obj := client.Bucket("my-bucket").Object("my-object")
if err := obj.ACL().Set(ctx, storage.AllAuthenticatedUsers, storage.RoleReader); err != nil {
    // TODO: handle error.
}

type ACLRole

ACLRole is the level of access to grant.

type ACLRole string
const (
    RoleOwner  ACLRole = "OWNER"
    RoleReader ACLRole = "READER"
    RoleWriter ACLRole = "WRITER"
)

type ACLRule

ACLRule represents a grant for a role to an entity (user, group or team) for a Google Cloud Storage object or bucket.

type ACLRule struct {
    Entity ACLEntity
    Role   ACLRole
}

type BucketAttrs

BucketAttrs represents the metadata for a Google Cloud Storage bucket.

type BucketAttrs struct {
    // Name is the name of the bucket.
    Name string

    // ACL is the list of access control rules on the bucket.
    ACL []ACLRule

    // DefaultObjectACL is the list of access controls to
    // apply to new objects when no object ACL is provided.
    DefaultObjectACL []ACLRule

    // Location is the location of the bucket. It defaults to "US".
    Location string

    // MetaGeneration is the metadata generation of the bucket.
    MetaGeneration int64

    // StorageClass is the default storage class of the bucket. This defines
    // how objects in the bucket are stored and determines the SLA
    // and the cost of storage. Typical values are "MULTI_REGIONAL",
    // "REGIONAL", "NEARLINE", "COLDLINE", "STANDARD" and
    // "DURABLE_REDUCED_AVAILABILITY". Defaults to "STANDARD", which
    // is equivalent to "MULTI_REGIONAL" or "REGIONAL" depending on
    // the bucket's location settings.
    StorageClass string

    // Created is the creation time of the bucket.
    Created time.Time

    // VersioningEnabled reports whether this bucket has versioning enabled.
    // This field is read-only.
    VersioningEnabled bool

    // Labels are the bucket's labels.
    Labels map[string]string
}

type BucketAttrsToUpdate

type BucketAttrsToUpdate struct {
    // VersioningEnabled, if set, updates whether the bucket uses versioning.
    VersioningEnabled optional.Bool
    // contains filtered or unexported fields
}

func (*BucketAttrsToUpdate) DeleteLabel

func (ua *BucketAttrsToUpdate) DeleteLabel(name string)

DeleteLabel causes a label to be deleted when ua is used in a call to Bucket.Update.

func (*BucketAttrsToUpdate) SetLabel

func (ua *BucketAttrsToUpdate) SetLabel(name, value string)

SetLabel causes a label to be added or modified when ua is used in a call to Bucket.Update.

type BucketConditions

BucketConditions constrain bucket methods to act on specific metagenerations.

The zero value is an empty set of constraints.

type BucketConditions struct {
    // MetagenerationMatch specifies that the bucket must have the given
    // metageneration for the operation to occur.
    // If MetagenerationMatch is zero, it has no effect.
    MetagenerationMatch int64

    // MetagenerationNotMatch specifies that the bucket must not have the given
    // metageneration for the operation to occur.
    // If MetagenerationNotMatch is zero, it has no effect.
    MetagenerationNotMatch int64
}

type BucketHandle

BucketHandle provides operations on a Google Cloud Storage bucket. Use Client.Bucket to get a handle.

type BucketHandle struct {
    // contains filtered or unexported fields
}

func (*BucketHandle) ACL

func (b *BucketHandle) ACL() *ACLHandle

ACL returns an ACLHandle, which provides access to the bucket's access control list. This controls who can list, create or overwrite the objects in a bucket. This call does not perform any network operations.

func (*BucketHandle) Attrs

func (b *BucketHandle) Attrs(ctx context.Context) (*BucketAttrs, error)

Attrs returns the metadata for the bucket.

Example

Code:

ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
attrs, err := client.Bucket("my-bucket").Attrs(ctx)
if err != nil {
    // TODO: handle error.
}
fmt.Println(attrs)

func (*BucketHandle) Create

func (b *BucketHandle) Create(ctx context.Context, projectID string, attrs *BucketAttrs) error

Create creates the Bucket in the project. If attrs is nil the API defaults will be used.

Example

Code:

ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
if err := client.Bucket("my-bucket").Create(ctx, "my-project", nil); err != nil {
    // TODO: handle error.
}

func (*BucketHandle) DefaultObjectACL

func (b *BucketHandle) DefaultObjectACL() *ACLHandle

DefaultObjectACL returns an ACLHandle, which provides access to the bucket's default object ACLs. These ACLs are applied to newly created objects in this bucket that do not have a defined ACL. This call does not perform any network operations.

func (*BucketHandle) Delete

func (b *BucketHandle) Delete(ctx context.Context) error

Delete deletes the Bucket.

Example

Code:

ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
if err := client.Bucket("my-bucket").Delete(ctx); err != nil {
    // TODO: handle error.
}

func (*BucketHandle) IAM

func (b *BucketHandle) IAM() *iam.Handle

IAM provides access to IAM access control for the bucket.
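
A typical use is a read-modify-write of the bucket's policy; a sketch, assuming the Policy, Add and SetPolicy methods of the cloud.google.com/go/iam package, with a placeholder member and the generic Viewer role:

```go
ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
// Fetch the bucket's current IAM policy.
policy, err := client.Bucket("my-bucket").IAM().Policy(ctx)
if err != nil {
    // TODO: handle error.
}
// Grant the viewer role to a (placeholder) user, then write the policy back.
policy.Add("user:alice@example.com", iam.Viewer)
if err := client.Bucket("my-bucket").IAM().SetPolicy(ctx, policy); err != nil {
    // TODO: handle error.
}
```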

func (*BucketHandle) If

func (b *BucketHandle) If(conds BucketConditions) *BucketHandle

If returns a new BucketHandle that applies a set of preconditions. Preconditions already set on the BucketHandle are ignored. Operations on the new handle will only occur if the preconditions are satisfied. The only valid preconditions for buckets are MetagenerationMatch and MetagenerationNotMatch.

func (*BucketHandle) Object

func (b *BucketHandle) Object(name string) *ObjectHandle

Object returns an ObjectHandle, which provides operations on the named object. This call does not perform any network operations.

name must consist entirely of valid UTF-8-encoded runes. The full specification for valid object names can be found at:

https://cloud.google.com/storage/docs/bucket-naming

func (*BucketHandle) Objects

func (b *BucketHandle) Objects(ctx context.Context, q *Query) *ObjectIterator

Objects returns an iterator over the objects in the bucket that match the Query q. If q is nil, no filtering is done.

Example

Code:

ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
it := client.Bucket("my-bucket").Objects(ctx, nil)
_ = it // TODO: iterate using Next or iterator.Pager.
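
Iteration follows the same pattern as BucketIterator.Next below; a sketch, assuming the google.golang.org/api/iterator package is imported:

```go
ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
it := client.Bucket("my-bucket").Objects(ctx, nil)
for {
    objAttrs, err := it.Next()
    if err == iterator.Done {
        // No more objects in the bucket.
        break
    }
    if err != nil {
        // TODO: Handle error.
    }
    fmt.Println(objAttrs.Name)
}
```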

func (*BucketHandle) Update

func (b *BucketHandle) Update(ctx context.Context, uattrs BucketAttrsToUpdate) (*BucketAttrs, error)

Update updates a bucket's attributes.

Example

Code:

ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
// Enable versioning in the bucket, regardless of its previous value.
attrs, err := client.Bucket("my-bucket").Update(ctx,
    storage.BucketAttrsToUpdate{VersioningEnabled: true})
if err != nil {
    // TODO: handle error.
}
fmt.Println(attrs)

Example (ReadModifyWrite)

If your update is based on the bucket's previous attributes, match the metageneration number to make sure the bucket hasn't changed since you read it.

Code:

ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
b := client.Bucket("my-bucket")
attrs, err := b.Attrs(ctx)
if err != nil {
    // TODO: handle error.
}
var au storage.BucketAttrsToUpdate
au.SetLabel("lab", attrs.Labels["lab"]+"-more")
if attrs.Labels["delete-me"] == "yes" {
    au.DeleteLabel("delete-me")
}
attrs, err = b.
    If(storage.BucketConditions{MetagenerationMatch: attrs.MetaGeneration}).
    Update(ctx, au)
if err != nil {
    // TODO: handle error.
}
fmt.Println(attrs)

type BucketIterator

A BucketIterator is an iterator over BucketAttrs.

type BucketIterator struct {
    // Prefix restricts the iterator to buckets whose names begin with it.
    Prefix string
    // contains filtered or unexported fields
}

func (*BucketIterator) Next

func (it *BucketIterator) Next() (*BucketAttrs, error)

Next returns the next result. Its second return value is iterator.Done if there are no more results. Once Next returns iterator.Done, all subsequent calls will return iterator.Done.

Example

Code:

ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
it := client.Buckets(ctx, "my-project")
for {
    bucketAttrs, err := it.Next()
    if err == iterator.Done {
        break
    }
    if err != nil {
        // TODO: Handle error.
    }
    fmt.Println(bucketAttrs)
}

func (*BucketIterator) PageInfo

func (it *BucketIterator) PageInfo() *iterator.PageInfo

PageInfo supports pagination. See the google.golang.org/api/iterator package for details.
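
For example, a single page of results can be fetched with iterator.NewPager, which accepts any iterator exposing PageInfo (a sketch; the page size of 10 is arbitrary):

```go
ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
it := client.Buckets(ctx, "my-project")
// Fetch up to 10 buckets; pass "" to start from the first page.
pager := iterator.NewPager(it, 10, "")
var attrs []*storage.BucketAttrs
nextPageToken, err := pager.NextPage(&attrs)
if err != nil {
    // TODO: handle error.
}
// nextPageToken can be passed to a later NewPager call to resume.
fmt.Println(attrs, nextPageToken)
```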

type Client

Client is a client for interacting with Google Cloud Storage.

Clients should be reused instead of created as needed. The methods of Client are safe for concurrent use by multiple goroutines.

type Client struct {
    // contains filtered or unexported fields
}

func NewClient

func NewClient(ctx context.Context, opts ...option.ClientOption) (*Client, error)

NewClient creates a new Google Cloud Storage client. The default scope is ScopeFullControl. To use a different scope, like ScopeReadOnly, use option.WithScopes.

Example

Code:

ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
// Use the client.

// Close the client when finished.
if err := client.Close(); err != nil {
    // TODO: handle error.
}

Example (Auth)

Code:

ctx := context.Background()
// Use Google Application Default Credentials to authorize and authenticate the client.
// More information about Application Default Credentials and how to enable is at
// https://developers.google.com/identity/protocols/application-default-credentials.
client, err := storage.NewClient(ctx)
if err != nil {
    log.Fatal(err)
}

// Use the client.

// Close the client when finished.
if err := client.Close(); err != nil {
    log.Fatal(err)
}

func (*Client) Bucket

func (c *Client) Bucket(name string) *BucketHandle

Bucket returns a BucketHandle, which provides operations on the named bucket. This call does not perform any network operations.

The supplied name must contain only lowercase letters, numbers, dashes, underscores, and dots. The full specification for valid bucket names can be found at:

https://cloud.google.com/storage/docs/bucket-naming

func (*Client) Buckets

func (c *Client) Buckets(ctx context.Context, projectID string) *BucketIterator

Buckets returns an iterator over the buckets in the project. You may optionally set the iterator's Prefix field to restrict the list to buckets whose names begin with the prefix. By default, all buckets in the project are returned.

Example

Code:

ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
it := client.Buckets(ctx, "my-project")
_ = it // TODO: iterate using Next or iterator.Pager.

func (*Client) Close

func (c *Client) Close() error

Close closes the Client.

Close need not be called at program exit.

type Composer

A Composer composes source objects into a destination object.

type Composer struct {
    // ObjectAttrs are optional attributes to set on the destination object.
    // Any attributes must be initialized before any calls on the Composer. Nil
    // or zero-valued attributes are ignored.
    ObjectAttrs
    // contains filtered or unexported fields
}

func (*Composer) Run

func (c *Composer) Run(ctx context.Context) (*ObjectAttrs, error)

Run performs the compose operation.

Example

Code:

ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
bkt := client.Bucket("bucketname")
src1 := bkt.Object("o1")
src2 := bkt.Object("o2")
dst := bkt.Object("o3")
// Compose and modify metadata.
c := dst.ComposerFrom(src1, src2)
c.ContentType = "text/plain"
attrs, err := c.Run(ctx)
if err != nil {
    // TODO: Handle error.
}
fmt.Println(attrs)
// Just compose.
attrs, err = dst.ComposerFrom(src1, src2).Run(ctx)
if err != nil {
    // TODO: Handle error.
}
fmt.Println(attrs)

type Conditions

Conditions constrain methods to act on specific generations of objects.

The zero value is an empty set of constraints. Not all conditions or combinations of conditions are applicable to all methods. See https://cloud.google.com/storage/docs/generations-preconditions for details on how these operate.

type Conditions struct {

    // GenerationMatch specifies that the object must have the given generation
    // for the operation to occur.
    // If GenerationMatch is zero, it has no effect.
    // Use DoesNotExist to specify that the object does not exist in the bucket.
    GenerationMatch int64

    // GenerationNotMatch specifies that the object must not have the given
    // generation for the operation to occur.
    // If GenerationNotMatch is zero, it has no effect.
    GenerationNotMatch int64

    // DoesNotExist specifies that the object must not exist in the bucket for
    // the operation to occur.
    // If DoesNotExist is false, it has no effect.
    DoesNotExist bool

    // MetagenerationMatch specifies that the object must have the given
    // metageneration for the operation to occur.
    // If MetagenerationMatch is zero, it has no effect.
    MetagenerationMatch int64

    // MetagenerationNotMatch specifies that the object must not have the given
    // metageneration for the operation to occur.
    // If MetagenerationNotMatch is zero, it has no effect.
    MetagenerationNotMatch int64
}
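
For instance, DoesNotExist can guard a write so that it succeeds only if the object is not already present (a sketch; the bucket and object names are placeholders):

```go
ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
obj := client.Bucket("my-bucket").Object("my-object")
// The write fails with a precondition error if my-object already exists.
w := obj.If(storage.Conditions{DoesNotExist: true}).NewWriter(ctx)
if _, err := w.Write([]byte("written at most once")); err != nil {
    // TODO: handle error.
}
if err := w.Close(); err != nil {
    // TODO: handle error.
}
```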

type Copier

A Copier copies a source object to a destination.

type Copier struct {
    // ObjectAttrs are optional attributes to set on the destination object.
    // Any attributes must be initialized before any calls on the Copier. Nil
    // or zero-valued attributes are ignored.
    ObjectAttrs

    // RewriteToken can be set before calling Run to resume a copy
    // operation. After Run returns a non-nil error, RewriteToken will
    // have been updated to contain the value needed to resume the copy.
    RewriteToken string

    // ProgressFunc can be used to monitor the progress of a multi-RPC copy
    // operation. If ProgressFunc is not nil and CopyFrom requires multiple
    // calls to the underlying service (see
    // https://cloud.google.com/storage/docs/json_api/v1/objects/rewrite), then
    // ProgressFunc will be invoked after each call with the number of bytes of
    // content copied so far and the total size in bytes of the source object.
    //
    // ProgressFunc is intended to make upload progress available to the
    // application. For example, the implementation of ProgressFunc may update
    // a progress bar in the application's UI, or log the result of
    // float64(copiedBytes)/float64(totalBytes).
    //
    // ProgressFunc should return quickly without blocking.
    ProgressFunc func(copiedBytes, totalBytes uint64)
    // contains filtered or unexported fields
}

func (*Copier) Run

func (c *Copier) Run(ctx context.Context) (*ObjectAttrs, error)

Run performs the copy.

Example

Code:

ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
src := client.Bucket("bucketname").Object("file1")
dst := client.Bucket("another-bucketname").Object("file2")

// Copy content and modify metadata.
copier := dst.CopierFrom(src)
copier.ContentType = "text/plain"
attrs, err := copier.Run(ctx)
if err != nil {
    // TODO: Handle error, possibly resuming with copier.RewriteToken.
}
fmt.Println(attrs)

// Just copy content.
attrs, err = dst.CopierFrom(src).Run(ctx)
if err != nil {
    // TODO: Handle error. No way to resume.
}
fmt.Println(attrs)

Example (Progress)

Code:

// Display progress across multiple rewrite RPCs.
ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
src := client.Bucket("bucketname").Object("file1")
dst := client.Bucket("another-bucketname").Object("file2")

copier := dst.CopierFrom(src)
copier.ProgressFunc = func(copiedBytes, totalBytes uint64) {
    log.Printf("copy %.1f%% done", float64(copiedBytes)/float64(totalBytes)*100)
}
if _, err := copier.Run(ctx); err != nil {
    // TODO: handle error.
}

type ObjectAttrs

ObjectAttrs represents the metadata for a Google Cloud Storage (GCS) object.

type ObjectAttrs struct {
    // Bucket is the name of the bucket containing this GCS object.
    // This field is read-only.
    Bucket string

    // Name is the name of the object within the bucket.
    // This field is read-only.
    Name string

    // ContentType is the MIME type of the object's content.
    ContentType string

    // ContentLanguage is the content language of the object's content.
    ContentLanguage string

    // CacheControl is the Cache-Control header to be sent in the response
    // headers when serving the object data.
    CacheControl string

    // ACL is the list of access control rules for the object.
    ACL []ACLRule

    // Owner is the owner of the object. This field is read-only.
    //
    // If non-empty, it is in the form of "user-<userId>".
    Owner string

    // Size is the length of the object's content. This field is read-only.
    Size int64

    // ContentEncoding is the encoding of the object's content.
    ContentEncoding string

    // ContentDisposition is the optional Content-Disposition header of the object
    // sent in the response headers.
    ContentDisposition string

    // MD5 is the MD5 hash of the object's content. This field is read-only.
    MD5 []byte

    // CRC32C is the CRC32 checksum of the object's content using
    // the Castagnoli93 polynomial. This field is read-only.
    CRC32C uint32

    // MediaLink is a URL to the object's content. This field is read-only.
    MediaLink string

    // Metadata represents user-provided metadata, in key/value pairs.
    // It can be nil if no metadata is provided.
    Metadata map[string]string

    // Generation is the generation number of the object's content.
    // This field is read-only.
    Generation int64

    // Metageneration is the version of the metadata for this
    // object at this generation. This field is used for preconditions
    // and for detecting changes in metadata. A metageneration number
    // is only meaningful in the context of a particular generation
    // of a particular object. This field is read-only.
    Metageneration int64

    // StorageClass is the storage class of the object.
    // This value defines how objects in the bucket are stored and
    // determines the SLA and the cost of storage. Typical values are
    // "MULTI_REGIONAL", "REGIONAL", "NEARLINE", "COLDLINE", "STANDARD"
    // and "DURABLE_REDUCED_AVAILABILITY".
    // It defaults to "STANDARD", which is equivalent to "MULTI_REGIONAL"
    // or "REGIONAL" depending on the bucket's location settings.
    StorageClass string

    // Created is the time the object was created. This field is read-only.
    Created time.Time

    // Deleted is the time the object was deleted.
    // If not deleted, it is the zero value. This field is read-only.
    Deleted time.Time

    // Updated is the creation or modification time of the object.
    // For buckets with versioning enabled, changing an object's
    // metadata does not change this property. This field is read-only.
    Updated time.Time

    // CustomerKeySHA256 is the base64-encoded SHA-256 hash of the
    // customer-supplied encryption key for the object. It is empty if there is
    // no customer-supplied encryption key.
    // See https://cloud.google.com/storage/docs/encryption for more about
    // encryption in Google Cloud Storage.
    CustomerKeySHA256 string

    // Prefix is set only for ObjectAttrs which represent synthetic "directory
    // entries" when iterating over buckets using Query.Delimiter. See
    // ObjectIterator.Next. When set, no other fields in ObjectAttrs will be
    // populated.
    Prefix string
}

type ObjectAttrsToUpdate

ObjectAttrsToUpdate is used to update the attributes of an object. Only fields set to non-nil values will be updated. Set a field to its zero value to delete it.

For example, to change ContentType and delete ContentEncoding and Metadata, use

ObjectAttrsToUpdate{
    ContentType: "text/html",
    ContentEncoding: "",
    Metadata: map[string]string{},
}

type ObjectAttrsToUpdate struct {
    ContentType        optional.String
    ContentLanguage    optional.String
    ContentEncoding    optional.String
    ContentDisposition optional.String
    CacheControl       optional.String
    Metadata           map[string]string // set to map[string]string{} to delete
    ACL                []ACLRule
}

type ObjectHandle

ObjectHandle provides operations on an object in a Google Cloud Storage bucket. Use BucketHandle.Object to get a handle.

type ObjectHandle struct {
    // contains filtered or unexported fields
}

func (*ObjectHandle) ACL

func (o *ObjectHandle) ACL() *ACLHandle

ACL provides access to the object's access control list. This controls who can read and write this object. This call does not perform any network operations.

func (*ObjectHandle) Attrs

func (o *ObjectHandle) Attrs(ctx context.Context) (*ObjectAttrs, error)

Attrs returns meta information about the object. ErrObjectNotExist will be returned if the object is not found.

Example

Code:

ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
objAttrs, err := client.Bucket("my-bucket").Object("my-object").Attrs(ctx)
if err != nil {
    // TODO: handle error.
}
fmt.Println(objAttrs)

Example (WithConditions)

Code:

ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
obj := client.Bucket("my-bucket").Object("my-object")
// Read the object.
objAttrs1, err := obj.Attrs(ctx)
if err != nil {
    // TODO: handle error.
}
// Do something else for a while.
time.Sleep(5 * time.Minute)
// Now read the same contents, even if the object has been written since the last read.
objAttrs2, err := obj.Generation(objAttrs1.Generation).Attrs(ctx)
if err != nil {
    // TODO: handle error.
}
fmt.Println(objAttrs1, objAttrs2)

func (*ObjectHandle) ComposerFrom

func (dst *ObjectHandle) ComposerFrom(srcs ...*ObjectHandle) *Composer

ComposerFrom creates a Composer that can compose srcs into dst. You can immediately call Run on the returned Composer, or you can configure it first.

The encryption key for the destination object will be used to decrypt all source objects and encrypt the destination object. It is an error to specify an encryption key for any of the source objects.
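As a sketch, composing two source objects into a destination object in the same bucket (bucket and object names are placeholders):

```go
ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
bkt := client.Bucket("my-bucket")
dst := bkt.Object("combined")
// Compose part1 and part2 into combined. Run performs the network call;
// the Composer's ObjectAttrs can be set before Run to configure the result.
attrs, err := dst.ComposerFrom(bkt.Object("part1"), bkt.Object("part2")).Run(ctx)
if err != nil {
    // TODO: handle error.
}
fmt.Println("composed object size:", attrs.Size)
```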

func (*ObjectHandle) CopierFrom

func (dst *ObjectHandle) CopierFrom(src *ObjectHandle) *Copier

CopierFrom creates a Copier that can copy src to dst. You can immediately call Run on the returned Copier, or you can configure it first.

Example (RotateEncryptionKeys)

Code:

// To rotate the encryption key on an object, copy it onto itself.
ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
obj := client.Bucket("bucketname").Object("obj")
// Assume obj is encrypted with key1, and we want to change to key2.
_, err = obj.Key(key2).CopierFrom(obj.Key(key1)).Run(ctx)
if err != nil {
    // TODO: handle error.
}

func (*ObjectHandle) Delete

func (o *ObjectHandle) Delete(ctx context.Context) error

Delete deletes the single specified object.

Example

Code:

ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
// To delete multiple objects in a bucket, list them with an
// ObjectIterator, then Delete them.

// If you are using this package on the App Engine Flex runtime,
// you can init a bucket client with your app's default bucket name.
// See http://godoc.org/google.golang.org/appengine/file#DefaultBucketName.
bucket := client.Bucket("my-bucket")
it := bucket.Objects(ctx, nil)
for {
    objAttrs, err := it.Next()
    if err == iterator.Done {
        break
    }
    if err != nil {
        // TODO: Handle error.
    }
    if err := bucket.Object(objAttrs.Name).Delete(ctx); err != nil {
        // TODO: Handle error.
    }
}
fmt.Println("deleted all objects in the bucket.")

func (*ObjectHandle) Generation

func (o *ObjectHandle) Generation(gen int64) *ObjectHandle

Generation returns a new ObjectHandle that operates on a specific generation of the object. By default, the handle operates on the latest generation. Not all operations work when given a specific generation; check the API endpoints at https://cloud.google.com/storage/docs/json_api/ for details.

Example

Code:

// Read an object's contents from generation gen, regardless of the
// current generation of the object.
ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
obj := client.Bucket("my-bucket").Object("my-object")
rc, err := obj.Generation(gen).NewReader(ctx)
if err != nil {
    // TODO: handle error.
}
defer rc.Close()
if _, err := io.Copy(os.Stdout, rc); err != nil {
    // TODO: handle error.
}

func (*ObjectHandle) If

func (o *ObjectHandle) If(conds Conditions) *ObjectHandle

If returns a new ObjectHandle that applies a set of preconditions. Preconditions already set on the ObjectHandle are ignored. Operations on the new handle will only occur if the preconditions are satisfied. See https://cloud.google.com/storage/docs/generations-preconditions for more details.

Example

Code:

// Read from an object only if the current generation is gen.
ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
obj := client.Bucket("my-bucket").Object("my-object")
rc, err := obj.If(storage.Conditions{GenerationMatch: gen}).NewReader(ctx)
if err != nil {
    // TODO: handle error.
}
defer rc.Close()
if _, err := io.Copy(os.Stdout, rc); err != nil {
    // TODO: handle error.
}

func (*ObjectHandle) Key

func (o *ObjectHandle) Key(encryptionKey []byte) *ObjectHandle

Key returns a new ObjectHandle that uses the supplied encryption key to encrypt and decrypt the object's contents.

The encryption key must be a 32-byte AES-256 key. See https://cloud.google.com/storage/docs/encryption for details.

Example

Code:

ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
obj := client.Bucket("my-bucket").Object("my-object")
// Encrypt the object's contents.
w := obj.Key(secretKey).NewWriter(ctx)
if _, err := w.Write([]byte("top secret")); err != nil {
    // TODO: handle error.
}
if err := w.Close(); err != nil {
    // TODO: handle error.
}

func (*ObjectHandle) NewRangeReader

func (o *ObjectHandle) NewRangeReader(ctx context.Context, offset, length int64) (*Reader, error)

NewRangeReader reads part of an object, reading at most length bytes starting at the given offset. If length is negative, the object is read until the end.

Example

Code:

ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
// Read only the first 64K.
rc, err := client.Bucket("bucketname").Object("filename1").NewRangeReader(ctx, 0, 64*1024)
if err != nil {
    // TODO: handle error.
}
slurp, err := ioutil.ReadAll(rc)
rc.Close()
if err != nil {
    // TODO: handle error.
}
fmt.Println("first 64K of file contents:", slurp)

func (*ObjectHandle) NewReader

func (o *ObjectHandle) NewReader(ctx context.Context) (*Reader, error)

NewReader creates a new Reader to read the contents of the object. ErrObjectNotExist will be returned if the object is not found.

The caller must call Close on the returned Reader when done reading.

Example

Code:

ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
rc, err := client.Bucket("my-bucket").Object("my-object").NewReader(ctx)
if err != nil {
    // TODO: handle error.
}
slurp, err := ioutil.ReadAll(rc)
rc.Close()
if err != nil {
    // TODO: handle error.
}
fmt.Println("file contents:", slurp)

func (*ObjectHandle) NewWriter

func (o *ObjectHandle) NewWriter(ctx context.Context) *Writer

NewWriter returns a storage Writer that writes to the GCS object associated with this ObjectHandle.

A new object will be created unless an object with this name already exists. Otherwise any previous object with the same name will be replaced. The object will not be available (and any previous object will remain) until Close has been called.

Attributes can be set on the object by modifying the returned Writer's ObjectAttrs field before the first call to Write. If no ContentType attribute is specified, the content type will be automatically sniffed using net/http.DetectContentType.

It is the caller's responsibility to call Close when writing is done.

Example

Code:

ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
wc := client.Bucket("bucketname").Object("filename1").NewWriter(ctx)
_ = wc // TODO: Use the Writer.

func (*ObjectHandle) Update

func (o *ObjectHandle) Update(ctx context.Context, uattrs ObjectAttrsToUpdate) (*ObjectAttrs, error)

Update updates an object with the provided attributes. All zero-value attributes are ignored. ErrObjectNotExist will be returned if the object is not found.

Example

Code:

ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
// Change only the content type of the object.
objAttrs, err := client.Bucket("my-bucket").Object("my-object").Update(ctx, storage.ObjectAttrsToUpdate{
    ContentType:        "text/html",
    ContentDisposition: "", // delete ContentDisposition
})
if err != nil {
    // TODO: handle error.
}
fmt.Println(objAttrs)

type ObjectIterator

An ObjectIterator is an iterator over ObjectAttrs.

type ObjectIterator struct {
    // contains filtered or unexported fields
}

func (*ObjectIterator) Next

func (it *ObjectIterator) Next() (*ObjectAttrs, error)

Next returns the next result. Its second return value is iterator.Done if there are no more results. Once Next returns iterator.Done, all subsequent calls will return iterator.Done.

If Query.Delimiter is non-empty, some of the ObjectAttrs returned by Next will have a non-empty Prefix field, and a zero value for all other fields. These represent prefixes.

Example

Code:

ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
it := client.Bucket("my-bucket").Objects(ctx, nil)
for {
    objAttrs, err := it.Next()
    if err == iterator.Done {
        break
    }
    if err != nil {
        // TODO: Handle error.
    }
    fmt.Println(objAttrs)
}

func (*ObjectIterator) PageInfo

func (it *ObjectIterator) PageInfo() *iterator.PageInfo

PageInfo supports pagination. See the google.golang.org/api/iterator package for details.
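A sketch of page-by-page listing using iterator.NewPager from google.golang.org/api/iterator (the page size and bucket name are illustrative):

```go
ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
it := client.Bucket("my-bucket").Objects(ctx, nil)
// Fetch results 50 at a time; pass a previous page's token to resume.
pager := iterator.NewPager(it, 50, "" /* start from the beginning */)
for {
    var objects []*storage.ObjectAttrs
    nextToken, err := pager.NextPage(&objects)
    if err != nil {
        // TODO: handle error.
    }
    for _, o := range objects {
        fmt.Println(o.Name)
    }
    if nextToken == "" {
        break // no more pages
    }
}
```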

type Query

Query represents a query to filter objects from a bucket.

type Query struct {
    // Delimiter returns results in a directory-like fashion.
    // Results will contain only objects whose names, aside from the
    // prefix, do not contain delimiter. Objects whose names,
    // aside from the prefix, contain delimiter will have their name,
    // truncated after the delimiter, returned in prefixes.
    // Duplicate prefixes are omitted.
    // Optional.
    Delimiter string

    // Prefix is the prefix filter to query objects
    // whose names begin with this prefix.
    // Optional.
    Prefix string

    // Versions indicates whether multiple versions of the same
    // object will be included in the results.
    Versions bool
}
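A sketch of a directory-style listing under a prefix, assuming "/"-separated object names (bucket and prefix are placeholders):

```go
ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
// List only objects under "images/", collapsing deeper "subdirectories"
// into synthetic prefix entries.
q := &storage.Query{Prefix: "images/", Delimiter: "/"}
it := client.Bucket("my-bucket").Objects(ctx, q)
for {
    attrs, err := it.Next()
    if err == iterator.Done {
        break
    }
    if err != nil {
        // TODO: handle error.
    }
    if attrs.Prefix != "" {
        fmt.Println("prefix:", attrs.Prefix) // synthetic "directory entry"
    } else {
        fmt.Println("object:", attrs.Name)
    }
}
```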

type Reader

Reader reads a Cloud Storage object. It implements io.Reader.

type Reader struct {
    // contains filtered or unexported fields
}

func (*Reader) Close

func (r *Reader) Close() error

Close closes the Reader. It must be called when done reading.

func (*Reader) ContentType

func (r *Reader) ContentType() string

ContentType returns the content type of the object.

func (*Reader) Read

func (r *Reader) Read(p []byte) (int, error)

func (*Reader) Remain

func (r *Reader) Remain() int64

Remain returns the number of bytes left to read, or -1 if unknown.

func (*Reader) Size

func (r *Reader) Size() int64

Size returns the size of the object in bytes. The returned value is always the same and is not affected by calls to Read or Close.

type SignedURLOptions

SignedURLOptions allows you to restrict the access to the signed URL.

type SignedURLOptions struct {
    // GoogleAccessID represents the authorizer of the signed URL generation.
    // It is typically the Google service account client email address from
    // the Google Developers Console in the form of "xxx@developer.gserviceaccount.com".
    // Required.
    GoogleAccessID string

    // PrivateKey is the Google service account private key. It is obtainable
    // from the Google Developers Console.
    // At https://console.developers.google.com/project/<your-project-id>/apiui/credential,
    // create a service account client ID or reuse one of your existing service account
    // credentials. Click on the "Generate new P12 key" to generate and download
    // a new private key. Once you download the P12 file, use the following command
    // to convert it into a PEM file.
    //
    //    $ openssl pkcs12 -in key.p12 -passin pass:notasecret -out key.pem -nodes
    //
    // Provide the contents of the PEM file as a byte slice.
    // Exactly one of PrivateKey or SignBytes must be non-nil.
    PrivateKey []byte

    // SignBytes is a function for implementing custom signing.
    // If your application is running on Google App Engine, you can use appengine's internal signing function:
    //     ctx := appengine.NewContext(request)
    //     acc, _ := appengine.ServiceAccount(ctx)
    //     url, err := SignedURL("bucket", "object", &SignedURLOptions{
    //     	GoogleAccessID: acc,
    //     	SignBytes: func(b []byte) ([]byte, error) {
    //     		_, signedBytes, err := appengine.SignBytes(ctx, b)
    //     		return signedBytes, err
    //     	},
    //     	// etc.
    //     })
    //
    // Exactly one of PrivateKey or SignBytes must be non-nil.
    SignBytes func([]byte) ([]byte, error)

    // Method is the HTTP method to be used with the signed URL.
    // Signed URLs can be used with GET, HEAD, PUT, and DELETE requests.
    // Required.
    Method string

    // Expires is the expiration time on the signed URL. It must be
    // a datetime in the future.
    // Required.
    Expires time.Time

    // ContentType is the content type header the client must provide
    // to use the generated signed URL.
    // Optional.
    ContentType string

    // Headers is a list of extension headers the client must provide
    // in order to use the generated signed URL.
    // Optional.
    Headers []string

    // MD5 is the base64 encoded MD5 checksum of the file.
    // If provided, the client should provide the exact value on the request
    // header in order to use the signed URL.
    // Optional.
    MD5 string
}
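As a sketch, generating a signed URL with a PEM-encoded service account key (the access ID and key file name are placeholders):

```go
// pemKey holds the contents of the service account's PEM file.
pemKey, err := ioutil.ReadFile("key.pem")
if err != nil {
    // TODO: handle error.
}
url, err := storage.SignedURL("my-bucket", "my-object", &storage.SignedURLOptions{
    GoogleAccessID: "xxx@developer.gserviceaccount.com",
    PrivateKey:     pemKey,
    Method:         "GET",
    Expires:        time.Now().Add(48 * time.Hour),
})
if err != nil {
    // TODO: handle error.
}
// The URL grants read access to the object, without further
// authentication, until it expires.
fmt.Println(url)
```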

type Writer

A Writer writes a Cloud Storage object.

type Writer struct {
    // ObjectAttrs are optional attributes to set on the object. Any attributes
    // must be initialized before the first Write call. Nil or zero-valued
    // attributes are ignored.
    ObjectAttrs

    // SendCRC32C specifies whether to transmit a CRC32C field. It should be
    // set to true in addition to setting the Writer's CRC32C field, because
    // zero is a valid CRC and normally a zero would not be transmitted.
    SendCRC32C bool

    // ChunkSize controls the maximum number of bytes of the object that the
    // Writer will attempt to send to the server in a single request. Objects
    // smaller than the size will be sent in a single request, while larger
    // objects will be split over multiple requests. The size will be rounded up
    // to the nearest multiple of 256K. If zero, chunking will be disabled and
    // the object will be uploaded in a single request.
    //
    // ChunkSize will default to a reasonable value. Any custom configuration
    // must be done before the first Write call.
    ChunkSize int

    // ProgressFunc can be used to monitor the progress of a large write
    // operation. If ProgressFunc is not nil and writing requires multiple
    // calls to the underlying service (see
    // https://cloud.google.com/storage/docs/json_api/v1/how-tos/resumable-upload),
    // then ProgressFunc will be invoked after each call with the number of bytes of
    // content copied so far.
    //
    // ProgressFunc should return quickly without blocking.
    ProgressFunc func(int64)
    // contains filtered or unexported fields
}
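A sketch of configuring ChunkSize and ProgressFunc before the first Write; the chunk size is illustrative, and dataReader stands in for any io.Reader supplying the content:

```go
ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
w := client.Bucket("my-bucket").Object("big-object").NewWriter(ctx)
// Configuration must happen before the first Write call.
w.ChunkSize = 8 * 1024 * 1024 // rounded up to a multiple of 256K
w.ProgressFunc = func(n int64) {
    fmt.Printf("%d bytes copied so far\n", n)
}
if _, err := io.Copy(w, dataReader); err != nil {
    // TODO: handle error.
}
if err := w.Close(); err != nil {
    // TODO: handle error.
}
```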

func (*Writer) Attrs

func (w *Writer) Attrs() *ObjectAttrs

Attrs returns metadata about a successfully-written object. It's only valid to call it after Close returns nil.

func (*Writer) Close

func (w *Writer) Close() error

Close completes the write operation and flushes any buffered data. If Close doesn't return an error, metadata about the written object can be retrieved by calling Attrs.

func (*Writer) CloseWithError

func (w *Writer) CloseWithError(err error) error

CloseWithError aborts the write operation with the provided error. CloseWithError always returns nil.
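A sketch of aborting an upload when the data source fails mid-copy (client, ctx, and dataReader are assumed from earlier examples):

```go
w := client.Bucket("my-bucket").Object("my-object").NewWriter(ctx)
if _, err := io.Copy(w, dataReader); err != nil {
    // Abort the upload: the object is not created, and any
    // previous object with the same name is left untouched.
    w.CloseWithError(err)
    return
}
if err := w.Close(); err != nil {
    // TODO: handle error.
}
```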

func (*Writer) Write

func (w *Writer) Write(p []byte) (n int, err error)

Write appends to w. It implements the io.Writer interface.

Since writes happen asynchronously, Write may return a nil error even though the write failed (or will fail). Always use the error returned from Writer.Close to determine if the upload was successful.

Example

Code:

ctx := context.Background()
client, err := storage.NewClient(ctx)
if err != nil {
    // TODO: handle error.
}
wc := client.Bucket("bucketname").Object("filename1").NewWriter(ctx)
wc.ContentType = "text/plain"
wc.ACL = []storage.ACLRule{{storage.AllUsers, storage.RoleReader}}
if _, err := wc.Write([]byte("hello world")); err != nil {
    // TODO: handle error.
}
if err := wc.Close(); err != nil {
    // TODO: handle error.
}
fmt.Println("updated object:", wc.Attrs())