Package spanner
Constants
const (
	// Scope is the scope for Cloud Spanner Data API.
	Scope = "https://www.googleapis.com/auth/spanner.data"

	// AdminScope is the scope for Cloud Spanner Admin APIs.
	AdminScope = "https://www.googleapis.com/auth/spanner.admin"
)
func ErrCode ¶
func ErrCode(err error) codes.Code
ErrCode extracts the canonical error code from a Go error.
func ErrDesc ¶
func ErrDesc(err error) string
ErrDesc extracts the Cloud Spanner error description from a Go error.
type ApplyOption ¶
An ApplyOption is an optional argument to Apply.
type ApplyOption func(*applyOption)
func ApplyAtLeastOnce ¶
func ApplyAtLeastOnce() ApplyOption
ApplyAtLeastOnce returns an ApplyOption that removes replay protection.
With this option, Apply may attempt to apply mutations more than once; if the mutations are not idempotent, this may lead to a failure being reported when the mutation was applied more than once. For example, an insert may fail with ALREADY_EXISTS even though the row did not exist before Apply was called. For this reason, most users of the library will prefer not to use this option. However, ApplyAtLeastOnce requires only a single RPC, whereas Apply's default replay protection may require an additional RPC. So this option may be appropriate for latency sensitive and/or high throughput blind writing.
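A minimal sketch of a blind write with at-least-once semantics, assuming an existing *spanner.Client named client and a context.Context named ctx; the table and column names are illustrative:

```go
// InsertOrUpdate is idempotent, which makes it a safer pairing with
// at-least-once semantics than a plain Insert.
m := spanner.InsertOrUpdate("Accounts",
	[]string{"user", "balance"},
	[]interface{}{"alice", int64(100)})
// ApplyAtLeastOnce skips replay protection, saving an RPC at the cost
// of possibly applying the mutation more than once.
if _, err := client.Apply(ctx, []*spanner.Mutation{m}, spanner.ApplyAtLeastOnce()); err != nil {
	// TODO: handle error.
}
```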
type Client ¶
Client is a client for reading and writing data to a Cloud Spanner database. A client is safe to use concurrently, except for its Close method.
type Client struct {
// contains filtered or unexported fields
}
func NewClient ¶
func NewClient(ctx context.Context, database string, opts ...option.ClientOption) (*Client, error)
NewClient creates a client to a database. A valid database name has the form projects/PROJECT_ID/instances/INSTANCE_ID/databases/DATABASE_ID. It uses a default configuration.
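For example, assuming placeholder project, instance and database IDs:

```go
ctx := context.Background()
// The database name below is a placeholder; substitute your own IDs.
client, err := spanner.NewClient(ctx,
	"projects/my-project/instances/my-instance/databases/my-db")
if err != nil {
	// TODO: handle error.
}
defer client.Close()
```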
func NewClientWithConfig ¶
func NewClientWithConfig(ctx context.Context, database string, config ClientConfig, opts ...option.ClientOption) (*Client, error)
NewClientWithConfig creates a client to a database. A valid database name has the form projects/PROJECT_ID/instances/INSTANCE_ID/databases/DATABASE_ID.
func (*Client) Apply ¶
func (c *Client) Apply(ctx context.Context, ms []*Mutation, opts ...ApplyOption) (time.Time, error)
Apply applies a list of mutations atomically to the database.
func (*Client) Close ¶
func (c *Client) Close()
Close closes the client.
func (*Client) ReadOnlyTransaction ¶
func (c *Client) ReadOnlyTransaction() *ReadOnlyTransaction
ReadOnlyTransaction returns a ReadOnlyTransaction that can be used for multiple reads from the database. When the ReadOnlyTransaction is no longer needed, you must call Close to release its resources on the server.

ReadOnlyTransaction will use a strong TimestampBound by default. Use ReadOnlyTransaction.WithTimestampBound to specify a different TimestampBound. A non-strong bound can be used to reduce latency or to "time-travel" to prior versions of the database; see the documentation of TimestampBound for details.
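A sketch of multiple reads against one snapshot, assuming an existing client and ctx and the UserEvents table described under KeyRange; iterator is the google.golang.org/api/iterator package:

```go
ro := client.ReadOnlyTransaction()
defer ro.Close() // release server-side resources when done

// All reads and queries on ro observe the same snapshot.
iter := ro.Query(ctx, spanner.Statement{SQL: "SELECT UserName FROM UserEvents"})
defer iter.Stop()
for {
	row, err := iter.Next()
	if err == iterator.Done {
		break
	}
	if err != nil {
		// TODO: handle error.
	}
	var name string
	if err := row.Column(0, &name); err != nil {
		// TODO: handle error.
	}
}
```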
func (*Client) ReadWriteTransaction ¶
func (c *Client) ReadWriteTransaction(ctx context.Context, f func(context.Context, *ReadWriteTransaction) error) (time.Time, error)
ReadWriteTransaction executes a read-write transaction, with retries as necessary.
The function f will be called one or more times. It must not maintain any state between calls.
If the transaction cannot be committed or if f returns an IsAborted error, ReadWriteTransaction will call f again. It will continue to call f until the transaction can be committed or the Context times out or is cancelled. If f returns an error other than IsAborted, ReadWriteTransaction will abort the transaction and return the error.
To limit the number of retries, set a deadline on the Context rather than using a fixed limit on the number of attempts. ReadWriteTransaction will retry as needed until that deadline is met.
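A sketch of an atomic read-modify-write, assuming an existing client and ctx; the Accounts table and its columns are illustrative. Note that the function keeps no state between calls, as required:

```go
_, err := client.ReadWriteTransaction(ctx, func(ctx context.Context, txn *spanner.ReadWriteTransaction) error {
	// Read inside the transaction; the read acquires a lock.
	row, err := txn.ReadRow(ctx, "Accounts", spanner.Key{"alice"}, []string{"balance"})
	if err != nil {
		return err
	}
	var balance int64
	if err := row.Column(0, &balance); err != nil {
		return err
	}
	// Buffer a write; it is applied atomically at commit time.
	return txn.BufferWrite([]*spanner.Mutation{
		spanner.Update("Accounts", []string{"user", "balance"},
			[]interface{}{"alice", balance + 10}),
	})
})
if err != nil {
	// TODO: handle error.
}
```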
func (*Client) Single ¶
func (c *Client) Single() *ReadOnlyTransaction
Single provides a read-only snapshot transaction optimized for the case where only a single read or query is needed. This is more efficient than using ReadOnlyTransaction() for a single read or query.
Single will use a strong TimestampBound by default. Use ReadOnlyTransaction.WithTimestampBound to specify a different TimestampBound. A non-strong bound can be used to reduce latency or to "time-travel" to prior versions of the database; see the documentation of TimestampBound for details.
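A sketch of a single-read lookup, assuming an existing client and ctx; the Users table is illustrative:

```go
// The transaction returned by Single is valid for exactly one read or
// query, so no explicit Close is needed here.
row, err := client.Single().ReadRow(ctx, "Users", spanner.Key{"alice"}, []string{"email"})
if err != nil {
	// TODO: handle error.
}
var email string
if err := row.Column(0, &email); err != nil {
	// TODO: handle error.
}
```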
type ClientConfig ¶
ClientConfig has configurations for the client.
type ClientConfig struct {
	// NumChannels is the number of gRPC channels.
	// If zero, a default value is used.
	NumChannels int

	// SessionPoolConfig is the configuration for the session pool.
	SessionPoolConfig
	// contains filtered or unexported fields
}
type Error ¶
Error is the structured error returned by Cloud Spanner client.
type Error struct {
	// Code is the canonical error code describing the nature of a
	// particular error.
	Code codes.Code

	// Desc explains the error in more detail.
	Desc string
	// contains filtered or unexported fields
}
func (*Error) Error ¶
func (e *Error) Error() string
Error implements error.Error.
type GenericColumnValue ¶
GenericColumnValue represents the generic encoded value and type of the column. See google.spanner.v1.ResultSet proto for details. This can be useful for proxying query results when the result types are not known in advance.
type GenericColumnValue struct {
	Type  *sppb.Type
	Value *proto3.Value
}
func (GenericColumnValue) Decode ¶
func (v GenericColumnValue) Decode(ptr interface{}) error
Decode decodes a GenericColumnValue. The ptr argument should be a pointer to a Go value that can accept v.
type Key ¶
A Key can be either a Cloud Spanner row's primary key or a secondary index key. It is essentially an interface{} array, which represents a set of Cloud Spanner columns. A Key type has the following usages:
- Used as a primary key, which uniquely identifies a Cloud Spanner row.
- Used as a secondary index key, which maps to a set of Cloud Spanner rows indexed under it.
- Used as an endpoint of a primary key or secondary index range; see also the KeyRange type.
Rows identified by a Key are the output of a read operation or the target of a delete operation in a mutation. Note that although the Insert/Update/InsertOrUpdate/Replace mutation types don't take a primary key explicitly, the column list provided must contain enough columns to comprise a primary key.
Keys are easy to construct. For example, suppose you have a table with a primary key of username and product ID. To make a key for this table:
key := spanner.Key{"john", 16}
See the description of Row and Mutation types for how Go types are mapped to Cloud Spanner types. For convenience, Key type supports a wide range of Go types:
- int, int8, int16, int32, int64, and NullInt64 are mapped to Cloud Spanner's INT64 type.
- uint8, uint16 and uint32 are also mapped to Cloud Spanner's INT64 type.
- float32, float64 and NullFloat64 are mapped to Cloud Spanner's FLOAT64 type.
- bool and NullBool are mapped to Cloud Spanner's BOOL type.
- []byte is mapped to Cloud Spanner's BYTES type.
- string and NullString are mapped to Cloud Spanner's STRING type.
- time.Time and NullTime are mapped to Cloud Spanner's TIMESTAMP type.
- civil.Date and NullDate are mapped to Cloud Spanner's DATE type.
type Key []interface{}
func (Key) AsPrefix ¶
func (k Key) AsPrefix() KeyRange
AsPrefix returns a KeyRange for all keys where k is the prefix.
func (Key) String ¶
func (key Key) String() string
String implements fmt.Stringer for Key. For string, []byte and NullString, it prints the uninterpreted bytes of their contents, leaving the caller the opportunity to escape the output.
type KeyRange ¶
A KeyRange represents a range of rows in a table or index.
A range has a Start key and an End key. IncludeStart and IncludeEnd indicate whether the Start and End keys are included in the range.
For example, consider the following table definition:
CREATE TABLE UserEvents (
	UserName STRING(MAX),
	EventDate STRING(10),
) PRIMARY KEY(UserName, EventDate);
The following keys name rows in this table:
spanner.Key{"Bob", "2014-09-23"}
spanner.Key{"Alfred", "2015-06-12"}
Since the UserEvents table's PRIMARY KEY clause names two columns, each UserEvents key has two elements; the first is the UserName, and the second is the EventDate.
Key ranges with multiple components are interpreted lexicographically by component using the table or index key's declared sort order. For example, the following range returns all events for user "Bob" that occurred in the year 2015:
spanner.KeyRange{
	Start: spanner.Key{"Bob", "2015-01-01"},
	End:   spanner.Key{"Bob", "2015-12-31"},
	Kind:  ClosedClosed,
}
Start and end keys can omit trailing key components. This affects the inclusion and exclusion of rows that exactly match the provided key components: if IncludeStart is true, then rows that exactly match the provided components of the Start key are included; if IncludeStart is false then rows that exactly match are not included. IncludeEnd and End key behave in the same fashion.
For example, the following range includes all events for "Bob" that occurred during and after the year 2000:
spanner.KeyRange{
	Start: spanner.Key{"Bob", "2000-01-01"},
	End:   spanner.Key{"Bob"},
	Kind:  ClosedClosed,
}
The next example retrieves all events for "Bob":
spanner.Key{"Bob"}.AsPrefix()
To retrieve events before the year 2000:
spanner.KeyRange{
	Start: spanner.Key{"Bob"},
	End:   spanner.Key{"Bob", "2000-01-01"},
	Kind:  ClosedOpen,
}
Although we specified a Kind for this KeyRange, we didn't need to, because the default is ClosedOpen. In later examples we'll omit Kind if it is ClosedOpen.
The following range includes all rows in a table or under an index:
spanner.AllKeys()
This range returns all users whose UserName begins with any character from A to C:
spanner.KeyRange{
	Start: spanner.Key{"A"},
	End:   spanner.Key{"D"},
}
This range returns all users whose UserName begins with B:
spanner.KeyRange{
	Start: spanner.Key{"B"},
	End:   spanner.Key{"C"},
}
Key ranges honor column sort order. For example, suppose a table is defined as follows:
CREATE TABLE DescendingSortedTable (
	Key INT64,
	...
) PRIMARY KEY(Key DESC);
The following range retrieves all rows with key values between 1 and 100 inclusive:
spanner.KeyRange{
	Start: spanner.Key{100},
	End:   spanner.Key{1},
	Kind:  ClosedClosed,
}
Note that 100 is passed as the start, and 1 is passed as the end, because Key is a descending column in the schema.
type KeyRange struct {
	// Start specifies the left boundary of the key range; End specifies
	// the right boundary of the key range.
	Start, End Key

	// Kind describes whether the boundaries of the key range include
	// their keys.
	Kind KeyRangeKind
}
func (KeyRange) String ¶
func (r KeyRange) String() string
String implements fmt.Stringer for KeyRange type.
type KeyRangeKind ¶
KeyRangeKind describes the kind of interval represented by a KeyRange: whether it is open or closed on the left and right.
type KeyRangeKind int
const (
	// ClosedOpen is closed on the left and open on the right: the Start
	// key is included, the End key is excluded.
	ClosedOpen KeyRangeKind = iota

	// ClosedClosed is closed on the left and the right: both keys are included.
	ClosedClosed

	// OpenClosed is open on the left and closed on the right: the Start
	// key is excluded, the End key is included.
	OpenClosed

	// OpenOpen is open on the left and the right: neither key is included.
	OpenOpen
)
type KeySet ¶
A KeySet defines a collection of Cloud Spanner keys and/or key ranges. All the keys are expected to be in the same table or index. The keys need not be sorted in any particular way.
An individual Key can act as a KeySet, as can a KeyRange. Use the KeySets function to create a KeySet consisting of multiple Keys and KeyRanges. To obtain an empty KeySet, call KeySets with no arguments.
If the same key is specified multiple times in the set (for example if two ranges, two keys, or a key and a range overlap), the Cloud Spanner backend behaves as if the key were only specified once.
type KeySet interface {
// contains filtered or unexported methods
}
func AllKeys ¶
func AllKeys() KeySet
AllKeys returns a KeySet that represents all Keys of a table or an index.
func KeySets ¶
func KeySets(keySets ...KeySet) KeySet
KeySets returns the union of the KeySets. If any of the KeySets is AllKeys, then the resulting KeySet will be equivalent to AllKeys.
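A sketch of building a union of keys and ranges, assuming an existing client and ctx; the Users table and key values are illustrative:

```go
// Union of a single key, a prefix range, and an explicit range.
ks := spanner.KeySets(
	spanner.Key{"alice"},
	spanner.Key{"bob"}.AsPrefix(),
	spanner.KeyRange{Start: spanner.Key{"c"}, End: spanner.Key{"f"}}, // ClosedOpen by default
)
iter := client.Single().Read(ctx, "Users", ks, []string{"UserName"})
defer iter.Stop()
```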
type Mutation ¶
A Mutation describes a modification to one or more Cloud Spanner rows. The mutation represents an insert, update, delete, etc., on a table.

Many mutations can be applied in a single atomic commit. For purposes of constraint checking (such as foreign key constraints), the operations can be viewed as applying in the same order as the mutations are supplied in (so that, e.g., a row and its logical "child" can be inserted in the same commit).
- The Apply function applies a series of mutations.
- A ReadWriteTransaction applies a series of mutations as part of an atomic read-modify-write operation.
Example:
m := spanner.Insert("User",
	[]string{"user_id", "profile"},
	[]interface{}{UserID, profile})
_, err := client.Apply(ctx, []*spanner.Mutation{m})
In this example, we insert a new row into the User table. The primary key for the new row is UserID (presuming that "user_id" has been declared as the primary key of the "User" table).
Updating a row
Changing the values of columns in an existing row is very similar to inserting a new row:
m := spanner.Update("User",
	[]string{"user_id", "profile"},
	[]interface{}{UserID, profile})
_, err := client.Apply(ctx, []*spanner.Mutation{m})
Deleting a row
To delete a row, use spanner.Delete:
m := spanner.Delete("User", spanner.Key{UserID})
_, err := client.Apply(ctx, []*spanner.Mutation{m})
spanner.Delete accepts a KeySet, so you can also pass in a KeyRange, or use the spanner.KeySets function to build any combination of Keys and KeyRanges.
Note that deleting a row in a table may also delete rows from other tables if cascading deletes are specified in those tables' schemas. Delete does nothing if the named row does not exist (does not yield an error).
Deleting a field
To delete/clear a field within a row, use spanner.Update with the value nil:
m := spanner.Update("User",
	[]string{"user_id", "profile"},
	[]interface{}{UserID, nil})
_, err := client.Apply(ctx, []*spanner.Mutation{m})
The valid Go types and their corresponding Cloud Spanner types that can be used in the Insert/Update/InsertOrUpdate functions are:
string, NullString          - STRING
[]string, []NullString      - STRING ARRAY
[]byte                      - BYTES
[][]byte                    - BYTES ARRAY
int, int64, NullInt64       - INT64
[]int, []int64, []NullInt64 - INT64 ARRAY
bool, NullBool              - BOOL
[]bool, []NullBool          - BOOL ARRAY
float64, NullFloat64        - FLOAT64
[]float64, []NullFloat64    - FLOAT64 ARRAY
time.Time, NullTime         - TIMESTAMP
[]time.Time, []NullTime     - TIMESTAMP ARRAY
Date, NullDate              - DATE
[]Date, []NullDate          - DATE ARRAY
To compare two Mutations for testing purposes, use reflect.DeepEqual.
type Mutation struct {
// contains filtered or unexported fields
}
func Delete ¶
func Delete(table string, ks KeySet) *Mutation
Delete removes the rows described by the KeySet from the table. It succeeds whether or not the keys were present.
func Insert ¶
func Insert(table string, cols []string, vals []interface{}) *Mutation
Insert returns a Mutation to insert a row into a table. If the row already exists, the write or transaction fails.
func InsertMap ¶
func InsertMap(table string, in map[string]interface{}) *Mutation
InsertMap returns a Mutation to insert a row into a table, specified by a map of column name to value. If the row already exists, the write or transaction fails.
func InsertOrUpdate ¶
func InsertOrUpdate(table string, cols []string, vals []interface{}) *Mutation
InsertOrUpdate returns a Mutation to insert a row into a table. If the row already exists, it updates it instead. Any column values not explicitly written are preserved.
For a similar example, see Update.
func InsertOrUpdateMap ¶
func InsertOrUpdateMap(table string, in map[string]interface{}) *Mutation
InsertOrUpdateMap returns a Mutation to insert a row into a table, specified by a map of column to value. If the row already exists, it updates it instead. Any column values not explicitly written are preserved.
For a similar example, see UpdateMap.
func InsertOrUpdateStruct ¶
func InsertOrUpdateStruct(table string, in interface{}) (*Mutation, error)
InsertOrUpdateStruct returns a Mutation to insert a row into a table, specified by a Go struct. If the row already exists, it updates it instead. Any column values not explicitly written are preserved.
The in argument must be a struct or a pointer to a struct. Its exported fields specify the column names and values. Use a field tag like spanner:"name" to provide an alternative column name, or use spanner:"-" to ignore the field.
For a similar example, see UpdateStruct.
func InsertStruct ¶
func InsertStruct(table string, in interface{}) (*Mutation, error)
InsertStruct returns a Mutation to insert a row into a table, specified by a Go struct. If the row already exists, the write or transaction fails.
The in argument must be a struct or a pointer to a struct. Its exported fields specify the column names and values. Use a field tag like spanner:"name" to provide an alternative column name, or use spanner:"-" to ignore the field.
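A sketch using struct tags, assuming an existing client and ctx; the User type and table are illustrative:

```go
type User struct {
	UserID  string `spanner:"user_id"`
	Profile string `spanner:"profile"`
	Scratch string `spanner:"-"` // ignored: never written to the table
}

m, err := spanner.InsertStruct("User", User{UserID: "u1", Profile: "p"})
if err != nil {
	// TODO: handle error.
}
_, err = client.Apply(ctx, []*spanner.Mutation{m})
```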
func Replace ¶
func Replace(table string, cols []string, vals []interface{}) *Mutation
Replace returns a Mutation to insert a row into a table, deleting any existing row. Unlike InsertOrUpdate, this means any values not explicitly written become NULL.
For a similar example, see Update.
func ReplaceMap ¶
func ReplaceMap(table string, in map[string]interface{}) *Mutation
ReplaceMap returns a Mutation to insert a row into a table, deleting any existing row. Unlike InsertOrUpdateMap, this means any values not explicitly written become NULL. The row is specified by a map of column to value.
For a similar example, see UpdateMap.
func ReplaceStruct ¶
func ReplaceStruct(table string, in interface{}) (*Mutation, error)
ReplaceStruct returns a Mutation to insert a row into a table, deleting any existing row. Unlike InsertOrUpdateStruct, this means any values not explicitly written become NULL. The row is specified by a Go struct.

The in argument must be a struct or a pointer to a struct. Its exported fields specify the column names and values. Use a field tag like spanner:"name" to provide an alternative column name, or use spanner:"-" to ignore the field.

For a similar example, see UpdateStruct.
func Update ¶
func Update(table string, cols []string, vals []interface{}) *Mutation
Update returns a Mutation to update a row in a table. If the row does not already exist, the write or transaction fails.
func UpdateMap ¶
func UpdateMap(table string, in map[string]interface{}) *Mutation
UpdateMap returns a Mutation to update a row in a table, specified by a map of column to value. If the row does not already exist, the write or transaction fails.
func UpdateStruct ¶
func UpdateStruct(table string, in interface{}) (*Mutation, error)
UpdateStruct returns a Mutation to update a row in a table, specified by a Go struct. If the row does not already exist, the write or transaction fails.
type NullBool ¶
NullBool represents a Cloud Spanner BOOL that may be NULL.
type NullBool struct {
Bool bool
Valid bool // Valid is true if Bool is not NULL.
}
func (NullBool) String ¶
func (n NullBool) String() string
String implements fmt.Stringer for NullBool.
type NullDate ¶
NullDate represents a Cloud Spanner DATE that may be null.
type NullDate struct {
Date civil.Date
Valid bool // Valid is true if Date is not NULL.
}
func (NullDate) String ¶
func (n NullDate) String() string
String implements fmt.Stringer for NullDate.
type NullFloat64 ¶
NullFloat64 represents a Cloud Spanner FLOAT64 that may be NULL.
type NullFloat64 struct {
Float64 float64
Valid bool // Valid is true if Float64 is not NULL.
}
func (NullFloat64) String ¶
func (n NullFloat64) String() string
String implements fmt.Stringer for NullFloat64.
type NullInt64 ¶
NullInt64 represents a Cloud Spanner INT64 that may be NULL.
type NullInt64 struct {
Int64 int64
Valid bool // Valid is true if Int64 is not NULL.
}
func (NullInt64) String ¶
func (n NullInt64) String() string
String implements fmt.Stringer for NullInt64.
type NullRow ¶
NullRow represents a Cloud Spanner STRUCT that may be NULL. See also the documentation for Row. Note that NullRow is not a valid Cloud Spanner column type.
type NullRow struct {
Row Row
Valid bool // Valid is true if Row is not NULL.
}
type NullString ¶
NullString represents a Cloud Spanner STRING that may be NULL.
type NullString struct {
StringVal string
Valid bool // Valid is true if StringVal is not NULL.
}
func (NullString) String ¶
func (n NullString) String() string
String implements fmt.Stringer for NullString.
type NullTime ¶
NullTime represents a Cloud Spanner TIMESTAMP that may be null.
type NullTime struct {
Time time.Time
Valid bool // Valid is true if Time is not NULL.
}
func (NullTime) String ¶
func (n NullTime) String() string
String implements fmt.Stringer for NullTime.
type ReadOnlyTransaction ¶
ReadOnlyTransaction provides a snapshot transaction with guaranteed consistency across reads, but does not allow writes. Read-only transactions can be configured to read at timestamps in the past.
Read-only transactions do not take locks. Instead, they work by choosing a Cloud Spanner timestamp, then executing all reads at that timestamp. Since they do not acquire locks, they do not block concurrent read-write transactions.
Unlike locking read-write transactions, read-only transactions never abort. They can fail if the chosen read timestamp is garbage collected; however, the default garbage collection policy is generous enough that most applications do not need to worry about this in practice. See the documentation of TimestampBound for more details.
A ReadOnlyTransaction consumes resources on the server until Close() is called.
type ReadOnlyTransaction struct {
// contains filtered or unexported fields
}
func (*ReadOnlyTransaction) Close ¶
func (t *ReadOnlyTransaction) Close()
Close closes a ReadOnlyTransaction. The transaction cannot perform any reads after it has been closed.
func (*ReadOnlyTransaction) Query ¶
func (t *ReadOnlyTransaction) Query(ctx context.Context, statement Statement) *RowIterator
Query executes a query against the database. It returns a RowIterator for retrieving the resulting rows.
func (*ReadOnlyTransaction) Read ¶
func (t *ReadOnlyTransaction) Read(ctx context.Context, table string, keys KeySet, columns []string) *RowIterator
Read returns a RowIterator for reading multiple rows from the database.
func (*ReadOnlyTransaction) ReadRow ¶
func (t *ReadOnlyTransaction) ReadRow(ctx context.Context, table string, key Key, columns []string) (*Row, error)
ReadRow reads a single row from the database.
If no row is present with the given key, then ReadRow returns an error where spanner.ErrCode(err) is codes.NotFound.
func (*ReadOnlyTransaction) ReadUsingIndex ¶
func (t *ReadOnlyTransaction) ReadUsingIndex(ctx context.Context, table, index string, keys KeySet, columns []string) *RowIterator
ReadUsingIndex returns a RowIterator for reading multiple rows from the database using an index.
Currently, this function can only read columns that are part of the index key, part of the primary key, or stored in the index due to a STORING clause in the index definition.
func (*ReadOnlyTransaction) Timestamp ¶
func (t *ReadOnlyTransaction) Timestamp() (time.Time, error)
Timestamp returns the timestamp chosen to perform reads and queries in this transaction. The value can only be read after some read or query has either returned some data or completed without returning any data.
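A sketch, assuming an existing client and ctx; the Users table is illustrative:

```go
ro := client.ReadOnlyTransaction()
defer ro.Close()
if _, err := ro.ReadRow(ctx, "Users", spanner.Key{"alice"}, []string{"email"}); err != nil {
	// TODO: handle error.
}
// Timestamp only succeeds once a read or query has executed.
ts, err := ro.Timestamp()
if err != nil {
	// TODO: handle error.
}
fmt.Println("data read at:", ts)
```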
func (*ReadOnlyTransaction) WithTimestampBound ¶
func (t *ReadOnlyTransaction) WithTimestampBound(tb TimestampBound) *ReadOnlyTransaction
WithTimestampBound specifies the TimestampBound to use for reads and queries in this transaction. It can only be called before the first read or query is invoked. Note: bounded staleness is not available with general ReadOnlyTransactions; use a single-use ReadOnlyTransaction instead.
The returned value is the ReadOnlyTransaction so calls can be chained.
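A sketch of a non-strong bound, assuming an existing client, ctx, and an import of the time package; ExactStaleness is one of the package's TimestampBound constructors:

```go
// Read at a timestamp exactly one minute in the past. The bound must be
// set before the first read or query on the transaction (bounded
// staleness is not available on a multi-use ReadOnlyTransaction).
ro := client.ReadOnlyTransaction().WithTimestampBound(spanner.ExactStaleness(time.Minute))
defer ro.Close()
row, err := ro.ReadRow(ctx, "Users", spanner.Key{"alice"}, []string{"email"})
```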
type ReadWriteTransaction ¶
ReadWriteTransaction provides a locking read-write transaction.
This type of transaction is the only way to write data into Cloud Spanner; (*Client).Apply uses transactions internally. These transactions rely on pessimistic locking and, if necessary, two-phase commit. Locking read-write transactions may abort, requiring the application to retry. However, the interface exposed by (*Client).ReadWriteTransaction eliminates the need for applications to write retry loops explicitly.
Locking transactions may be used to atomically read-modify-write data anywhere in a database. This type of transaction is externally consistent.
Clients should attempt to minimize the amount of time a transaction is active. Faster transactions commit with higher probability and cause less contention. Cloud Spanner attempts to keep read locks active as long as the transaction continues to do reads. Long periods of inactivity at the client may cause Cloud Spanner to release a transaction's locks and abort it.
Reads performed within a transaction acquire locks on the data being read. Writes can only be done at commit time, after all reads have been completed. Conceptually, a read-write transaction consists of zero or more reads or SQL queries followed by a commit.
See (*Client).ReadWriteTransaction for an example.
Semantics
Cloud Spanner can commit the transaction if all read locks it acquired are still valid at commit time, and it is able to acquire write locks for all writes. Cloud Spanner can abort the transaction for any reason. If a commit attempt returns ABORTED, Cloud Spanner guarantees that the transaction has not modified any user data in Cloud Spanner.
Unless the transaction commits, Cloud Spanner makes no guarantees about how long the transaction's locks were held for. It is an error to use Cloud Spanner locks for any sort of mutual exclusion other than between Cloud Spanner transactions themselves.
Aborted transactions
Application code does not need to retry explicitly; (*Client).ReadWriteTransaction will automatically retry a transaction if an attempt results in an abort. The lock priority of a transaction increases after each prior aborted transaction, meaning that the next attempt has a slightly better chance of success than before.
Under some circumstances (e.g., many transactions attempting to modify the same row(s)), a transaction can abort many times in a short period before successfully committing. Thus, it is not a good idea to cap the number of retries a transaction can attempt; instead, it is better to limit the total amount of wall time spent retrying.
Idle transactions
A transaction is considered idle if it has no outstanding reads or SQL queries and has not started a read or SQL query within the last 10 seconds. Idle transactions can be aborted by Cloud Spanner so that they don't hold on to locks indefinitely. In that case, the commit will fail with error ABORTED.
If this behavior is undesirable, periodically executing a simple SQL query in the transaction (e.g., SELECT 1) prevents the transaction from becoming idle.
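Inside the function passed to ReadWriteTransaction, such a keep-alive might look like the following sketch, where txn is the *ReadWriteTransaction argument:

```go
// Issue a trivial query so the transaction is not considered idle.
iter := txn.Query(ctx, spanner.Statement{SQL: "SELECT 1"})
if err := iter.Do(func(*spanner.Row) error { return nil }); err != nil {
	// TODO: handle error.
}
```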
type ReadWriteTransaction struct {
// contains filtered or unexported fields
}
func (*ReadWriteTransaction) BufferWrite ¶
func (t *ReadWriteTransaction) BufferWrite(ms []*Mutation) error
BufferWrite adds a list of mutations to the set of updates that will be applied when the transaction is committed. It does not actually apply the write until the transaction is committed, so the operation does not block. The effects of the write won't be visible to any reads (including reads done in the same transaction) until the transaction commits.
See the example for Client.ReadWriteTransaction.
func (*ReadWriteTransaction) Query ¶
func (t *ReadWriteTransaction) Query(ctx context.Context, statement Statement) *RowIterator
Query executes a query against the database. It returns a RowIterator for retrieving the resulting rows.
func (*ReadWriteTransaction) Read ¶
func (t *ReadWriteTransaction) Read(ctx context.Context, table string, keys KeySet, columns []string) *RowIterator
Read returns a RowIterator for reading multiple rows from the database.
func (*ReadWriteTransaction) ReadRow ¶
func (t *ReadWriteTransaction) ReadRow(ctx context.Context, table string, key Key, columns []string) (*Row, error)
ReadRow reads a single row from the database.
If no row is present with the given key, then ReadRow returns an error where spanner.ErrCode(err) is codes.NotFound.
func (*ReadWriteTransaction) ReadUsingIndex ¶
func (t *ReadWriteTransaction) ReadUsingIndex(ctx context.Context, table, index string, keys KeySet, columns []string) *RowIterator
ReadUsingIndex returns a RowIterator for reading multiple rows from the database using an index.
Currently, this function can only read columns that are part of the index key, part of the primary key, or stored in the index due to a STORING clause in the index definition.
type Row ¶
A Row is a view of a row of data produced by a Cloud Spanner read.
A row consists of a number of columns; the number depends on the columns used to construct the read.
The column values can be accessed by index, where the indices correspond to the order of the columns used to construct the read. For instance, if the read specified []string{"photo_id", "caption", "metadata"}, then each row will contain three columns: column 0 corresponds to "photo_id", column 1 to "caption", and column 2 to "metadata".
Column values are decoded by using one of the Column, ColumnByName, or Columns methods. The valid values passed to these methods depend on the column type. For example:
var photoID int64
err := row.Column(0, &photoID) // Decode column 0 as an integer.

var caption string
err = row.Column(1, &caption) // Decode column 1 as a string.

// The above two operations at once.
err = row.Columns(&photoID, &caption)
Supported types and their corresponding Cloud Spanner column type(s) are:
*string (not NULL), *NullString   - STRING
*[]NullString                     - STRING ARRAY
*[]byte                           - BYTES
*[][]byte                         - BYTES ARRAY
*int64 (not NULL), *NullInt64     - INT64
*[]NullInt64                      - INT64 ARRAY
*bool (not NULL), *NullBool       - BOOL
*[]NullBool                       - BOOL ARRAY
*float64 (not NULL), *NullFloat64 - FLOAT64
*[]NullFloat64                    - FLOAT64 ARRAY
*time.Time (not NULL), *NullTime  - TIMESTAMP
*[]NullTime                       - TIMESTAMP ARRAY
*Date (not NULL), *NullDate       - DATE
*[]NullDate                       - DATE ARRAY
*[]*some_go_struct, *[]NullRow    - STRUCT ARRAY
*GenericColumnValue               - any Cloud Spanner type
For TIMESTAMP columns, the returned time.Time value will be in UTC.

To fetch an array of BYTES, pass a *[][]byte. To fetch an array of (sub)rows, pass a *[]spanner.NullRow or a *[]*some_go_struct where some_go_struct holds all the information of the subrow; see spanner.Row.ToStruct for the mapping between a Cloud Spanner row and a Go struct. To fetch an array of other types, pass a *[]spanner.Null* type of the appropriate type. Use *GenericColumnValue when you don't know in advance what column type to expect.
Row decodes the row contents lazily; as a result, each call to a getter has a chance of returning an error.
A column value may be NULL if the corresponding value is not present in Cloud Spanner. The spanner.Null* types (spanner.NullInt64 et al.) allow fetching values that may be null. A NULL BYTES can be fetched into a *[]byte as nil. It is an error to fetch a NULL value into any other type.
type Row struct {
// contains filtered or unexported fields
}
func NewRow ¶
func NewRow(columnNames []string, columnValues []interface{}) (*Row, error)
NewRow returns a Row containing the supplied data. This can be useful for mocking Cloud Spanner Read and Query responses for unit testing.
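A minimal sketch of using NewRow in a test, with illustrative column names and values:

```go
// Build a Row in memory, without contacting Cloud Spanner, then decode
// it the same way production code would.
row, err := spanner.NewRow(
	[]string{"PhotoID", "Caption"},
	[]interface{}{int64(1), "sunset"},
)
if err != nil {
	// TODO: handle error.
}
var id int64
var caption string
if err := row.Columns(&id, &caption); err != nil {
	// TODO: handle error.
}
```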
func (*Row) Column ¶
func (r *Row) Column(i int, ptr interface{}) error
Column fetches the value from the ith column, decoding it into ptr. See the Row documentation for the list of acceptable argument types. See Client.ReadWriteTransaction for an example.
func (*Row) ColumnByName ¶
func (r *Row) ColumnByName(name string, ptr interface{}) error
ColumnByName fetches the value from the named column, decoding it into ptr. See the Row documentation for the list of acceptable argument types.
▹ Example
func (*Row) ColumnIndex ¶
func (r *Row) ColumnIndex(name string) (int, error)
ColumnIndex returns the index of the column with the given name. The comparison is case-sensitive.
▹ Example
func (*Row) ColumnName ¶
func (r *Row) ColumnName(i int) string
ColumnName returns the name of column i, or the empty string if the column index is invalid.
▹ Example
func (*Row) ColumnNames ¶
func (r *Row) ColumnNames() []string
ColumnNames returns all column names of the row.
▹ Example
func (*Row) Columns ¶
func (r *Row) Columns(ptrs ...interface{}) error
Columns fetches all the columns in the row at once.
The value of the kth column will be decoded into the kth argument to Columns. See above for the list of acceptable argument types. The number of arguments must be equal to the number of columns. Pass nil to specify that a column should be ignored.
▹ Example
func (*Row) Size ¶
func (r *Row) Size() int
Size is the number of columns in the row.
▹ Example
func (*Row) ToStruct ¶
func (r *Row) ToStruct(p interface{}) error
ToStruct fetches the columns in a row into the fields of a struct. The rules for mapping a row's columns into a struct's exported fields are as follows:
1. If a field has a `spanner: "column_name"` tag, then decode column
   'column_name' into the field. A special case is the `spanner: "-"` tag,
   which instructs ToStruct to ignore the field during decoding.

2. Otherwise, if the name of a field matches the name of a column (ignoring
   case), decode the column into the field.
The fields of the destination struct can be of any type that is acceptable to (*spanner.Row).Column.
Slice and pointer fields will be set to nil if the source column is NULL, and a non-nil value if the column is not NULL. To decode NULL values of other types, use one of the spanner.Null* as the type of the destination field.
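A sketch of the mapping rules above, using hypothetical column and field names:

```go
// Photo demonstrates both mapping rules: PhotoID and Caption are matched
// via explicit tags; Note is ignored by ToStruct because of the "-" tag.
type Photo struct {
	PhotoID int64  `spanner:"PhotoID"`
	Caption string `spanner:"Caption"`
	Note    string `spanner:"-"` // not decoded
}

// row is a *spanner.Row obtained from a Read or Query call.
var p Photo
if err := row.ToStruct(&p); err != nil {
	// TODO: handle error.
}
```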
▹ Example
type RowIterator ¶
RowIterator is an iterator over Rows.
type RowIterator struct {
// contains filtered or unexported fields
}
func (*RowIterator) Do ¶
func (r *RowIterator) Do(f func(r *Row) error) error
Do calls the provided function once in sequence for each row in the iteration. If the function returns a non-nil error, Do immediately returns that error.
If there are no rows in the iterator, Do will return nil without calling the provided function.
Do always calls Stop on the iterator.
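A sketch of iterating with Do, assuming client is an existing *spanner.Client and a hypothetical Photos table:

```go
// Do handles calling Stop for us; returning a non-nil error from the
// callback ends the iteration early and is returned by Do.
iter := client.Single().Read(ctx, "Photos", spanner.AllKeys(),
	[]string{"PhotoID", "Caption"})
err := iter.Do(func(r *spanner.Row) error {
	var id int64
	var caption string
	if err := r.Columns(&id, &caption); err != nil {
		return err
	}
	fmt.Println(id, caption)
	return nil
})
```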
▹ Example
func (*RowIterator) Next ¶
func (r *RowIterator) Next() (*Row, error)
Next returns the next result. Its second return value is iterator.Done if there are no more results. Once Next returns Done, all subsequent calls will return Done.
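The standard Next loop can be sketched as follows; iterator here is the google.golang.org/api/iterator package, and the query is illustrative:

```go
iter := client.Single().Query(ctx, spanner.NewStatement("SELECT 1"))
defer iter.Stop() // always release the iterator's resources
for {
	row, err := iter.Next()
	if err == iterator.Done {
		break // no more results
	}
	if err != nil {
		// TODO: handle error.
	}
	_ = row // decode the row with Column, Columns, or ToStruct
}
```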
▹ Example
func (*RowIterator) Stop ¶
func (r *RowIterator) Stop()
Stop terminates the iteration. It should be called after every iteration.
type SessionPoolConfig ¶
SessionPoolConfig stores configurations of a session pool.
type SessionPoolConfig struct {
    // MaxOpened is the maximum number of opened sessions allowed by the
    // session pool. Defaults to NumChannels * 100.
    MaxOpened uint64

    // MinOpened is the minimum number of opened sessions that the session
    // pool tries to maintain. The session pool won't continue to expire
    // sessions if the number of opened sessions drops below MinOpened.
    // However, if a session is found to be broken, it will still be evicted
    // from the session pool, so it is possible for the number of opened
    // sessions to drop below MinOpened.
    MinOpened uint64

    // MaxSessionAge is the maximum duration that a session can be reused;
    // zero means the session pool will never expire sessions.
    MaxSessionAge time.Duration

    // MaxBurst is the maximum number of concurrent session creation
    // requests. Defaults to 10.
    MaxBurst uint64

    // WriteSessions is the fraction of sessions we try to keep prepared
    // for write.
    WriteSessions float64

    // HealthCheckWorkers is the number of workers used by the health
    // checker for this pool.
    HealthCheckWorkers int

    // HealthCheckInterval is how often the health checker pings a session.
    HealthCheckInterval time.Duration
    // contains filtered or unexported fields
}
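A sketch of supplying a session pool configuration through NewClientWithConfig; the database name and the specific values are illustrative, not recommendations:

```go
client, err := spanner.NewClientWithConfig(ctx,
	"projects/PROJECT_ID/instances/INSTANCE_ID/databases/DATABASE_ID",
	spanner.ClientConfig{
		SessionPoolConfig: spanner.SessionPoolConfig{
			MinOpened:     10,  // keep at least 10 sessions warm
			MaxOpened:     100, // cap the pool size
			WriteSessions: 0.2, // keep ~20% of sessions prepared for writes
		},
	})
if err != nil {
	// TODO: handle error.
}
defer client.Close()
```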
type Statement ¶
A Statement is a SQL query with named parameters.
A parameter placeholder consists of '@' followed by the parameter name. Parameter names consist of any combination of letters, numbers, and underscores. Names may be entirely numeric (e.g., "WHERE m.id = @5"). Parameters may appear anywhere that a literal value is expected. The same parameter name may be used more than once. It is an error to execute a statement with unbound parameters. On the other hand, it is allowable to bind parameter names that are not used.
See the documentation of the Row type for how Go types are mapped to Cloud Spanner types.
type Statement struct {
    SQL    string
    Params map[string]interface{}
}
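A sketch of binding a named parameter, using a hypothetical Photos table:

```go
// The @min placeholder in the SQL is bound via the Params map; the
// parameter name in the map omits the leading '@'.
stmt := spanner.Statement{
	SQL:    "SELECT PhotoID, Caption FROM Photos WHERE PhotoID > @min",
	Params: map[string]interface{}{"min": int64(10)},
}
iter := client.Single().Query(ctx, stmt)
defer iter.Stop()
```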
func NewStatement ¶
func NewStatement(sql string) Statement
NewStatement returns a Statement with the given SQL and an empty Params map.
▹ Example
▹ Example (StructLiteral)
type TimestampBound ¶
TimestampBound defines how Cloud Spanner will choose a timestamp for a single read/query or read-only transaction.
The types of timestamp bound are:
- Strong (the default).
- Bounded staleness.
- Exact staleness.
If the Cloud Spanner database to be read is geographically distributed, stale read-only transactions can execute more quickly than strong or read-write transactions, because they are able to execute far from the leader replica.
Each type of timestamp bound is discussed in detail below. A TimestampBound can be specified when creating transactions, see the documentation of spanner.Client for an example.
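As a sketch, a TimestampBound can be attached to a read-only transaction like this; client is an existing *spanner.Client and the staleness value is illustrative:

```go
// Execute all reads in this transaction at a timestamp exactly 10
// seconds in the past.
txn := client.ReadOnlyTransaction().
	WithTimestampBound(spanner.ExactStaleness(10 * time.Second))
defer txn.Close()

iter := txn.Query(ctx, spanner.NewStatement("SELECT 1"))
defer iter.Stop()
```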
Strong reads
Strong reads are guaranteed to see the effects of all transactions that have committed before the start of the read. Furthermore, all rows yielded by a single read are consistent with each other - if any part of the read observes a transaction, all parts of the read see the transaction.
Strong reads are not repeatable: two consecutive strong read-only transactions might return inconsistent results if there are concurrent writes. If consistency across reads is required, the reads should be executed within a transaction or at an exact read timestamp.
Use StrongRead() to create a bound of this type.
Exact staleness
These timestamp bounds execute reads at a user-specified timestamp. Reads at a timestamp are guaranteed to see a consistent prefix of the global transaction history: they observe modifications done by all transactions with a commit timestamp less than or equal to the read timestamp, and observe none of the modifications done by transactions with a larger commit timestamp. They will block until all conflicting transactions that may be assigned commit timestamps less than or equal to the read timestamp have finished.
The timestamp can either be expressed as an absolute Cloud Spanner commit timestamp or a staleness relative to the current time.
These modes do not require a "negotiation phase" to pick a timestamp. As a result, they execute slightly faster than the equivalent boundedly stale concurrency modes. On the other hand, boundedly stale reads usually return fresher results.
Use ReadTimestamp() and ExactStaleness() to create a bound of this type.
Bounded staleness
Bounded staleness modes allow Cloud Spanner to pick the read timestamp, subject to a user-provided staleness bound. Cloud Spanner chooses the newest timestamp within the staleness bound that allows execution of the reads at the closest available replica without blocking.
All rows yielded are consistent with each other -- if any part of the read observes a transaction, all parts of the read see the transaction. Boundedly stale reads are not repeatable: two stale reads, even if they use the same staleness bound, can execute at different timestamps and thus return inconsistent results.
Boundedly stale reads execute in two phases: the first phase negotiates a timestamp among all replicas needed to serve the read. In the second phase, reads are executed at the negotiated timestamp.
As a result of the two phase execution, bounded staleness reads are usually a little slower than comparable exact staleness reads. However, they are typically able to return fresher results, and are more likely to execute at the closest replica.
Because the timestamp negotiation requires up-front knowledge of which rows will be read, it can only be used with single-use reads and single-use read-only transactions.
Use MinReadTimestamp() and MaxStaleness() to create a bound of this type.
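A sketch of a bounded-staleness read; note the single-use form, which is required for this bound type. The table, key, and staleness value are hypothetical:

```go
// Let Cloud Spanner pick the freshest timestamp that is at most 15
// seconds stale and allows a non-blocking read at a nearby replica.
row, err := client.Single().
	WithTimestampBound(spanner.MaxStaleness(15 * time.Second)).
	ReadRow(ctx, "Photos", spanner.Key{1}, []string{"Caption"})
if err != nil {
	// TODO: handle error.
}
```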
Old read timestamps and garbage collection
Cloud Spanner continuously garbage collects deleted and overwritten data in the background to reclaim storage space. This process is known as "version GC". By default, version GC reclaims versions after they are four hours old. Because of this, Cloud Spanner cannot perform reads at read timestamps more than four hours in the past. This restriction also applies to in-progress reads and/or SQL queries whose timestamps become too old while executing. Reads and SQL queries with too-old read timestamps fail with the error ErrorCode.FAILED_PRECONDITION.
type TimestampBound struct {
// contains filtered or unexported fields
}
func ExactStaleness ¶
func ExactStaleness(d time.Duration) TimestampBound
ExactStaleness returns a TimestampBound that will perform reads and queries at an exact staleness.
func MaxStaleness ¶
func MaxStaleness(d time.Duration) TimestampBound
MaxStaleness returns a TimestampBound that will perform reads and queries at a time chosen to be at most "d" stale.
func MinReadTimestamp ¶
func MinReadTimestamp(t time.Time) TimestampBound
MinReadTimestamp returns a TimestampBound that will perform reads and queries at a time chosen to be at least "t".
func ReadTimestamp ¶
func ReadTimestamp(t time.Time) TimestampBound
ReadTimestamp returns a TimestampBound that will perform reads and queries at the given time.
func StrongRead ¶
func StrongRead() TimestampBound
StrongRead returns a TimestampBound that will perform reads and queries at a timestamp where all previously committed transactions are visible.
func (TimestampBound) String ¶
func (tb TimestampBound) String() string
String implements fmt.Stringer.