Namespace Amazon.DynamoDBv2.Model
Classes
- ArchivalSummary
Contains details of a table archival operation.
- AttributeDefinition
Represents an attribute for describing the key schema for the table and indexes.
- AttributeValue
Represents the data for an attribute.
Each attribute value is described as a name-value pair. The name is the data type, and the value is the data itself.
For more information, see Data Types in the Amazon DynamoDB Developer Guide.
- AttributeValueUpdate
For the UpdateItem operation, represents the attributes to be modified, the action to perform on each, and the new value for each.
Note: You cannot use UpdateItem to update any primary key attributes. Instead, you will need to delete the item, and then use PutItem to create a new item with new attributes.
Attribute values cannot be null; string and binary type attributes must have lengths greater than zero; and set type attributes must not be empty. Requests with empty values will be rejected with a ValidationException exception.
- AutoScalingPolicyDescription
Represents the properties of the scaling policy.
- AutoScalingPolicyUpdate
Represents the auto scaling policy to be modified.
- AutoScalingSettingsDescription
Represents the auto scaling settings for a global table or global secondary index.
- AutoScalingSettingsUpdate
Represents the auto scaling settings to be modified for a global table or global secondary index.
- AutoScalingTargetTrackingScalingPolicyConfigurationDescription
Represents the properties of a target tracking scaling policy.
- AutoScalingTargetTrackingScalingPolicyConfigurationUpdate
Represents the settings of a target tracking scaling policy that will be modified.
- BackupDescription
Contains the description of the backup created for the table.
- BackupDetails
Contains the details of the backup created for the table.
- BackupInUseException
There is another ongoing conflicting backup control plane operation on the table. The backup is either being created, deleted or restored to a table.
- BackupNotFoundException
Backup not found for the given BackupARN.
- BackupSummary
Contains details for the backup.
- BatchExecuteStatementRequest
Container for the parameters to the BatchExecuteStatement operation. This operation allows you to perform batch reads or writes on data stored in DynamoDB, using PartiQL. Each read statement in a BatchExecuteStatement must specify an equality condition on all key attributes. This enforces that each SELECT statement in a batch returns at most a single item.
Note: The entire batch must consist of either read statements or write statements; you cannot mix both in one batch.
An HTTP 200 response does not mean that all statements in the BatchExecuteStatement succeeded. Error details for individual statements can be found under the Error field of the BatchStatementResponse for each statement.
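As a rough C# sketch of the batching behavior described above, using the AWS SDK for .NET (the Music table and its Artist/SongTitle key attributes are hypothetical):

```csharp
using System;
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

var client = new AmazonDynamoDBClient();

// Each SELECT must pin down the full primary key, so each statement
// returns at most one item. Table and key names are illustrative only.
var request = new BatchExecuteStatementRequest
{
    Statements = new List<BatchStatementRequest>
    {
        new BatchStatementRequest
        {
            Statement = "SELECT * FROM Music WHERE Artist = ? AND SongTitle = ?",
            Parameters = new List<AttributeValue>
            {
                new AttributeValue { S = "No One You Know" },
                new AttributeValue { S = "Call Me Today" }
            }
        }
    }
};

var response = await client.BatchExecuteStatementAsync(request);

// HTTP 200 does not imply per-statement success; check each Error field.
foreach (var statementResponse in response.Responses)
{
    if (statementResponse.Error != null)
        Console.WriteLine($"Statement failed: {statementResponse.Error.Code} - {statementResponse.Error.Message}");
    else if (statementResponse.Item != null && statementResponse.Item.Count > 0)
        Console.WriteLine($"Retrieved item with {statementResponse.Item.Count} attributes");
}
```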
- BatchExecuteStatementResponse
This is the response object from the BatchExecuteStatement operation.
- BatchGetItemRequest
Container for the parameters to the BatchGetItem operation. The BatchGetItem operation returns the attributes of one or more items from one or more tables. You identify requested items by primary key.
A single operation can retrieve up to 16 MB of data, which can contain as many as 100 items. BatchGetItem returns a partial result if the response size limit is exceeded, the table's provisioned throughput is exceeded, or an internal processing failure occurs. If a partial result is returned, the operation returns a value for UnprocessedKeys. You can use this value to retry the operation starting with the next item to get.
If you request more than 100 items, BatchGetItem returns a ValidationException with the message "Too many items requested for the BatchGetItem call."
For example, if you ask to retrieve 100 items, but each individual item is 300 KB in size, the system returns 52 items (so as not to exceed the 16 MB limit). It also returns an appropriate UnprocessedKeys value so you can get the next page of results. If desired, your application can include its own logic to assemble the pages of results into one dataset.
If none of the items can be processed due to insufficient provisioned throughput on all of the tables in the request, then BatchGetItem returns a ProvisionedThroughputExceededException. If at least one of the items is successfully processed, then BatchGetItem completes successfully, while returning the keys of the unread items in UnprocessedKeys.
If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. However, we strongly recommend that you use an exponential backoff algorithm. If you retry the batch operation immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed.
For more information, see Batch Operations and Error Handling in the Amazon DynamoDB Developer Guide.
By default, BatchGetItem performs eventually consistent reads on every table in the request. If you want strongly consistent reads instead, you can set ConsistentRead to true for any or all tables.
In order to minimize response latency, BatchGetItem retrieves items in parallel.
When designing your application, keep in mind that DynamoDB does not return items in any particular order. To help parse the response by item, include the primary key values for the items in your request in the ProjectionExpression parameter.
If a requested item does not exist, it is not returned in the result. Requests for nonexistent items consume the minimum read capacity units according to the type of read. For more information, see Working with Tables in the Amazon DynamoDB Developer Guide.
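A possible shape for the retry-with-backoff pattern in C# with the AWS SDK for .NET (the Forum table and Name key attribute are hypothetical; production code would also cap the retry count):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

var client = new AmazonDynamoDBClient();

// Illustrative table and key names.
var requestItems = new Dictionary<string, KeysAndAttributes>
{
    ["Forum"] = new KeysAndAttributes
    {
        Keys = new List<Dictionary<string, AttributeValue>>
        {
            new() { ["Name"] = new AttributeValue { S = "Amazon DynamoDB" } },
            new() { ["Name"] = new AttributeValue { S = "Amazon S3" } }
        },
        ProjectionExpression = "#n, Category",
        ExpressionAttributeNames = new Dictionary<string, string> { ["#n"] = "Name" }
    }
};

var items = new List<Dictionary<string, AttributeValue>>();
var delay = TimeSpan.FromMilliseconds(100);

// Keep calling BatchGetItem until UnprocessedKeys is empty, backing off between attempts.
while (requestItems.Count > 0)
{
    var response = await client.BatchGetItemAsync(new BatchGetItemRequest { RequestItems = requestItems });

    foreach (var tableItems in response.Responses.Values)
        items.AddRange(tableItems);

    requestItems = response.UnprocessedKeys;
    if (requestItems.Count > 0)
    {
        await Task.Delay(delay);                                       // exponential backoff
        delay = TimeSpan.FromMilliseconds(delay.TotalMilliseconds * 2);
    }
}

Console.WriteLine($"Retrieved {items.Count} items.");
```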
- BatchGetItemResponse
Represents the output of a BatchGetItem operation.
- BatchStatementError
An error associated with a statement in a PartiQL batch that was run.
- BatchStatementRequest
A PartiQL batch statement request.
- BatchStatementResponse
A PartiQL batch statement response.
- BatchWriteItemRequest
Container for the parameters to the BatchWriteItem operation. The BatchWriteItem operation puts or deletes multiple items in one or more tables. A single call to BatchWriteItem can transmit up to 16MB of data over the network, consisting of up to 25 item put or delete operations. While individual items can be up to 400 KB once stored, it's important to note that an item's representation might be greater than 400KB while being sent in DynamoDB's JSON format for the API call. For more details on this distinction, see Naming Rules and Data Types.
Note: BatchWriteItem cannot update items. To update items, use the UpdateItem action.
The individual PutItem and DeleteItem operations specified in BatchWriteItem are atomic; however, BatchWriteItem as a whole is not. If any requested operations fail because the table's provisioned throughput is exceeded or an internal processing failure occurs, the failed operations are returned in the UnprocessedItems response parameter. You can investigate and optionally resend the requests. Typically, you would call BatchWriteItem in a loop. Each iteration would check for unprocessed items and submit a new BatchWriteItem request with those unprocessed items until all items have been processed.
If none of the items can be processed due to insufficient provisioned throughput on all of the tables in the request, then BatchWriteItem returns a ProvisionedThroughputExceededException.
If DynamoDB returns any unprocessed items, you should retry the batch operation on those items. However, we strongly recommend that you use an exponential backoff algorithm. If you retry the batch operation immediately, the underlying read or write requests can still fail due to throttling on the individual tables. If you delay the batch operation using exponential backoff, the individual requests in the batch are much more likely to succeed.
For more information, see Batch Operations and Error Handling in the Amazon DynamoDB Developer Guide.
With BatchWriteItem, you can efficiently write or delete large amounts of data, such as from Amazon EMR, or copy data from another database into DynamoDB. In order to improve performance with these large-scale operations, BatchWriteItem does not behave in the same way as individual PutItem and DeleteItem calls would. For example, you cannot specify conditions on individual put and delete requests, and BatchWriteItem does not return deleted items in the response.
If you use a programming language that supports concurrency, you can use threads to write items in parallel. Your application must include the necessary logic to manage the threads. With languages that don't support threading, you must update or delete the specified items one at a time. In both situations, BatchWriteItem performs the specified put and delete operations in parallel, giving you the power of the thread pool approach without having to introduce complexity into your application.
Parallel processing reduces latency, but each specified put and delete request consumes the same number of write capacity units whether it is processed in parallel or not. Delete operations on nonexistent items consume one write capacity unit.
If one or more of the following is true, DynamoDB rejects the entire batch write operation:
One or more tables specified in the BatchWriteItem request does not exist.
Primary key attributes specified on an item in the request do not match those in the corresponding table's primary key schema.
You try to perform multiple operations on the same item in the same BatchWriteItem request. For example, you cannot put and delete the same item in the same BatchWriteItem request.
Your request contains at least two items with identical hash and range keys (which essentially is two put operations).
There are more than 25 requests in the batch.
Any individual item in a batch exceeds 400 KB.
The total request size exceeds 16 MB.
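The UnprocessedItems loop might look roughly like this in C# (AWS SDK for .NET; the Thread table and its attributes are hypothetical, and a real application would cap the retries):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

var client = new AmazonDynamoDBClient();

// Illustrative table, key, and attribute names.
var writes = new Dictionary<string, List<WriteRequest>>
{
    ["Thread"] = new List<WriteRequest>
    {
        new WriteRequest
        {
            PutRequest = new PutRequest
            {
                Item = new Dictionary<string, AttributeValue>
                {
                    ["ForumName"] = new AttributeValue { S = "Amazon DynamoDB" },
                    ["Subject"] = new AttributeValue { S = "Batch writes" }
                }
            }
        },
        new WriteRequest
        {
            DeleteRequest = new DeleteRequest
            {
                Key = new Dictionary<string, AttributeValue>
                {
                    ["ForumName"] = new AttributeValue { S = "Amazon S3" },
                    ["Subject"] = new AttributeValue { S = "Old thread" }
                }
            }
        }
    }
};

var delay = TimeSpan.FromMilliseconds(100);

// Loop until every put/delete has been accepted, resending only UnprocessedItems.
while (writes.Count > 0)
{
    var response = await client.BatchWriteItemAsync(new BatchWriteItemRequest { RequestItems = writes });
    writes = response.UnprocessedItems;

    if (writes.Count > 0)
    {
        await Task.Delay(delay);                                       // exponential backoff before retrying
        delay = TimeSpan.FromMilliseconds(delay.TotalMilliseconds * 2);
    }
}
```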
- BatchWriteItemResponse
Represents the output of a BatchWriteItem operation.
- BillingModeSummary
Contains the details for the read/write capacity mode.
- CancellationReason
An ordered list of errors for each item in the request which caused the transaction to get cancelled. The values of the list are ordered according to the ordering of the TransactWriteItems request parameter. If no error occurred for the associated item, an error with a Null code and Null message will be present.
- Capacity
Represents the amount of provisioned throughput capacity consumed on a table or an index.
- Condition
Represents the selection criteria for a Query or Scan operation:
For a Query operation, Condition is used for specifying the KeyConditions to use when querying a table or an index. For KeyConditions, only the following comparison operators are supported: EQ | LE | LT | GE | GT | BEGINS_WITH | BETWEEN
Condition is also used in a QueryFilter, which evaluates the query results and returns only the desired values.
For a Scan operation, Condition is used in a ScanFilter, which evaluates the scan results and returns only the desired values.
- ConditionCheck
Represents a request to perform a check that an item exists or to check the condition of specific attributes of the item.
- ConditionalCheckFailedException
A condition specified in the operation could not be evaluated.
- ConsumedCapacity
The capacity units consumed by an operation. The data returned includes the total provisioned throughput consumed, along with statistics for the table and any indexes involved in the operation.
ConsumedCapacity is only returned if the request asked for it. For more information, see Provisioned Throughput in the Amazon DynamoDB Developer Guide.
- ContinuousBackupsDescription
Represents the continuous backups and point in time recovery settings on the table.
- ContinuousBackupsUnavailableException
Backups have not yet been enabled for this table.
- ContributorInsightsSummary
Represents a Contributor Insights summary entry.
- CreateBackupRequest
Container for the parameters to the CreateBackup operation. Creates a backup for an existing table.
Each time you create an on-demand backup, the entire table data is backed up. There is no limit to the number of on-demand backups that can be taken.
When you create an on-demand backup, a time marker of the request is cataloged, and the backup is created asynchronously, by applying all changes until the time of the request to the last full table snapshot. Backup requests are processed instantaneously and become available for restore within minutes.
You can call CreateBackup at a maximum rate of 50 times per second.
All backups in DynamoDB work without consuming any provisioned throughput on the table.
If you submit a backup request on 2018-12-14 at 14:25:00, the backup is guaranteed to contain all data committed to the table up to 14:24:00, and data committed after 14:26:00 will not be. The backup might contain data modifications made between 14:24:00 and 14:26:00. On-demand backup does not support causal consistency.
Along with data, the following are also included on the backups:
Global secondary indexes (GSIs)
Local secondary indexes (LSIs)
Streams
Provisioned read and write capacity
- CreateBackupResponse
This is the response object from the CreateBackup operation.
- CreateGlobalSecondaryIndexAction
Represents a new global secondary index to be added to an existing table.
- CreateGlobalTableRequest
Container for the parameters to the CreateGlobalTable operation. Creates a global table from an existing table. A global table creates a replication relationship between two or more DynamoDB tables with the same table name in the provided Regions.
Note: This operation only applies to Version 2017.11.29 of global tables.
If you want to add a new replica table to a global table, each of the following conditions must be true:
The table must have the same primary key as all of the other replicas.
The table must have the same name as all of the other replicas.
The table must have DynamoDB Streams enabled, with the stream containing both the new and the old images of the item.
None of the replica tables in the global table can contain any data.
If global secondary indexes are specified, then the following conditions must also be met:
The global secondary indexes must have the same name.
The global secondary indexes must have the same hash key and sort key (if present).
If local secondary indexes are specified, then the following conditions must also be met:
The local secondary indexes must have the same name.
The local secondary indexes must have the same hash key and sort key (if present).
Write capacity settings should be set consistently across your replica tables and secondary indexes. DynamoDB strongly recommends enabling auto scaling to manage the write capacity settings for all of your global tables replicas and indexes.
If you prefer to manage write capacity settings manually, you should provision equal replicated write capacity units to your replica tables. You should also provision equal replicated write capacity units to matching secondary indexes across your global table.
- CreateGlobalTableResponse
This is the response object from the CreateGlobalTable operation.
- CreateReplicaAction
Represents a replica to be added.
- CreateReplicationGroupMemberAction
Represents a replica to be created.
- CreateTableRequest
Container for the parameters to the CreateTable operation. The CreateTable operation adds a new table to your account. In an Amazon Web Services account, table names must be unique within each Region. That is, you can have two tables with the same name if you create the tables in different Regions.
CreateTable is an asynchronous operation. Upon receiving a CreateTable request, DynamoDB immediately returns a response with a TableStatus of CREATING. After the table is created, DynamoDB sets the TableStatus to ACTIVE. You can perform read and write operations only on an ACTIVE table.
You can optionally define secondary indexes on the new table, as part of the CreateTable operation. If you want to create multiple tables with secondary indexes on them, you must create the tables sequentially. Only one table with secondary indexes can be in the CREATING state at any given time.
You can use the DescribeTable action to check the table status.
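A sketch of the create-then-poll pattern in C# with the AWS SDK for .NET (the Orders table and its key attributes are hypothetical):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

var client = new AmazonDynamoDBClient();

// Illustrative table with a composite primary key, billed on demand.
await client.CreateTableAsync(new CreateTableRequest
{
    TableName = "Orders",
    AttributeDefinitions = new List<AttributeDefinition>
    {
        new AttributeDefinition { AttributeName = "CustomerId", AttributeType = ScalarAttributeType.S },
        new AttributeDefinition { AttributeName = "OrderId", AttributeType = ScalarAttributeType.S }
    },
    KeySchema = new List<KeySchemaElement>
    {
        new KeySchemaElement { AttributeName = "CustomerId", KeyType = KeyType.HASH },   // partition key
        new KeySchemaElement { AttributeName = "OrderId", KeyType = KeyType.RANGE }      // sort key
    },
    BillingMode = BillingMode.PAY_PER_REQUEST
});

// CreateTable is asynchronous: poll DescribeTable until the table is ACTIVE.
TableStatus status;
do
{
    await Task.Delay(TimeSpan.FromSeconds(2));
    var describe = await client.DescribeTableAsync(new DescribeTableRequest { TableName = "Orders" });
    status = describe.Table.TableStatus;
} while (status != TableStatus.ACTIVE);

Console.WriteLine("Table is ACTIVE.");
```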
- CreateTableResponse
Represents the output of a CreateTable operation.
- CsvOptions
Processing options for the CSV file being imported.
- Delete
Represents a request to perform a DeleteItem operation.
- DeleteBackupRequest
Container for the parameters to the DeleteBackup operation. Deletes an existing backup of a table.
You can call DeleteBackup at a maximum rate of 10 times per second.
- DeleteBackupResponse
This is the response object from the DeleteBackup operation.
- DeleteGlobalSecondaryIndexAction
Represents a global secondary index to be deleted from an existing table.
- DeleteItemRequest
Container for the parameters to the DeleteItem operation. Deletes a single item in a table by primary key. You can perform a conditional delete operation that deletes the item if it exists, or if it has an expected attribute value.
In addition to deleting an item, you can also return the item's attribute values in the same operation, using the ReturnValues parameter.
Unless you specify conditions, the DeleteItem is an idempotent operation; running it multiple times on the same item or attribute does not result in an error response.
Conditional deletes are useful for deleting items only if specific conditions are met. If those conditions are met, DynamoDB performs the delete. Otherwise, the item is not deleted.
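A conditional delete that also returns the deleted attributes could look like this in C# (AWS SDK for .NET; the Orders table, key attributes, and OrderStatus condition are hypothetical):

```csharp
using System;
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

var client = new AmazonDynamoDBClient();

try
{
    // Delete only if the item is still marked as a draft (names are illustrative).
    var response = await client.DeleteItemAsync(new DeleteItemRequest
    {
        TableName = "Orders",
        Key = new Dictionary<string, AttributeValue>
        {
            ["CustomerId"] = new AttributeValue { S = "C-100" },
            ["OrderId"] = new AttributeValue { S = "O-42" }
        },
        ConditionExpression = "OrderStatus = :draft",
        ExpressionAttributeValues = new Dictionary<string, AttributeValue>
        {
            [":draft"] = new AttributeValue { S = "DRAFT" }
        },
        ReturnValues = ReturnValue.ALL_OLD   // return the deleted item's attributes
    });

    Console.WriteLine($"Deleted item had {response.Attributes.Count} attributes.");
}
catch (ConditionalCheckFailedException)
{
    Console.WriteLine("Condition not met; item was not deleted.");
}
```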
- DeleteItemResponse
Represents the output of a DeleteItem operation.
- DeleteReplicaAction
Represents a replica to be removed.
- DeleteReplicationGroupMemberAction
Represents a replica to be deleted.
- DeleteRequest
Represents a request to perform a DeleteItem operation on an item.
- DeleteTableRequest
Container for the parameters to the DeleteTable operation. The DeleteTable operation deletes a table and all of its items. After a DeleteTable request, the specified table is in the DELETING state until DynamoDB completes the deletion. If the table is in the ACTIVE state, you can delete it. If a table is in CREATING or UPDATING states, then DynamoDB returns a ResourceInUseException. If the specified table does not exist, DynamoDB returns a ResourceNotFoundException. If the table is already in the DELETING state, no error is returned.
Note: DynamoDB might continue to accept data read and write operations, such as GetItem and PutItem, on a table in the DELETING state until the table deletion is complete.
When you delete a table, any indexes on that table are also deleted.
If you have DynamoDB Streams enabled on the table, then the corresponding stream on that table goes into the DISABLED state, and the stream is automatically deleted after 24 hours.
Use the DescribeTable action to check the status of the table.
- DeleteTableResponse
Represents the output of a DeleteTable operation.
- DescribeBackupRequest
Container for the parameters to the DescribeBackup operation. Describes an existing backup of a table.
You can call DescribeBackup at a maximum rate of 10 times per second.
- DescribeBackupResponse
This is the response object from the DescribeBackup operation.
- DescribeContinuousBackupsRequest
Container for the parameters to the DescribeContinuousBackups operation. Checks the status of continuous backups and point in time recovery on the specified table. Continuous backups are ENABLED on all tables at table creation. If point in time recovery is enabled, PointInTimeRecoveryStatus will be set to ENABLED.
After continuous backups and point in time recovery are enabled, you can restore to any point in time within EarliestRestorableDateTime and LatestRestorableDateTime.
LatestRestorableDateTime is typically 5 minutes before the current time. You can restore your table to any point in time during the last 35 days.
You can call DescribeContinuousBackups at a maximum rate of 10 times per second.
- DescribeContinuousBackupsResponse
This is the response object from the DescribeContinuousBackups operation.
- DescribeContributorInsightsRequest
Container for the parameters to the DescribeContributorInsights operation. Returns information about contributor insights, for a given table or global secondary index.
- DescribeContributorInsightsResponse
This is the response object from the DescribeContributorInsights operation.
- DescribeEndpointsRequest
Container for the parameters to the DescribeEndpoints operation. Returns the regional endpoint information.
- DescribeEndpointsResponse
This is the response object from the DescribeEndpoints operation.
- DescribeExportRequest
Container for the parameters to the DescribeExport operation. Describes an existing table export.
- DescribeExportResponse
This is the response object from the DescribeExport operation.
- DescribeGlobalTableRequest
Container for the parameters to the DescribeGlobalTable operation. Returns information about the specified global table.
Note: This operation only applies to Version 2017.11.29 of global tables. If you are using global tables Version 2019.11.21 you can use DescribeTable instead.
- DescribeGlobalTableResponse
This is the response object from the DescribeGlobalTable operation.
- DescribeGlobalTableSettingsRequest
Container for the parameters to the DescribeGlobalTableSettings operation. Describes Region-specific settings for a global table.
Note: This operation only applies to Version 2017.11.29 of global tables.
- DescribeGlobalTableSettingsResponse
This is the response object from the DescribeGlobalTableSettings operation.
- DescribeImportRequest
Container for the parameters to the DescribeImport operation. Represents the properties of the import.
- DescribeImportResponse
This is the response object from the DescribeImport operation.
- DescribeKinesisStreamingDestinationRequest
Container for the parameters to the DescribeKinesisStreamingDestination operation. Returns information about the status of Kinesis streaming.
- DescribeKinesisStreamingDestinationResponse
This is the response object from the DescribeKinesisStreamingDestination operation.
- DescribeLimitsRequest
Container for the parameters to the DescribeLimits operation. Returns the current provisioned-capacity quotas for your Amazon Web Services account in a Region, both for the Region as a whole and for any one DynamoDB table that you create there.
When you establish an Amazon Web Services account, the account has initial quotas on the maximum read capacity units and write capacity units that you can provision across all of your DynamoDB tables in a given Region. Also, there are per-table quotas that apply when you create a table there. For more information, see Service, Account, and Table Quotas page in the Amazon DynamoDB Developer Guide.
Although you can increase these quotas by filing a case at Amazon Web Services Support Center, obtaining the increase is not instantaneous. The DescribeLimits action lets you write code to compare the capacity you are currently using to those quotas imposed by your account so that you have enough time to apply for an increase before you hit a quota.
For example, you could use one of the Amazon Web Services SDKs to do the following:
Call DescribeLimits for a particular Region to obtain your current account quotas on provisioned capacity there.
Create a variable to hold the aggregate read capacity units provisioned for all your tables in that Region, and one to hold the aggregate write capacity units. Zero them both.
Call ListTables to obtain a list of all your DynamoDB tables.
For each table name listed by ListTables, do the following:
Call DescribeTable with the table name.
Use the data returned by DescribeTable to add the read capacity units and write capacity units provisioned for the table itself to your variables.
If the table has one or more global secondary indexes (GSIs), loop over these GSIs and add their provisioned capacity values to your variables as well.
Report the account quotas for that Region returned by DescribeLimits, along with the total current provisioned capacity levels you have calculated.
This will let you see whether you are getting close to your account-level quotas.
The per-table quotas apply only when you are creating a new table. They restrict the sum of the provisioned capacity of the new table itself and all its global secondary indexes.
For existing tables and their GSIs, DynamoDB doesn't let you increase provisioned capacity extremely rapidly, but the only quota that applies is that the aggregate provisioned capacity over all your tables and GSIs cannot exceed either of the per-account quotas.
Note: DescribeLimits should only be called periodically. You can expect throttling errors if you call it more than once in a minute.
The DescribeLimits Request element has no content.
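The walkthrough above maps onto the SDK roughly as follows (C#, AWS SDK for .NET; error handling and the once-per-minute guidance are omitted for brevity):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

var client = new AmazonDynamoDBClient();

// Step 1: account-level quotas for this Region.
var limits = await client.DescribeLimitsAsync(new DescribeLimitsRequest());

// Step 2: aggregate provisioned capacity across all tables and their GSIs.
long totalReads = 0, totalWrites = 0;
string lastTable = null;
do
{
    var tables = await client.ListTablesAsync(new ListTablesRequest { ExclusiveStartTableName = lastTable });
    foreach (var tableName in tables.TableNames)
    {
        var table = (await client.DescribeTableAsync(new DescribeTableRequest { TableName = tableName })).Table;
        totalReads += table.ProvisionedThroughput?.ReadCapacityUnits ?? 0;
        totalWrites += table.ProvisionedThroughput?.WriteCapacityUnits ?? 0;

        foreach (var gsi in table.GlobalSecondaryIndexes ?? new List<GlobalSecondaryIndexDescription>())
        {
            totalReads += gsi.ProvisionedThroughput?.ReadCapacityUnits ?? 0;
            totalWrites += gsi.ProvisionedThroughput?.WriteCapacityUnits ?? 0;
        }
    }
    lastTable = tables.LastEvaluatedTableName;
} while (lastTable != null);

// Step 3: compare usage against the account quotas.
Console.WriteLine($"Account read quota:  {limits.AccountMaxReadCapacityUnits}, in use: {totalReads}");
Console.WriteLine($"Account write quota: {limits.AccountMaxWriteCapacityUnits}, in use: {totalWrites}");
```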
- DescribeLimitsResponse
Represents the output of a DescribeLimits operation.
- DescribeStreamRequest
Container for the parameters to the DescribeStream operation. Returns information about a stream, including the current status of the stream, its Amazon Resource Name (ARN), the composition of its shards, and its corresponding DynamoDB table.
Note: You can call DescribeStream at a maximum rate of 10 times per second.
Each shard in the stream has a SequenceNumberRange associated with it. If the SequenceNumberRange has a StartingSequenceNumber but no EndingSequenceNumber, then the shard is still open (able to receive more stream records). If both StartingSequenceNumber and EndingSequenceNumber are present, then that shard is closed and can no longer receive more data.
- DescribeStreamResponse
Represents the output of a DescribeStream operation.
- DescribeTableReplicaAutoScalingRequest
Container for the parameters to the DescribeTableReplicaAutoScaling operation. Describes auto scaling settings across replicas of the global table at once.
Note: This operation only applies to Version 2019.11.21 of global tables.
- DescribeTableReplicaAutoScalingResponse
This is the response object from the DescribeTableReplicaAutoScaling operation.
- DescribeTableRequest
Container for the parameters to the DescribeTable operation. Returns information about the table, including the current status of the table, when it was created, the primary key schema, and any indexes on the table.
Note: If you issue a DescribeTable request immediately after a CreateTable request, DynamoDB might return a ResourceNotFoundException. This is because DescribeTable uses an eventually consistent query, and the metadata for your table might not be available at that moment. Wait for a few seconds, and then try the DescribeTable request again.
- DescribeTableResponse
Represents the output of a DescribeTable operation.
- DescribeTimeToLiveRequest
Container for the parameters to the DescribeTimeToLive operation. Gives a description of the Time to Live (TTL) status on the specified table.
- DescribeTimeToLiveResponse
This is the response object from the DescribeTimeToLive operation.
- DisableKinesisStreamingDestinationRequest
Container for the parameters to the DisableKinesisStreamingDestination operation. Stops replication from the DynamoDB table to the Kinesis data stream. This is done without deleting either of the resources.
- DisableKinesisStreamingDestinationResponse
This is the response object from the DisableKinesisStreamingDestination operation.
- DuplicateItemException
There was an attempt to insert an item with the same primary key as an item that already exists in the DynamoDB table.
- DynamoDBv2PaginatorFactory
Paginators for the DynamoDBv2 service
- EnableKinesisStreamingDestinationRequest
Container for the parameters to the EnableKinesisStreamingDestination operation. Starts table data replication to the specified Kinesis data stream at a timestamp chosen during the enable workflow. If this operation doesn't return results immediately, use DescribeKinesisStreamingDestination to check if streaming to the Kinesis data stream is ACTIVE.
- EnableKinesisStreamingDestinationResponse
This is the response object from the EnableKinesisStreamingDestination operation.
- Endpoint
Endpoint information details.
- ExecuteStatementRequest
Container for the parameters to the ExecuteStatement operation. This operation allows you to perform reads and singleton writes on data stored in DynamoDB, using PartiQL.
For PartiQL reads (SELECT statement), if the total number of processed items exceeds the maximum dataset size limit of 1 MB, the read stops and results are returned to the user as a LastEvaluatedKey value to continue the read in a subsequent operation. If the filter criteria in the WHERE clause does not match any data, the read will return an empty result set.
A single SELECT statement response can return up to the maximum number of items (if using the Limit parameter) or a maximum of 1 MB of data (and then apply any filtering to the results using the WHERE clause). If LastEvaluatedKey is present in the response, you need to paginate the result set.
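A paginated PartiQL read might be written as follows in C# (AWS SDK for .NET); this sketch paginates on the response's NextToken, and the Orders table and CustomerId attribute are hypothetical:

```csharp
using System;
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

var client = new AmazonDynamoDBClient();

var items = new List<Dictionary<string, AttributeValue>>();
string nextToken = null;

// Page through a PartiQL SELECT; table and attribute names are illustrative.
do
{
    var response = await client.ExecuteStatementAsync(new ExecuteStatementRequest
    {
        Statement = "SELECT * FROM Orders WHERE CustomerId = ?",
        Parameters = new List<AttributeValue> { new AttributeValue { S = "C-100" } },
        NextToken = nextToken
    });

    items.AddRange(response.Items);
    nextToken = response.NextToken;   // non-null while more pages remain
} while (nextToken != null);

Console.WriteLine($"Read {items.Count} items.");
```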
- ExecuteStatementResponse
This is the response object from the ExecuteStatement operation.
- ExecuteTransactionRequest
Container for the parameters to the ExecuteTransaction operation. This operation allows you to perform transactional reads or writes on data stored in DynamoDB, using PartiQL.
Note: The entire transaction must consist of either read statements or write statements; you cannot mix both in one transaction. The EXISTS function is an exception and can be used to check the condition of specific attributes of the item in a similar manner to ConditionCheck in the TransactWriteItems API.
- ExecuteTransactionResponse
This is the response object from the ExecuteTransaction operation.
- ExpectedAttributeValue
Represents a condition to be compared with an attribute value. This condition can be used with DeleteItem, PutItem, or UpdateItem operations; if the comparison evaluates to true, the operation succeeds; if not, the operation fails. You can use ExpectedAttributeValue in one of two different ways:
Use AttributeValueList to specify one or more values to compare against an attribute. Use ComparisonOperator to specify how you want to perform the comparison. If the comparison evaluates to true, then the conditional operation succeeds.
Use Value to specify a value that DynamoDB will compare against an attribute. If the values match, then ExpectedAttributeValue evaluates to true and the conditional operation succeeds. Optionally, you can also set Exists to false, indicating that you do not expect to find the attribute value in the table. In this case, the conditional operation succeeds only if the comparison evaluates to false.
Value and Exists are incompatible with AttributeValueList and ComparisonOperator. Note that if you use both sets of parameters at once, DynamoDB will return a ValidationException exception.
- ExpiredIteratorException
The shard iterator has expired and can no longer be used to retrieve stream records. A shard iterator expires 15 minutes after it is retrieved using the GetShardIterator action.
- ExportConflictException
There was a conflict when writing to the specified S3 bucket.
- ExportDescription
Represents the properties of the exported table.
- ExportNotFoundException
The specified export was not found.
- ExportSummary
Summary information about an export task.
- ExportTableToPointInTimeRequest
Container for the parameters to the ExportTableToPointInTime operation. Exports table data to an S3 bucket. The table must have point in time recovery enabled, and you can export data from any time within the point in time recovery window.
- ExportTableToPointInTimeResponse
This is the response object from the ExportTableToPointInTime operation.
- FailureException
Represents a failure of a contributor insights operation.
- Get
Specifies an item and related attribute values to retrieve in a TransactGetItem object.
- GetItemRequest
Container for the parameters to the GetItem operation. The GetItem operation returns a set of attributes for the item with the given primary key. If there is no matching item, GetItem does not return any data and there will be no Item element in the response.
GetItem provides an eventually consistent read by default. If your application requires a strongly consistent read, set ConsistentRead to true. Although a strongly consistent read might take more time than an eventually consistent read, it always returns the last updated value.
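A strongly consistent single-item read could look like this in C# (AWS SDK for .NET; the Orders table, its key attributes, and the projected attributes are hypothetical):

```csharp
using System;
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

var client = new AmazonDynamoDBClient();

// Strongly consistent read of a single item (table and key names are illustrative).
var response = await client.GetItemAsync(new GetItemRequest
{
    TableName = "Orders",
    Key = new Dictionary<string, AttributeValue>
    {
        ["CustomerId"] = new AttributeValue { S = "C-100" },
        ["OrderId"] = new AttributeValue { S = "O-42" }
    },
    ConsistentRead = true,
    ProjectionExpression = "OrderId, OrderStatus, Total"
});

if (response.Item == null || response.Item.Count == 0)
    Console.WriteLine("No matching item.");
else
    Console.WriteLine($"OrderStatus = {response.Item["OrderStatus"].S}");
```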
- GetItemResponse
Represents the output of a GetItem operation.
- GetRecordsRequest
Container for the parameters to the GetRecords operation. Retrieves the stream records from a given shard.
Specify a shard iterator using the ShardIterator parameter. The shard iterator specifies the position in the shard from which you want to start reading stream records sequentially. If there are no stream records available in the portion of the shard that the iterator points to, GetRecords returns an empty list. Note that it might take multiple calls to get to a portion of the shard that contains stream records.
Note: GetRecords can retrieve a maximum of 1 MB of data or 1000 stream records, whichever comes first.
- GetRecordsResponse
Represents the output of a GetRecords operation.
- GetShardIteratorRequest
Container for the parameters to the GetShardIterator operation. Returns a shard iterator. A shard iterator provides information about how to retrieve the stream records from within a shard. Use the shard iterator in a subsequent GetRecords request to read the stream records from the shard.
Note: A shard iterator expires 15 minutes after it is returned to the requester.
- GetShardIteratorResponse
Represents the output of a GetShardIterator operation.
- GlobalSecondaryIndex
Represents the properties of a global secondary index.
- GlobalSecondaryIndexAutoScalingUpdate
Represents the auto scaling settings of a global secondary index for a global table that will be modified.
- GlobalSecondaryIndexDescription
Represents the properties of a global secondary index.
- GlobalSecondaryIndexInfo
Represents the properties of a global secondary index for the table when the backup was created.
- GlobalSecondaryIndexUpdate
Represents one of the following:
A new global secondary index to be added to an existing table.
New provisioned throughput parameters for an existing global secondary index.
An existing global secondary index to be removed from an existing table.
- GlobalTable
Represents the properties of a global table.
- GlobalTableAlreadyExistsException
The specified global table already exists.
- GlobalTableDescription
Contains details about the global table.
- GlobalTableGlobalSecondaryIndexSettingsUpdate
Represents the settings of a global secondary index for a global table that will be modified.
- GlobalTableNotFoundException
The specified global table does not exist.
- IdempotentParameterMismatchException
DynamoDB rejected the request because you retried a request with a different payload but with an idempotent token that was already used.
- Identity
Contains details about the type of identity that made the request.
- ImportConflictException
There was a conflict when importing from the specified S3 source. This can occur when the current import conflicts with a previous import request that had the same client token.
- ImportNotFoundException
The specified import was not found.
- ImportSummary
Summary information about the source file for the import.
- ImportTableDescription
Represents the properties of the table being imported into.
- ImportTableRequest
Container for the parameters to the ImportTable operation. Imports table data from an S3 bucket.
- ImportTableResponse
This is the response object from the ImportTable operation.
- IndexNotFoundException
The operation tried to access a nonexistent index.
- InputFormatOptions
The format options for the data that was imported into the target table. There is one value, CsvOption.
- InternalServerErrorException
An error occurred on the server side.
- InvalidExportTimeException
The specified ExportTime is outside of the point in time recovery window.
- InvalidRestoreTimeException
An invalid restore time was specified. RestoreDateTime must be between EarliestRestorableDateTime and LatestRestorableDateTime.
- ItemCollectionMetrics
Information about item collections, if any, that were affected by the operation.
ItemCollectionMetrics is only returned if the request asked for it. If the table does not have any local secondary indexes, this information is not returned in the response.
- ItemCollectionSizeLimitExceededException
An item collection is too large. This exception is only returned for tables that have one or more local secondary indexes.
- ItemResponse
Details for the requested item.
- KeySchemaElement
Represents a single element of a key schema. A key schema specifies the attributes that make up the primary key of a table, or the key attributes of an index.
A KeySchemaElement represents exactly one attribute of the primary key. For example, a simple primary key would be represented by one KeySchemaElement (for the partition key). A composite primary key would require one KeySchemaElement for the partition key, and another KeySchemaElement for the sort key.
A KeySchemaElement must be a scalar, top-level attribute (not a nested attribute). The data type must be one of String, Number, or Binary. The attribute cannot be nested within a List or a Map.
- KeysAndAttributes
Represents a set of primary keys and, for each key, the attributes to retrieve from the table.
For each primary key, you must provide all of the key attributes. For example, with a simple primary key, you only need to provide the partition key. For a composite primary key, you must provide both the partition key and the sort key.
- KinesisDataStreamDestination
Describes a Kinesis data stream destination.
- LimitExceededException
There is no limit to the number of daily on-demand backups that can be taken.
Up to 500 simultaneous table operations are allowed per account. These operations include CreateTable, UpdateTable, DeleteTable, UpdateTimeToLive, RestoreTableFromBackup, and RestoreTableToPointInTime.
The only exception is when you are creating a table with one or more secondary indexes. You can have up to 250 such requests running at a time; however, if the table or index specifications are complex, DynamoDB might temporarily reduce the number of concurrent operations.
There is a soft account quota of 2,500 tables.
- ListBackupsRequest
Container for the parameters to the ListBackups operation. List backups associated with an Amazon Web Services account. To list backups for a given table, specify TableName. ListBackups returns a paginated list of results with at most 1 MB worth of items in a page. You can also specify a maximum number of entries to be returned in a page.
In the request, start time is inclusive, but end time is exclusive. Note that these boundaries are for the time at which the original backup was requested.
You can call ListBackups a maximum of five times per second.
- ListBackupsResponse
This is the response object from the ListBackups operation.
- ListContributorInsightsRequest
Container for the parameters to the ListContributorInsights operation. Returns a list of ContributorInsightsSummary for a table and all its global secondary indexes.
- ListContributorInsightsResponse
This is the response object from the ListContributorInsights operation.
- ListExportsRequest
Container for the parameters to the ListExports operation. Lists completed exports within the past 90 days.
- ListExportsResponse
This is the response object from the ListExports operation.
- ListGlobalTablesRequest
Container for the parameters to the ListGlobalTables operation. Lists all global tables that have a replica in the specified Region.
Note: This operation only applies to Version 2017.11.29 of global tables.
- ListGlobalTablesResponse
This is the response object from the ListGlobalTables operation.
- ListImportsRequest
Container for the parameters to the ListImports operation. Lists completed imports within the past 90 days.
- ListImportsResponse
This is the response object from the ListImports operation.
- ListStreamsRequest
Container for the parameters to the ListStreams operation. Returns an array of stream ARNs associated with the current account and endpoint. If the TableName parameter is present, then ListStreams will return only the stream ARNs for that table.
Note: You can call ListStreams at a maximum rate of 5 times per second.
- ListStreamsResponse
Represents the output of a ListStreams operation.
- ListTablesRequest
Container for the parameters to the ListTables operation. Returns an array of table names associated with the current account and endpoint. The output from ListTables is paginated, with each page returning a maximum of 100 table names.
- ListTablesResponse
Represents the output of a ListTables operation.
- ListTagsOfResourceRequest
Container for the parameters to the ListTagsOfResource operation. List all tags on an Amazon DynamoDB resource. You can call ListTagsOfResource up to 10 times per second, per account.
For an overview on tagging DynamoDB resources, see Tagging for DynamoDB in the Amazon DynamoDB Developer Guide.
- ListTagsOfResourceResponse
This is the response object from the ListTagsOfResource operation.
- LocalSecondaryIndex
Represents the properties of a local secondary index.
- LocalSecondaryIndexDescription
Represents the properties of a local secondary index.
- LocalSecondaryIndexInfo
Represents the properties of a local secondary index for the table when the backup was created.
- ParameterizedStatement
Represents a PartiQL statement that uses parameters.
- PointInTimeRecoveryDescription
The description of the point in time settings applied to the table.
- PointInTimeRecoverySpecification
Represents the settings used to enable point in time recovery.
- PointInTimeRecoveryUnavailableException
Point in time recovery has not yet been enabled for this source table.
- Projection
Represents attributes that are copied (projected) from the table into an index. These are in addition to the primary key attributes and index key attributes, which are automatically projected.
- ProvisionedThroughput
Represents the provisioned throughput settings for a specified table or index. The settings can be modified using the UpdateTable operation.
For current minimum and maximum provisioned throughput values, see Service, Account, and Table Quotas in the Amazon DynamoDB Developer Guide.
- ProvisionedThroughputDescription
Represents the provisioned throughput settings for the table, consisting of read and write capacity units, along with data about increases and decreases.
- ProvisionedThroughputExceededException
Your request rate is too high. The Amazon Web Services SDKs for DynamoDB automatically retry requests that receive this exception. Your request is eventually successful, unless your retry queue is too large to finish. Reduce the frequency of requests and use exponential backoff. For more information, go to Error Retries and Exponential Backoff in the Amazon DynamoDB Developer Guide.
- ProvisionedThroughputOverride
Replica-specific provisioned throughput settings. If not specified, uses the source table's provisioned throughput settings.
- Put
Represents a request to perform a PutItem operation.
- PutItemRequest
Container for the parameters to the PutItem operation. Creates a new item, or replaces an old item with a new item. If an item that has the same primary key as the new item already exists in the specified table, the new item completely replaces the existing item. You can perform a conditional put operation (add a new item if one with the specified primary key doesn't exist), or replace an existing item if it has certain attribute values. You can return the item's attribute values in the same operation, using the ReturnValues parameter.
When you add an item, the primary key attributes are the only required attributes. Attribute values cannot be null.
Empty String and Binary attribute values are allowed. Attribute values of type String and Binary must have a length greater than zero if the attribute is used as a key attribute for a table or index. Set type attributes cannot be empty.
Invalid requests with empty values will be rejected with a ValidationException exception.
Note: To prevent a new item from replacing an existing item, use a conditional expression that contains the attribute_not_exists function with the name of the attribute being used as the partition key for the table. Since every record must contain that attribute, the attribute_not_exists function will only succeed if no matching item exists.
For more information about PutItem, see Working with Items in the Amazon DynamoDB Developer Guide.
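The conditional-put pattern from the note above might be written as follows in C# (AWS SDK for .NET; the Orders table and its attributes are hypothetical):

```csharp
using System;
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

var client = new AmazonDynamoDBClient();

try
{
    // Write the item only if no item with this primary key already exists.
    await client.PutItemAsync(new PutItemRequest
    {
        TableName = "Orders",
        Item = new Dictionary<string, AttributeValue>
        {
            ["CustomerId"] = new AttributeValue { S = "C-100" },
            ["OrderId"] = new AttributeValue { S = "O-42" },
            ["OrderStatus"] = new AttributeValue { S = "DRAFT" },
            ["Total"] = new AttributeValue { N = "129.99" }
        },
        ConditionExpression = "attribute_not_exists(CustomerId)"
    });
}
catch (ConditionalCheckFailedException)
{
    Console.WriteLine("An item with this key already exists; nothing was overwritten.");
}
```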
- PutItemResponse
Represents the output of a PutItem operation.
- PutRequest
Represents a request to perform a PutItem operation on an item.
- QueryRequest
Container for the parameters to the Query operation. You must provide the name of the partition key attribute and a single value for that attribute. Query returns all items with that partition key value. Optionally, you can provide a sort key attribute and use a comparison operator to refine the search results.
Use the KeyConditionExpression parameter to provide a specific value for the partition key. The Query operation will return all of the items from the table or index with that partition key value. You can optionally narrow the scope of the Query operation by specifying a sort key value and a comparison operator in KeyConditionExpression. To further refine the Query results, you can optionally provide a FilterExpression. A FilterExpression determines which items within the results should be returned to you. All of the other results are discarded.
A Query operation always returns a result set. If no matching items are found, the result set will be empty. Queries that do not return results consume the minimum number of read capacity units for that type of read operation.
Note: DynamoDB calculates the number of read capacity units consumed based on item size, not on the amount of data that is returned to an application. The number of capacity units consumed will be the same whether you request all of the attributes (the default behavior) or just some of them (using a projection expression). The number will also be the same whether or not you use a FilterExpression.
Query results are always sorted by the sort key value. If the data type of the sort key is Number, the results are returned in numeric order; otherwise, the results are returned in order of UTF-8 bytes. By default, the sort order is ascending. To reverse the order, set the ScanIndexForward parameter to false.
A single Query operation will read up to the maximum number of items set (if using the Limit parameter) or a maximum of 1 MB of data and then apply any filtering to the results using FilterExpression. If LastEvaluatedKey is present in the response, you will need to paginate the result set. For more information, see Paginating the Results in the Amazon DynamoDB Developer Guide.
FilterExpression is applied after a Query finishes, but before the results are returned. A FilterExpression cannot contain partition key or sort key attributes. You need to specify those attributes in the KeyConditionExpression.
Note: A Query operation can return an empty result set and a LastEvaluatedKey if all the items read for the page of results are filtered out.
You can query a table, a local secondary index, or a global secondary index. For a query on a table or on a local secondary index, you can set the ConsistentRead parameter to true and obtain a strongly consistent result. Global secondary indexes support eventually consistent reads only, so do not specify ConsistentRead when querying a global secondary index.
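A paginated Query with a key condition and a filter might look like this in C# (AWS SDK for .NET; the Orders table and its attributes are hypothetical):

```csharp
using System;
using System.Collections.Generic;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

var client = new AmazonDynamoDBClient();

var items = new List<Dictionary<string, AttributeValue>>();
Dictionary<string, AttributeValue> startKey = null;

// Query one partition, newest sort-key values first, paginating on LastEvaluatedKey.
// Table and attribute names are illustrative.
do
{
    var response = await client.QueryAsync(new QueryRequest
    {
        TableName = "Orders",
        KeyConditionExpression = "CustomerId = :cid AND begins_with(OrderId, :prefix)",
        FilterExpression = "Total > :minTotal",        // applied after the key condition
        ExpressionAttributeValues = new Dictionary<string, AttributeValue>
        {
            [":cid"] = new AttributeValue { S = "C-100" },
            [":prefix"] = new AttributeValue { S = "O-" },
            [":minTotal"] = new AttributeValue { N = "50" }
        },
        ScanIndexForward = false,                      // descending sort-key order
        ExclusiveStartKey = startKey
    });

    items.AddRange(response.Items);
    startKey = response.LastEvaluatedKey;              // empty when there are no more pages
} while (startKey != null && startKey.Count > 0);

Console.WriteLine($"Matched {items.Count} items.");
```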
- QueryResponse
Represents the output of a Query operation.
- Record
A description of a unique event within a stream.
- Replica
Represents the properties of a replica.
- ReplicaAlreadyExistsException
The specified replica is already part of the global table.
- ReplicaAutoScalingDescription
Represents the auto scaling settings of the replica.
- ReplicaAutoScalingUpdate
Represents the auto scaling settings of a replica that will be modified.
- ReplicaDescription
Contains the details of the replica.
- ReplicaGlobalSecondaryIndex
Represents the properties of a replica global secondary index.
- ReplicaGlobalSecondaryIndexAutoScalingDescription
Represents the auto scaling configuration for a replica global secondary index.
- ReplicaGlobalSecondaryIndexAutoScalingUpdate
Represents the auto scaling settings of a global secondary index for a replica that will be modified.
- ReplicaGlobalSecondaryIndexDescription
Represents the properties of a replica global secondary index.
- ReplicaGlobalSecondaryIndexSettingsDescription
Represents the properties of a global secondary index.
- ReplicaGlobalSecondaryIndexSettingsUpdate
Represents the settings of a global secondary index for a global table that will be modified.
- ReplicaNotFoundException
The specified replica is no longer part of the global table.
- ReplicaSettingsDescription
Represents the properties of a replica.
- ReplicaSettingsUpdate
Represents the settings for a global table in a Region that will be modified.
- ReplicaUpdate
Represents one of the following:
A new replica to be added to an existing global table.
New parameters for an existing replica.
An existing replica to be removed from an existing global table.
- ReplicationGroupUpdate
Represents one of the following:
A new replica to be added to an existing regional table or global table. This request invokes the CreateTableReplica action in the destination Region.
New parameters for an existing replica. This request invokes the UpdateTable action in the destination Region.
An existing replica to be deleted. The request invokes the DeleteTableReplica action in the destination Region, deleting the replica and all of its items in the destination Region.
Note: When you manually remove a table or global table replica, you do not automatically remove any associated scalable targets, scaling policies, or CloudWatch alarms.
- RequestLimitExceededException
Throughput exceeds the current throughput quota for your account. Please contact Amazon Web Services Support to request a quota increase.
- ResourceInUseException
The operation conflicts with the resource's availability. For example, you attempted to recreate an existing table, or tried to delete a table currently in the CREATING state.
- ResourceNotFoundException
The operation tried to access a nonexistent table or index. The resource might not be specified correctly, or its status might not be ACTIVE.
- RestoreSummary
Contains details for the restore.
- RestoreTableFromBackupRequest
Container for the parameters to the RestoreTableFromBackup operation. Creates a new table from an existing backup. Any number of users can execute up to 4 concurrent restores (any type of restore) in a given account.
You can call RestoreTableFromBackup at a maximum rate of 10 times per second.
You must manually set up the following on the restored table:
Auto scaling policies
IAM policies
Amazon CloudWatch metrics and alarms
Tags
Stream settings
Time to Live (TTL) settings
- RestoreTableFromBackupResponse
This is the response object from the RestoreTableFromBackup operation.
- RestoreTableToPointInTimeRequest
Container for the parameters to the RestoreTableToPointInTime operation. Restores the specified table to the specified point in time within EarliestRestorableDateTime and LatestRestorableDateTime. You can restore your table to any point in time during the last 35 days. Any number of users can execute up to 4 concurrent restores (any type of restore) in a given account.
When you restore using point in time recovery, DynamoDB restores your table data to the state based on the selected date and time (day:hour:minute:second) to a new table.
Along with data, the following are also included on the new restored table using point in time recovery:
Global secondary indexes (GSIs)
Local secondary indexes (LSIs)
Provisioned read and write capacity
Encryption settings
All these settings come from the current settings of the source table at the time of restore.
You must manually set up the following on the restored table:
Auto scaling policies
IAM policies
Amazon CloudWatch metrics and alarms
Tags
Stream settings
Time to Live (TTL) settings
Point in time recovery settings
- RestoreTableToPointInTimeResponse
This is the response object from the RestoreTableToPointInTime operation.
- S3BucketSource
The S3 bucket that is being imported from.
- SSEDescription
The description of the server-side encryption status on the specified table.
- SSESpecification
Represents the settings used to enable server-side encryption.
- ScanRequest
Container for the parameters to the Scan operation. The Scan operation returns one or more items and item attributes by accessing every item in a table or a secondary index. To have DynamoDB return fewer items, you can provide a FilterExpression operation.
If the total number of scanned items exceeds the maximum dataset size limit of 1 MB, the scan stops and results are returned to the user as a LastEvaluatedKey value to continue the scan in a subsequent operation. The results also include the number of items exceeding the limit. A scan can result in no table data meeting the filter criteria.
A single Scan operation reads up to the maximum number of items set (if using the Limit parameter) or a maximum of 1 MB of data and then applies any filtering to the results using FilterExpression. If LastEvaluatedKey is present in the response, you need to paginate the result set. For more information, see Paginating the Results in the Amazon DynamoDB Developer Guide.
Scan operations proceed sequentially; however, for faster performance on a large table or secondary index, applications can request a parallel Scan operation by providing the Segment and TotalSegments parameters. For more information, see Parallel Scan in the Amazon DynamoDB Developer Guide.
Scan uses eventually consistent reads when accessing the data in a table; therefore, the result set might not include the changes to data in the table immediately before the operation began. If you need a consistent copy of the data, as of the time that the Scan begins, you can set the ConsistentRead parameter to true.
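A parallel Scan using Segment and TotalSegments could be sketched as follows in C# (AWS SDK for .NET; the Orders table is hypothetical and the segment count is arbitrary):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

var client = new AmazonDynamoDBClient();
const int totalSegments = 4;

// Scan one logical segment of the table, following LastEvaluatedKey until the segment is exhausted.
async Task<List<Dictionary<string, AttributeValue>>> ScanSegmentAsync(int segment)
{
    var items = new List<Dictionary<string, AttributeValue>>();
    Dictionary<string, AttributeValue> startKey = null;
    do
    {
        var response = await client.ScanAsync(new ScanRequest
        {
            TableName = "Orders",              // illustrative table name
            Segment = segment,
            TotalSegments = totalSegments,
            ExclusiveStartKey = startKey
        });
        items.AddRange(response.Items);
        startKey = response.LastEvaluatedKey;
    } while (startKey != null && startKey.Count > 0);
    return items;
}

// Run the segments in parallel and combine the results.
var results = await Task.WhenAll(Enumerable.Range(0, totalSegments).Select(ScanSegmentAsync));
Console.WriteLine($"Scanned {results.Sum(r => r.Count)} items.");
```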
- ScanResponse
Represents the output of a Scan operation.
- SequenceNumberRange
The beginning and ending sequence numbers for the stream records contained within a shard.
- Shard
A uniquely identified group of stream records within a stream.
- SourceTableDetails
Contains the details of the table when the backup was created.
- SourceTableFeatureDetails
Contains the details of the features enabled on the table when the backup was created. For example, LSIs, GSIs, streams, TTL.
- StreamDescription
Represents all of the data describing a particular stream.
- StreamRecord
A description of a single data modification that was performed on an item in a DynamoDB table.
- StreamSpecification
Represents the DynamoDB Streams configuration for a table in DynamoDB.
- StreamSummary
Represents all of the data describing a particular stream.
- TableAlreadyExistsException
A target table with the specified name already exists.
- TableAutoScalingDescription
Represents the auto scaling configuration for a global table.
- TableClassSummary
Contains details of the table class.
- TableCreationParameters
The parameters for the table created as part of the import operation.
- TableDescription
Represents the properties of a table.
- TableInUseException
A target table with the specified name is either being created or deleted.
- TableNotFoundException
A source table with the name TableName does not currently exist within the subscriber's account or the subscriber is operating in the wrong Amazon Web Services Region.
- Tag
Describes a tag. A tag is a key-value pair. You can add up to 50 tags to a single DynamoDB table.
Amazon Web Services-assigned tag names and values are automatically assigned the aws: prefix, which the user cannot assign. Amazon Web Services-assigned tag names do not count towards the tag limit of 50. User-assigned tag names have the prefix user: in the Cost Allocation Report. You cannot backdate the application of a tag.
For an overview on tagging DynamoDB resources, see Tagging for DynamoDB in the Amazon DynamoDB Developer Guide.
- TagResourceRequest
Container for the parameters to the TagResource operation. Associate a set of tags with an Amazon DynamoDB resource. You can then activate these user-defined tags so that they appear on the Billing and Cost Management console for cost allocation tracking. You can call TagResource up to five times per second, per account.
For an overview on tagging DynamoDB resources, see Tagging for DynamoDB in the Amazon DynamoDB Developer Guide.
- TagResourceResponse
This is the response object from the TagResource operation.
- TimeToLiveDescription
The description of the Time to Live (TTL) status on the specified table.
- TimeToLiveSpecification
Represents the settings used to enable or disable Time to Live (TTL) for the specified table.
- TransactGetItem
Specifies an item to be retrieved as part of the transaction.
- TransactGetItemsRequest
Container for the parameters to the TransactGetItems operation. TransactGetItems is a synchronous operation that atomically retrieves multiple items from one or more tables (but not from indexes) in a single account and Region. A TransactGetItems call can contain up to 100 TransactGetItem objects, each of which contains a Get structure that specifies an item to retrieve from a table in the account and Region. A call to TransactGetItems cannot retrieve items from tables in more than one Amazon Web Services account or Region. The aggregate size of the items in the transaction cannot exceed 4 MB.
DynamoDB rejects the entire TransactGetItems request if any of the following is true:
A conflicting operation is in the process of updating an item to be read.
There is insufficient provisioned capacity for the transaction to be completed.
There is a user error, such as an invalid data format.
The aggregate size of the items in the transaction exceeds 4 MB.
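For illustration, a C# sketch that retrieves two items atomically with TransactGetItems; the Orders and Customers tables and their key attributes are hypothetical.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

public static class TransactGetSketch
{
    // Atomically reads one order and its customer in a single call.
    public static async Task<List<ItemResponse>> GetOrderWithCustomerAsync(
        IAmazonDynamoDB client, string orderId, string customerId)
    {
        var response = await client.TransactGetItemsAsync(new TransactGetItemsRequest
        {
            TransactItems = new List<TransactGetItem>
            {
                new TransactGetItem
                {
                    Get = new Get
                    {
                        TableName = "Orders",
                        Key = new Dictionary<string, AttributeValue>
                        {
                            { "OrderId", new AttributeValue { S = orderId } }
                        }
                    }
                },
                new TransactGetItem
                {
                    Get = new Get
                    {
                        TableName = "Customers",
                        Key = new Dictionary<string, AttributeValue>
                        {
                            { "CustomerId", new AttributeValue { S = customerId } }
                        }
                    }
                }
            }
        });

        // Responses come back in the same order as the requested items.
        return response.Responses;
    }
}
```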
- TransactGetItemsResponse
This is the response object from the TransactGetItems operation.
- TransactWriteItem
A list of requests that can perform update, put, delete, or check operations on multiple items in one or more tables atomically.
- TransactWriteItemsRequest
Container for the parameters to the TransactWriteItems operation. TransactWriteItems is a synchronous write operation that groups up to 100 action requests. These actions can target items in different tables, but not in different Amazon Web Services accounts or Regions, and no two actions can target the same item. For example, you cannot both ConditionCheck and Update the same item. The aggregate size of the items in the transaction cannot exceed 4 MB.
The actions are completed atomically so that either all of them succeed, or all of them fail. They are defined by the following objects:
Put - Initiates a PutItem operation to write a new item. This structure specifies the primary key of the item to be written, the name of the table to write it in, an optional condition expression that must be satisfied for the write to succeed, a list of the item's attributes, and a field indicating whether to retrieve the item's attributes if the condition is not met.
Update - Initiates an UpdateItem operation to update an existing item. This structure specifies the primary key of the item to be updated, the name of the table where it resides, an optional condition expression that must be satisfied for the update to succeed, an expression that defines one or more attributes to be updated, and a field indicating whether to retrieve the item's attributes if the condition is not met.
Delete - Initiates a DeleteItem operation to delete an existing item. This structure specifies the primary key of the item to be deleted, the name of the table where it resides, an optional condition expression that must be satisfied for the deletion to succeed, and a field indicating whether to retrieve the item's attributes if the condition is not met.
ConditionCheck - Applies a condition to an item that is not being modified by the transaction. This structure specifies the primary key of the item to be checked, the name of the table where it resides, a condition expression that must be satisfied for the transaction to succeed, and a field indicating whether to retrieve the item's attributes if the condition is not met.
DynamoDB rejects the entire TransactWriteItems request if any of the following is true:
A condition in one of the condition expressions is not met.
An ongoing operation is in the process of updating the same item.
There is insufficient provisioned capacity for the transaction to be completed.
An item size becomes too large (bigger than 400 KB), a local secondary index (LSI) becomes too large, or a similar validation error occurs because of changes made by the transaction.
The aggregate size of the items in the transaction exceeds 4 MB.
There is a user error, such as an invalid data format.
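A hedged C# sketch of a TransactWriteItems call that combines a ConditionCheck and a Put; the table names, keys, and condition expressions are illustrative assumptions.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

public static class TransactWriteSketch
{
    // Writes an order only if the referenced customer exists; both actions
    // succeed or fail together. Table and attribute names are placeholders.
    public static Task PlaceOrderAsync(IAmazonDynamoDB client, string orderId, string customerId)
    {
        return client.TransactWriteItemsAsync(new TransactWriteItemsRequest
        {
            ClientRequestToken = Guid.NewGuid().ToString(), // idempotency token
            TransactItems = new List<TransactWriteItem>
            {
                new TransactWriteItem
                {
                    ConditionCheck = new ConditionCheck
                    {
                        TableName = "Customers",
                        Key = new Dictionary<string, AttributeValue>
                        {
                            { "CustomerId", new AttributeValue { S = customerId } }
                        },
                        ConditionExpression = "attribute_exists(CustomerId)"
                    }
                },
                new TransactWriteItem
                {
                    Put = new Put
                    {
                        TableName = "Orders",
                        Item = new Dictionary<string, AttributeValue>
                        {
                            { "OrderId", new AttributeValue { S = orderId } },
                            { "CustomerId", new AttributeValue { S = customerId } },
                            { "Status", new AttributeValue { S = "PENDING" } }
                        },
                        ConditionExpression = "attribute_not_exists(OrderId)"
                    }
                }
            }
        });
    }
}
```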
- TransactWriteItemsResponse
This is the response object from the TransactWriteItems operation.
- TransactionCanceledException
The entire transaction request was canceled.
DynamoDB cancels a TransactWriteItems request under the following circumstances:
A condition in one of the condition expressions is not met.
A table in the TransactWriteItems request is in a different account or region.
More than one action in the TransactWriteItems operation targets the same item.
There is insufficient provisioned capacity for the transaction to be completed.
An item size becomes too large (larger than 400 KB), or a local secondary index (LSI) becomes too large, or a similar validation error occurs because of changes made by the transaction.
There is a user error, such as an invalid data format.
DynamoDB cancels a TransactGetItems request under the following circumstances:
There is an ongoing TransactGetItems operation that conflicts with a concurrent PutItem, UpdateItem, DeleteItem or TransactWriteItems request. In this case the TransactGetItems operation fails with a TransactionCanceledException.
A table in the TransactGetItems request is in a different account or region.
There is insufficient provisioned capacity for the transaction to be completed.
There is a user error, such as an invalid data format.
note
If using Java, DynamoDB lists the cancellation reasons on the CancellationReasons property. This property is not set for other languages. Transaction cancellation reasons are ordered in the order of requested items; if an item has no error, it will have None code and Null message.
Cancellation reason codes and possible error messages:
No Errors:
Code: None
Message: null
Conditional Check Failed:
Code: ConditionalCheckFailed
Message: The conditional request failed.
Item Collection Size Limit Exceeded:
Code: ItemCollectionSizeLimitExceeded
Message: Collection size exceeded.
Transaction Conflict:
Code: TransactionConflict
Message: Transaction is ongoing for the item.
Provisioned Throughput Exceeded:
Code: ProvisionedThroughputExceeded
Messages:
The level of configured provisioned throughput for the table was exceeded. Consider increasing your provisioning level with the UpdateTable API.
note
This message is returned when provisioned throughput is exceeded on a provisioned DynamoDB table.
The level of configured provisioned throughput for one or more global secondary indexes of the table was exceeded. Consider increasing your provisioning level for the under-provisioned global secondary indexes with the UpdateTable API.
note
This message is returned when provisioned throughput is exceeded on a provisioned GSI.
Throttling Error:
Code: ThrottlingError
Messages:
Throughput exceeds the current capacity of your table or index. DynamoDB is automatically scaling your table or index so please try again shortly. If exceptions persist, check if you have a hot key: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/bp-partition-key-design.html.
note
This message is returned when writes get throttled on an On-Demand table as DynamoDB is automatically scaling the table.
Throughput exceeds the current capacity for one or more global secondary indexes. DynamoDB is automatically scaling your index so please try again shortly.
note
This message is returned when writes get throttled on an On-Demand GSI as DynamoDB is automatically scaling the GSI.
Validation Error:
Code: ValidationError
Messages:
One or more parameter values were invalid.
The update expression attempted to update the secondary index key beyond allowed size limits.
The update expression attempted to update the secondary index key to unsupported type.
An operand in the update expression has an incorrect data type.
Item size to update has exceeded the maximum allowed size.
Number overflow. Attempting to store a number with magnitude larger than supported range.
Type mismatch for attribute to update.
Nesting Levels have exceeded supported limits.
The document path provided in the update expression is invalid for update.
The provided expression refers to an attribute that does not exist in the item.
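A small C# sketch of handling this exception. Note that the structured CancellationReasons collection described above is documented for Java, so this sketch relies only on the exception message and treats any richer cancellation-reason API in your SDK version as an assumption.

```csharp
using System;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

public static class TransactionCancellationSketch
{
    // Runs a transactional write and reports why it was canceled, if it was.
    public static async Task WriteWithDiagnosticsAsync(
        IAmazonDynamoDB client, TransactWriteItemsRequest request)
    {
        try
        {
            await client.TransactWriteItemsAsync(request);
        }
        catch (TransactionCanceledException ex)
        {
            // The exception message summarizes the per-item cancellation reasons
            // (for example "[ConditionalCheckFailed, None]"); depending on the SDK
            // version, a structured cancellation-reason collection may also exist.
            Console.WriteLine($"Transaction canceled: {ex.Message}");
            throw;
        }
    }
}
```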
- TransactionConflictException
Operation was rejected because there is an ongoing transaction for the item.
- TransactionInProgressException
The transaction with the given request token is already in progress.
- TrimmedDataAccessException
The operation attempted to read past the oldest stream record in a shard.
In DynamoDB Streams, there is a 24 hour limit on data retention. Stream records whose age exceeds this limit are subject to removal (trimming) from the stream. You might receive a TrimmedDataAccessException if:
You request a shard iterator with a sequence number older than the trim point (24 hours).
You obtain a shard iterator, but before you use the iterator in a GetRecords request, a stream record in the shard exceeds the 24 hour period and is trimmed. This causes the iterator to access a record that no longer exists.
- UntagResourceRequest
Container for the parameters to the UntagResource operation. Removes the association of tags from an Amazon DynamoDB resource. You can call UntagResource up to five times per second, per account.
For an overview on tagging DynamoDB resources, see Tagging for DynamoDB in the Amazon DynamoDB Developer Guide.
- UntagResourceResponse
This is the response object from the UntagResource operation.
- Update
Represents a request to perform an UpdateItem operation.
- UpdateContinuousBackupsRequest
Container for the parameters to the UpdateContinuousBackups operation. UpdateContinuousBackups enables or disables point in time recovery for the specified table. A successful UpdateContinuousBackups call returns the current ContinuousBackupsDescription. Continuous backups are ENABLED on all tables at table creation. If point in time recovery is enabled, PointInTimeRecoveryStatus will be set to ENABLED.
Once continuous backups and point in time recovery are enabled, you can restore to any point in time within EarliestRestorableDateTime and LatestRestorableDateTime.
LatestRestorableDateTime is typically 5 minutes before the current time. You can restore your table to any point in time during the last 35 days.
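A minimal C# sketch that enables point in time recovery for a caller-supplied table.

```csharp
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

public static class PointInTimeRecoverySketch
{
    // Enables point in time recovery and returns the resulting backups description.
    public static async Task<ContinuousBackupsDescription> EnablePitrAsync(
        IAmazonDynamoDB client, string tableName)
    {
        var response = await client.UpdateContinuousBackupsAsync(new UpdateContinuousBackupsRequest
        {
            TableName = tableName,
            PointInTimeRecoverySpecification = new PointInTimeRecoverySpecification
            {
                PointInTimeRecoveryEnabled = true
            }
        });

        // Includes PointInTimeRecoveryStatus plus the earliest/latest restorable times.
        return response.ContinuousBackupsDescription;
    }
}
```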
- UpdateContinuousBackupsResponse
This is the response object from the UpdateContinuousBackups operation.
- UpdateContributorInsightsRequest
Container for the parameters to the UpdateContributorInsights operation. Updates the status for contributor insights for a specific table or index. CloudWatch Contributor Insights for DynamoDB graphs display the partition key and (if applicable) sort key of frequently accessed items and frequently throttled items in plaintext. If you require the use of Amazon Web Services Key Management Service (KMS) to encrypt this table’s partition key and sort key data with an Amazon Web Services managed key or customer managed key, you should not enable CloudWatch Contributor Insights for DynamoDB for this table.
- UpdateContributorInsightsResponse
This is the response object from the UpdateContributorInsights operation.
- UpdateGlobalSecondaryIndexAction
Represents the new provisioned throughput settings to be applied to a global secondary index.
- UpdateGlobalTableRequest
Container for the parameters to the UpdateGlobalTable operation. Adds or removes replicas in the specified global table. The global table must already exist to be able to use this operation. Any replica to be added must be empty, have the same name as the global table, have the same key schema, have DynamoDB Streams enabled, and have the same provisioned and maximum write capacity units.
note
Although you can use UpdateGlobalTable to add replicas and remove replicas in a single request, for simplicity we recommend that you issue separate requests for adding or removing replicas.
If global secondary indexes are specified, then the following conditions must also be met:
The global secondary indexes must have the same name.
The global secondary indexes must have the same hash key and sort key (if present).
The global secondary indexes must have the same provisioned and maximum write capacity units.
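A short C# sketch of adding one replica with UpdateGlobalTable, assuming the global table already exists; the Region name is a placeholder.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

public static class GlobalTableSketch
{
    // Adds a replica in a placeholder Region to an existing global table.
    public static Task AddReplicaAsync(IAmazonDynamoDB client, string globalTableName)
    {
        return client.UpdateGlobalTableAsync(new UpdateGlobalTableRequest
        {
            GlobalTableName = globalTableName,
            ReplicaUpdates = new List<ReplicaUpdate>
            {
                new ReplicaUpdate
                {
                    Create = new CreateReplicaAction { RegionName = "us-west-2" }
                }
            }
        });
    }
}
```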
- UpdateGlobalTableResponse
This is the response object from the UpdateGlobalTable operation.
- UpdateGlobalTableSettingsRequest
Container for the parameters to the UpdateGlobalTableSettings operation. Updates settings for a global table.
- UpdateGlobalTableSettingsResponse
This is the response object from the UpdateGlobalTableSettings operation.
- UpdateItemRequest
Container for the parameters to the UpdateItem operation. Edits an existing item's attributes, or adds a new item to the table if it does not already exist. You can put, delete, or add attribute values. You can also perform a conditional update on an existing item (insert a new attribute name-value pair if it doesn't exist, or replace an existing name-value pair if it has certain expected attribute values).
You can also return the item's attribute values in the same UpdateItem operation using the ReturnValues parameter.
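A C# sketch of a conditional counter update that uses ReturnValues to get the modified attributes back; the Music table, SongId key, and PlayCount attribute are hypothetical.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

public static class UpdateItemSketch
{
    // Increments a play counter, creating it on first use, and returns the new value.
    public static async Task<Dictionary<string, AttributeValue>> IncrementPlayCountAsync(
        IAmazonDynamoDB client, string songId)
    {
        var response = await client.UpdateItemAsync(new UpdateItemRequest
        {
            TableName = "Music",
            Key = new Dictionary<string, AttributeValue>
            {
                { "SongId", new AttributeValue { S = songId } }
            },
            UpdateExpression = "SET PlayCount = if_not_exists(PlayCount, :zero) + :one",
            ExpressionAttributeValues = new Dictionary<string, AttributeValue>
            {
                { ":zero", new AttributeValue { N = "0" } },
                { ":one", new AttributeValue { N = "1" } }
            },
            ReturnValues = ReturnValue.UPDATED_NEW // return only the attributes that changed
        });

        return response.Attributes;
    }
}
```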
- UpdateItemResponse
Represents the output of an UpdateItem operation.
- UpdateReplicationGroupMemberAction
Represents a replica to be modified.
- UpdateTableReplicaAutoScalingRequest
Container for the parameters to the UpdateTableReplicaAutoScaling operation. Updates auto scaling settings on your global tables at once.
note
This operation only applies to Version 2019.11.21 of global tables.
- UpdateTableReplicaAutoScalingResponse
This is the response object from the UpdateTableReplicaAutoScaling operation.
- UpdateTableRequest
Container for the parameters to the UpdateTable operation. Modifies the provisioned throughput settings, global secondary indexes, or DynamoDB Streams settings for a given table.
You can only perform one of the following operations at once:
Modify the provisioned throughput settings of the table.
Remove a global secondary index from the table.
Create a new global secondary index on the table. After the index begins backfilling, you can use UpdateTable to perform other operations.
UpdateTable is an asynchronous operation; while it is executing, the table status changes from ACTIVE to UPDATING. While it is UPDATING, you cannot issue another UpdateTable request. When the table returns to the ACTIVE state, the UpdateTable operation is complete.
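A minimal C# sketch of one UpdateTable change (modifying provisioned throughput); the capacity values are placeholders.

```csharp
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

public static class UpdateTableSketch
{
    // Raises a table's provisioned throughput and returns its new description.
    public static async Task<TableDescription> RaiseThroughputAsync(
        IAmazonDynamoDB client, string tableName)
    {
        var response = await client.UpdateTableAsync(new UpdateTableRequest
        {
            TableName = tableName,
            ProvisionedThroughput = new ProvisionedThroughput
            {
                ReadCapacityUnits = 10,
                WriteCapacityUnits = 10
            }
        });

        // The table transitions to UPDATING; poll DescribeTable until it is ACTIVE again.
        return response.TableDescription;
    }
}
```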
- UpdateTableResponse
Represents the output of an UpdateTable operation.
- UpdateTimeToLiveRequest
Container for the parameters to the UpdateTimeToLive operation. The UpdateTimeToLive method enables or disables Time to Live (TTL) for the specified table. A successful UpdateTimeToLive call returns the current TimeToLiveSpecification. It can take up to one hour for the change to fully process. Any additional UpdateTimeToLive calls for the same table during this one hour duration result in a ValidationException.
TTL compares the current time in epoch time format to the time stored in the TTL attribute of an item. If the epoch time value stored in the attribute is less than the current time, the item is marked as expired and subsequently deleted.
note
The epoch time format is the number of seconds elapsed since 12:00:00 AM January 1, 1970 UTC.
DynamoDB deletes expired items on a best-effort basis to ensure availability of throughput for other data operations.
DynamoDB typically deletes expired items within two days of expiration. The exact duration within which an item gets deleted after expiration is specific to the nature of the workload. Items that have expired and not been deleted will still show up in reads, queries, and scans.
As items are deleted, they are removed from any local secondary index and global secondary index immediately in the same eventually consistent way as a standard delete operation.
For more information, see Time To Live in the Amazon DynamoDB Developer Guide.
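A C# sketch that enables TTL on a table, assuming a hypothetical epoch-seconds attribute named ExpiresAt.

```csharp
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

public static class TimeToLiveSketch
{
    // Turns on TTL using the placeholder attribute "ExpiresAt" (epoch seconds).
    public static Task EnableTtlAsync(IAmazonDynamoDB client, string tableName)
    {
        return client.UpdateTimeToLiveAsync(new UpdateTimeToLiveRequest
        {
            TableName = tableName,
            TimeToLiveSpecification = new TimeToLiveSpecification
            {
                AttributeName = "ExpiresAt",
                Enabled = true
            }
        });
    }
}
```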
- UpdateTimeToLiveResponse
This is the response object from the UpdateTimeToLive operation.
- WriteRequest
Represents an operation to perform - either DeleteItem or PutItem. You can only request one of these operations, not both, in a single WriteRequest. If you do need to perform both of these operations, you need to provide two separate WriteRequest objects.
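A C# sketch showing two separate WriteRequest objects, one PutRequest and one DeleteRequest, submitted together through BatchWriteItem; the Music table and its items are placeholders.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

public static class WriteRequestSketch
{
    // Puts one item and deletes another in a single BatchWriteItem call,
    // using one WriteRequest per operation.
    public static Task BatchWriteAsync(IAmazonDynamoDB client)
    {
        return client.BatchWriteItemAsync(new BatchWriteItemRequest
        {
            RequestItems = new Dictionary<string, List<WriteRequest>>
            {
                ["Music"] = new List<WriteRequest>
                {
                    new WriteRequest
                    {
                        PutRequest = new PutRequest
                        {
                            Item = new Dictionary<string, AttributeValue>
                            {
                                { "SongId", new AttributeValue { S = "song-1" } },
                                { "Title", new AttributeValue { S = "Example Song" } }
                            }
                        }
                    },
                    new WriteRequest
                    {
                        DeleteRequest = new DeleteRequest
                        {
                            Key = new Dictionary<string, AttributeValue>
                            {
                                { "SongId", new AttributeValue { S = "song-2" } }
                            }
                        }
                    }
                }
            }
        });
    }
}
```

Any items reported back as unprocessed should be retried; that handling is omitted here for brevity.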
Interfaces
- IBatchGetItemPaginator
Paginator for the BatchGetItem operation
- IDynamoDBv2PaginatorFactory
Paginators for the DynamoDBv2 service
- IListTablesPaginator
Paginator for the ListTables operation
- IQueryPaginator
Paginator for the Query operation
- IScanPaginator
Paginator for the Scan operation
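To illustrate the paginator interfaces, a C# sketch using IScanPaginator; it assumes an SDK version that exposes the Paginators property on the service client and a runtime with async-stream support.

```csharp
using System;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

public static class PaginatorSketch
{
    // Iterates every Scan page of a caller-supplied table without manual
    // LastEvaluatedKey bookkeeping.
    public static async Task CountItemsAsync(AmazonDynamoDBClient client, string tableName)
    {
        IScanPaginator paginator = client.Paginators.Scan(new ScanRequest { TableName = tableName });

        long total = 0;
        await foreach (ScanResponse page in paginator.Responses)
        {
            total += page.Count;
        }
        Console.WriteLine($"Total items scanned: {total}");
    }
}
```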