AWS Announces Series of New Database Capabilities
November 29, 2018

Amazon Web Services (AWS) announced significant new Amazon Aurora and Amazon DynamoDB capabilities along with two new purpose-built databases.

New Amazon Aurora Global Database lets customers update a database in a single AWS Region and have it automatically replicated to other AWS Regions for higher availability and disaster recovery. Amazon DynamoDB’s new On-Demand feature provides read/write capacity on demand, removing the need for capacity planning and letting customers pay only for the read/write requests they consume. The launch of DynamoDB Transactions enables developers to build transaction guarantees into multi-item updates, making it easier to avoid conflicts and errors when developing highly scalable, business-critical applications. AWS also announced two new purpose-built database services: Amazon Timestream, a fast, scalable, and fully managed time-series database for IoT and operational applications, and Amazon Quantum Ledger Database (QLDB), a highly scalable, immutable, and cryptographically verifiable ledger.

“Hundreds of thousands of customers have embraced AWS’s built-for-the-cloud database services because they perform and scale better, are more cost effective, can be easily combined with other AWS services, and offer freedom from restrictive, over-priced, and clunky old-guard database offerings,” said Raju Gulabani, VP, Databases, Analytics, and Machine Learning, AWS. “Today’s announcements make it even easier for AWS customers to scale and operate cloud databases around the world. Whether it is helping to ensure critical workloads are fully available even when disaster strikes, instantly scaling workloads to Internet-scale, maintaining application data consistency, or building new applications for emerging use cases like time series data or ledger systems of record, we are giving customers the features and purpose-built databases they need to support the most mission critical workloads at lower cost, better operational performance, and diminished complexity.”

Amazon Aurora MySQL now supports Global Database (available today)

Amazon Aurora, the fastest-growing service in AWS history, is a MySQL and PostgreSQL-compatible relational database built for the cloud and used by tens of thousands of customers around the world. Amazon Aurora Global Database allows customers to update a database in a single AWS Region and automatically replicate it across multiple AWS Regions globally, typically in less than a second. This allows customers to maintain read-only copies of their database for fast data access in local regions by globally distributed applications, or to use a remote region as a backup option in case they need to recover their database quickly for cross-region disaster recovery scenarios.
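Creating a global database amounts to registering an existing Aurora cluster as the primary of a new global cluster. A minimal sketch using boto3's RDS API is shown below; the identifiers and ARN are hypothetical, and the request is built as a plain dictionary so the parameters can be inspected without calling AWS.

```python
# Sketch: parameters for registering an existing Aurora cluster as the
# primary of a new Aurora Global Database. Identifiers are hypothetical.
def global_cluster_params(global_id, source_cluster_arn):
    """Build kwargs for rds_client.create_global_cluster()."""
    return {
        "GlobalClusterIdentifier": global_id,
        "SourceDBClusterIdentifier": source_cluster_arn,  # existing primary cluster
    }

params = global_cluster_params(
    "my-global-db",
    "arn:aws:rds:us-east-1:123456789012:cluster:my-primary-cluster",
)
# With credentials configured, the call would be:
# boto3.client("rds").create_global_cluster(**params)
```

Secondary read-only clusters in other Regions would then be attached to the same global cluster identifier.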

Amazon DynamoDB introduces On-Demand and Transactions Capabilities (available today)

Amazon DynamoDB is a fully managed, key-value database service that offers reliable performance at any scale. More than a hundred thousand AWS customers use Amazon DynamoDB to deliver consistent, single-digit millisecond latency for some of the world’s largest applications. Many of these customers run large-scale applications that receive irregular and unpredictable data access requests or have new applications for which the usage pattern is unknown. These customers often face a database capacity planning dilemma, having to choose between over-provisioning capacity upfront and paying for resources they will not use, or under-provisioning resources and risking performance problems, and a poor user experience.

For applications with unpredictable, infrequent, or spiky usage where capacity planning is difficult, Amazon DynamoDB On-Demand removes the need for capacity planning by automatically managing read/write capacity; customers simply pay per request for what they actually use. Amazon DynamoDB On-Demand delivers the same single-digit millisecond latency, high availability, and security that customers have come to expect from Amazon DynamoDB.
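In practice, on-demand mode is selected at table creation (or update) time by setting the billing mode, in place of provisioned throughput. A minimal sketch, with a hypothetical table name and key schema, built as a parameter dictionary rather than a live API call:

```python
# Sketch: enabling DynamoDB On-Demand by setting BillingMode to
# PAY_PER_REQUEST instead of provisioning read/write capacity.
# Table name and key attribute ("events", "pk") are hypothetical.
def on_demand_table_params(table_name):
    """Build kwargs for dynamodb_client.create_table() in on-demand mode."""
    return {
        "TableName": table_name,
        "AttributeDefinitions": [{"AttributeName": "pk", "AttributeType": "S"}],
        "KeySchema": [{"AttributeName": "pk", "KeyType": "HASH"}],
        "BillingMode": "PAY_PER_REQUEST",  # no ProvisionedThroughput needed
    }

params = on_demand_table_params("events")
# boto3.client("dynamodb").create_table(**params)
```

Note that no ProvisionedThroughput block appears in the request: capacity planning is handled entirely by the service.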

Amazon DynamoDB powers some of the world’s most high-scale applications that run globally. Sometimes, developers building those applications need support for transactions and have to write custom code for error handling that can be complex, error prone, and time consuming. Amazon DynamoDB Transactions enables developers to build transactions with full atomicity, consistency, isolation, and durability (ACID) guarantees for multi-item updates into their DynamoDB applications, without having to write complex client-side logic to manage conflicts and errors, and without compromising on scale and performance.
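A classic use case is moving value between two items atomically. The sketch below builds a TransactWriteItems request in which a debit is conditional on sufficient funds, so either both updates apply or neither does; the table and key names are hypothetical, and the request is a plain dictionary so it can be examined without calling AWS.

```python
def transfer_request(table, from_id, to_id, amount):
    """Build a TransactWriteItems request that debits one account and
    credits another atomically. The debit carries a ConditionExpression,
    so the whole transaction fails if funds are insufficient."""
    return {
        "TransactItems": [
            {
                "Update": {
                    "TableName": table,
                    "Key": {"pk": {"S": from_id}},
                    "UpdateExpression": "SET balance = balance - :amt",
                    "ConditionExpression": "balance >= :amt",
                    "ExpressionAttributeValues": {":amt": {"N": str(amount)}},
                }
            },
            {
                "Update": {
                    "TableName": table,
                    "Key": {"pk": {"S": to_id}},
                    "UpdateExpression": "SET balance = balance + :amt",
                    "ExpressionAttributeValues": {":amt": {"N": str(amount)}},
                }
            },
        ]
    }

req = transfer_request("accounts", "alice", "bob", 25)
# boto3.client("dynamodb").transact_write_items(**req)
```

Without transactions, this conditional two-item update would require client-side retry and rollback logic of exactly the kind the announcement describes as complex and error prone.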

Amazon Timestream provides a fast, scalable, fully managed time series database (available in preview)

Developers are building IoT and operational applications that need to collect, synthesize, and derive insights from enormous amounts of data that changes over time (known as time-series data). Common examples include DevOps data that measures changes in infrastructure metrics over time, IoT sensor data that measures changes in sensor readings over time, and clickstream data that captures how a user navigates a website over time.

This type of time-series data is generated from multiple sources in extremely high volumes and needs to be collected in near-real time in a cost-optimized and highly scalable manner, and customers need a way to store and analyze all of this data efficiently. Today, customers use either their existing relational databases or existing commercial time-series databases. Neither option is attractive, because neither was built from the ground up as a time-series database at the scale needed in the cloud.

Relational databases have rigid schemas that must be pre-defined and are inflexible when new application attributes need to be tracked. They require multiple tables and indexes to be created, which leads to complex and inefficient queries as the data grows over time. In addition, they lack required time-series analytical functions such as smoothing, approximation, and interpolation. Existing open source and commercial time-series databases, meanwhile, are difficult to scale, do not support data retention policies, and require developers to integrate them with separate ingestion, streaming/batching, and visualization software.

To address these challenges, AWS is introducing Amazon Timestream, a purpose-built, fully managed time series database service for collecting, storing, and processing time series data. Amazon Timestream processes trillions of events per day at one-tenth the cost of relational databases, with up to one thousand times faster query performance than a general purpose relational database. Amazon Timestream makes it possible to get single-digit millisecond responsiveness when analyzing time series data from IoT and operational applications. Analytics functions in Amazon Timestream provide smoothing, approximation, and interpolation to help customers identify trends and patterns in real-time data. And, Amazon Timestream is serverless, so it automatically scales up or down to adjust capacity and performance, and customers only pay for what they use.
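To make the interpolation function concrete: it fills the gaps between irregular sensor readings so values can be compared on a regular time grid. The plain-Python sketch below illustrates the idea with linear interpolation; it is a conceptual illustration only, not Timestream's implementation or query syntax.

```python
def interpolate(samples, t):
    """Linearly interpolate a value at time t from (timestamp, value)
    samples, assumed sorted by timestamp. Illustrates the kind of
    gap-filling a time-series database's interpolation function performs."""
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            frac = (t - t0) / (t1 - t0)
            return v0 + frac * (v1 - v0)
    raise ValueError("t outside sample range")

# Hypothetical sensor readings taken at irregular times (seconds, value)
readings = [(0, 10.0), (60, 20.0), (180, 50.0)]
print(interpolate(readings, 30))   # 15.0 (halfway between 10.0 and 20.0)
print(interpolate(readings, 120))  # 35.0 (halfway between 20.0 and 50.0)
```

In a managed time-series database, this computation runs inside the query engine rather than in application code.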

Amazon QLDB: A high performance, immutable, and cryptographically verifiable ledger database service (available in preview)

Amazon QLDB is a new class of database that provides a transparent, immutable, and cryptographically verifiable ledger that customers can use to build applications that act as a system of record, where multiple parties are transacting within a centralized, trusted entity. Amazon QLDB removes the need to build complex audit functionality into a relational database or rely on the ledger capabilities of a blockchain framework. Amazon QLDB uses an immutable transactional log, known as a journal, which tracks each and every application data change and maintains a complete and verifiable history of changes over time. All transactions must comply with atomicity, consistency, isolation, and durability (ACID) to be logged in the journal, which cannot be deleted or modified. All changes are cryptographically chained and verifiable in a history that customers can analyze using familiar SQL queries. Amazon QLDB is serverless, so customers don’t have to provision capacity or configure read and write limits. They simply create a ledger, define tables, and Amazon QLDB will automatically scale to support application demands, and customers pay only for the reads, writes, and storage they use. And, unlike the ledgers in common blockchain frameworks, Amazon QLDB doesn’t require distributed consensus, so it can execute two to three times as many transactions in the same time as common blockchain frameworks.
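The core idea behind a cryptographically verifiable journal can be sketched in a few lines: each record's digest covers its data plus the previous digest, so rewriting any earlier entry changes every digest after it. This is a conceptual illustration of hash chaining, not QLDB's actual journal format or API.

```python
import hashlib
import json

def chain(entries):
    """Build a hash-chained journal: each record's SHA-256 digest covers
    its data plus the previous digest, so altering any earlier record
    changes all subsequent digests and is therefore detectable."""
    digests, prev = [], b""
    for entry in entries:
        payload = prev + json.dumps(entry, sort_keys=True).encode()
        prev = hashlib.sha256(payload).digest()
        digests.append(prev)
    return digests

history = [{"txn": 1, "balance": 100}, {"txn": 2, "balance": 80}]
original = chain(history)

history[0]["balance"] = 999          # tamper with an earlier record
assert chain(history) != original    # every later digest changes
```

Verifying a ledger then reduces to recomputing the chain and comparing the final digest against a trusted copy.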
