AWS Releases Amazon Neptune
May 30, 2018

Amazon Web Services announced general availability of Amazon Neptune, a fast, reliable, and fully managed graph database service.

Amazon Neptune efficiently stores and navigates highly connected data, allowing developers to create sophisticated, interactive graph applications that can query billions of relationships with millisecond latency. During the preview, customers used Neptune to build social networks, recommendation engines, fraud detection systems, knowledge graphs, drug discovery applications, and more. With Amazon Neptune, there are no upfront costs, licenses, or commitments; customers pay only for the Neptune resources they use.

With Amazon Neptune, developers can query connected datasets with the speed and simplicity of a graph database, while benefiting from the scalability, security, durability, and availability of an AWS managed graph database service. The Amazon Neptune query processing engine is optimized for both of the leading graph models, Property Graph and W3C's RDF, and their associated query languages, Apache TinkerPop Gremlin and SPARQL, giving customers the flexibility to choose the right approach for their specific graph use case. And as a customer's data grows, Neptune storage scales automatically, without downtime or performance degradation.
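To illustrate the two models, here is a minimal sketch of the same question ("who does Alice know?") expressed as both a Gremlin traversal and a SPARQL query. The endpoint addresses are placeholders, and the graph labels, `foaf` vocabulary, and `sparql_request_body` helper are illustrative assumptions, not taken from Neptune's documentation:

```python
from urllib.parse import urlencode

# Hypothetical cluster endpoints -- replace with a real Neptune cluster address.
GREMLIN_ENDPOINT = "wss://your-neptune-cluster:8182/gremlin"
SPARQL_ENDPOINT = "https://your-neptune-cluster:8182/sparql"

# Property Graph model: an Apache TinkerPop Gremlin traversal that finds
# the names of everyone Alice knows.
gremlin_query = (
    "g.V().has('person', 'name', 'Alice')"
    ".out('knows')"
    ".values('name')"
)

# RDF model: the same question as a SPARQL query.
sparql_query = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?friendName WHERE {
    ?alice  foaf:name  "Alice" .
    ?alice  foaf:knows ?friend .
    ?friend foaf:name  ?friendName .
}
"""

def sparql_request_body(query: str) -> bytes:
    """Form-encode a SPARQL query for an HTTP POST to the SPARQL endpoint."""
    return urlencode({"query": query}).encode("utf-8")
```

In practice, Gremlin traversals are sent to Neptune over a WebSocket connection, while SPARQL queries go over HTTP(S); which to use depends on the graph model chosen when loading data into the cluster.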

“The days of modern technology companies using relational databases for all of their workloads have come and gone,” said Raju Gulabani, VP, Databases, Analytics, and Machine Learning at Amazon Web Services, Inc. “As the world has become more connected, applications that navigate large, connected datasets are increasingly more critical for customers. We are delighted to give customers a high-performance graph database service that enables developers to query billions of relationships in milliseconds using standard APIs, making it easy to build and run applications that work with highly connected datasets.”

Amazon Neptune is highly available and durable, automatically replicating six copies of data across three Availability Zones and continuously backing up data to Amazon Simple Storage Service (Amazon S3). Amazon Neptune is designed to offer greater than 99.99 percent availability and automatically detects and recovers from most database failures in less than 30 seconds. Amazon Neptune also provides advanced security capabilities, including network security through Amazon Virtual Private Cloud (Amazon VPC), and encryption at rest using AWS Key Management Service (AWS KMS).

“Amazon Neptune is a key part of the toolkit we use to continually expand Alexa’s knowledge graph for our tens of millions of Alexa customers—it’s just Day 1 and we’re excited to continue our work with the AWS team to deliver even better experiences for our customers,” said David Hardcastle, Director of Amazon Alexa, Amazon.

Amazon Neptune is available in the US East (N. Virginia), US East (Ohio), US West (Oregon), and EU (Ireland) Regions, and will expand to additional Regions in the coming year.
