Martin Heller
Contributor

The best distributed relational databases

feature
Jul 08, 2019 | 7 mins
Cloud Computing, Databases, Relational Databases

These SQL relational databases offer both horizontal scalability and support for ACID transactions—some on a global scale


Relational SQL databases, which have been around since the 1980s, historically ran on mainframes or single servers—that’s all we had. If you wanted the database to handle more data and run faster, you had to put it on a bigger server with more and faster CPUs, memory, and disk. In other words, you turned to vertical scalability, or “scale up.” Later on, if you needed the ability to fail over to improve availability, you could colocate a hot backup server with the active server in an “active-passive” cluster, typically with shared storage.

The four ACID properties—atomicity, consistency, isolation, and durability—are required to ensure that database transactions will always be valid, even in the event of network partitions, power failures, and other errors. It is relatively easy for a database on a single server to conform to all four ACID properties; it’s a bit harder to implement for a distributed database.
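
To make the transaction guarantees concrete, here is a minimal sketch in Python using the standard library’s sqlite3 module on a single server; the accounts table and the amounts are invented for illustration, but the commit-or-rollback pattern is the same on any ACID database.

```python
import sqlite3

# Minimal single-server ACID sketch: move 50 units between two invented accounts.
conn = sqlite3.connect("bank.db")
conn.execute("CREATE TABLE IF NOT EXISTS accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT OR REPLACE INTO accounts VALUES (1, 100), (2, 100)")
conn.commit()

try:
    # Both updates succeed or fail together (atomicity); once committed,
    # the result survives a crash (durability).
    conn.execute("UPDATE accounts SET balance = balance - 50 WHERE id = 1")
    conn.execute("UPDATE accounts SET balance = balance + 50 WHERE id = 2")
    conn.commit()
except sqlite3.Error:
    conn.rollback()  # on any error, neither update is applied
finally:
    conn.close()
```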

NoSQL databases, introduced around 2009, offered horizontal scalability (meaning that they could run on multiple servers) but often lacked full ACID compliance, and usually did not support the SQL language as such. NoSQL databases introduced the idea of “eventual consistency,” meaning that if you wrote to the database from one server and read it from another right away, you might not see the same results as you would reading it from the server to which you had just written. If you waited long enough, however, the new data would be replicated to all the servers in the cluster and would then be consistent. Eventual consistency is good enough for many applications, such as online catalogs, but not good enough for finance.
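
The read-your-own-writes gap is easy to picture in code. The following toy simulation (not any particular database) models a primary and a replica as plain dictionaries with an assumed two-second replication delay; a read from the replica right after the write returns stale or missing data until replication catches up.

```python
import time

# Toy model of eventual consistency: a write lands on the primary immediately,
# but only reaches the replica after a simulated replication delay.
primary, replica = {}, {}
pending = []  # (apply_at, key, value) tuples waiting to be replicated

def write(key, value, replication_lag=2.0):
    primary[key] = value
    pending.append((time.time() + replication_lag, key, value))

def read_from_replica(key):
    now = time.time()
    while pending and pending[0][0] <= now:  # apply any replication that has "arrived"
        _, k, v = pending.pop(0)
        replica[k] = v
    return replica.get(key)

write("price:ABC-123", 19.99)
print(read_from_replica("price:ABC-123"))  # None: the replica hasn't seen the write yet
time.sleep(2.5)
print(read_from_replica("price:ABC-123"))  # 19.99 once replication catches up
```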

Recently, several “scale-out” SQL databases have been introduced that are horizontally scalable. Even better, some of them can handle geographically distributed servers without sacrificing consistency. Extremely distant server nodes take longer to update than local nodes because of the limit imposed by the speed of light, but several techniques can mitigate that problem, including the use of consensus group quorums and very high-speed networking and storage.
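
As a back-of-the-envelope illustration of why consensus quorums help, a write only needs acknowledgment from a majority of replicas, so the slowest or most distant node does not set the pace. The latencies below are invented round-trip times, not measurements of any product.

```python
# With a majority quorum, a write commits once floor(n/2) + 1 replicas acknowledge it.
replica_rtts_ms = [2, 3, 5, 70, 140]        # e.g. two local nodes, one nearby, two remote
quorum = len(replica_rtts_ms) // 2 + 1      # majority of 5 replicas = 3
commit_latency = sorted(replica_rtts_ms)[quorum - 1]
print(f"quorum size {quorum}, commit latency ~{commit_latency} ms")  # ~5 ms, not 140 ms
```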

In general, the database you have been using and the new distributed database you would like to use should be as compatible as possible to minimize schema and application conversion costs. In the simplest case you can migrate your schema and data and then simply change a connection string in your application. In the most complicated case you’ll need to go through a data conversion process, a complete rewrite of your stored procedures and triggers, and a major rewrite of the data layers of your applications, including your SQL queries.
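
In that simplest case, “changing a connection string” can be as small as the sketch below. It uses SQLAlchemy with the PyMySQL driver and hypothetical hostnames, and assumes the schema, data, and SQL carried over unchanged.

```python
from sqlalchemy import create_engine, text

# Hypothetical before and after: the same application code, pointed first at a
# single-server MySQL instance and then at a MySQL-compatible distributed cluster.
# engine = create_engine("mysql+pymysql://app:secret@legacy-db.example.com/orders")
engine = create_engine("mysql+pymysql://app:secret@cluster.example.com/orders")

with engine.connect() as conn:
    print(conn.execute(text("SELECT COUNT(*) FROM orders")).scalar())
```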

Amazon RDS and Amazon Aurora

Amazon RDS (Relational Database Service) is a web service that makes it easier to set up, operate, and scale a relational database in the cloud. Amazon RDS supports MySQL, MariaDB, PostgreSQL, Oracle Database, and Microsoft SQL Server.

Amazon RDS databases can be configured for high availability with a synchronous secondary instance for fail-over. Unfortunately, you cannot read from the standby secondary instance. You can use MySQL, MariaDB, or PostgreSQL Read Replicas to increase read scaling, but the replication is asynchronous, so the state of the replica may be behind the state of the primary instance.
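
A common pattern with read replicas is to route writes to the primary endpoint and offload read-heavy queries to a replica, while accepting that the replica may lag slightly. This sketch uses psycopg2 and hypothetical RDS endpoint names.

```python
import psycopg2

# Hypothetical endpoints: writes go to the primary instance, reads to a replica.
# Because replication is asynchronous, the replica may be slightly behind.
WRITER = "mydb.abc123.us-east-1.rds.amazonaws.com"
READER = "mydb-replica.abc123.us-east-1.rds.amazonaws.com"

def get_connection(read_only=False):
    host = READER if read_only else WRITER
    return psycopg2.connect(host=host, dbname="app", user="app", password="...")

# Writes must go to the primary...
with get_connection() as conn, conn.cursor() as cur:
    cur.execute("INSERT INTO events (kind) VALUES (%s)", ("signup",))

# ...while reporting-style reads can be served by the replica, staleness permitting.
with get_connection(read_only=True) as conn, conn.cursor() as cur:
    cur.execute("SELECT count(*) FROM events")
    print(cur.fetchone()[0])
```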

Amazon Aurora is a service within Amazon RDS that offers high-performance clusters of MySQL and PostgreSQL databases on fast, distributed storage. You can create up to 15 Aurora Replicas in your database cluster to support read-only queries, and you can place the replicas in multiple availability zones (AZs) for higher availability within a region; for distribution across regions, Aurora offers a separate Global Database option.

According to Amazon, Aurora can deliver up to five times the throughput of MySQL and up to three times the throughput of PostgreSQL without requiring changes to most of your existing applications. Amazon also claims a lag time of approximately 20 milliseconds for updating Aurora read replicas, which is much faster than MySQL read replicas.

Azure SQL Database

Azure SQL Database is a fully managed relational cloud database service that provides broad SQL Server engine compatibility and allows you to dynamically scale database resources up and down. Azure SQL Database includes the option to create active geo-replicas, which are geographically distributed secondary databases.

Up to four secondaries are supported in the same or different regions, and the secondaries can also be used for read-only queries. If you need to fail over the primary database to one of the secondaries, you can do so manually or via an API.
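
Because the geo-secondaries are readable, you can point reporting queries directly at a secondary’s server. The sketch below uses pyodbc with hypothetical server, database, and table names; it is a rough illustration, not Microsoft’s recommended pattern.

```python
import pyodbc

# Hypothetical readable geo-secondary in a different region from the primary;
# writes would still go to the primary server.
SECONDARY = "myapp-westeurope.database.windows.net"

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    f"SERVER={SECONDARY};DATABASE=appdb;UID=reader;PWD=...;"
)
cursor = conn.cursor()
cursor.execute("SELECT TOP 10 * FROM dbo.Orders ORDER BY OrderDate DESC")
for row in cursor.fetchall():
    print(row)
```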

ClustrixDB

ClustrixDB, now owned by MariaDB, is a scale-out, clustered relational HTAP (hybrid transaction/analytical processing) database designed with a shared-nothing architecture. ClustrixDB is mostly compatible with MySQL and MariaDB. When I reviewed ClustrixDB, the product lacked support for spatial extension types and full-text search; it still lacked both as of the last release.

Adding a node to ClustrixDB scales both reads and writes. ClustrixDB allows clusters to be deployed across multiple zones to provide fault tolerance during unplanned zone failure. In tests run by an independent lab (but not by InfoWorld) ClustrixDB was able to achieve 40K transactions per second at 15 millisecond latency with a load that was 90% reads and 10% writes, giving it “Cyber Monday” scalability for e-commerce.

CockroachDB

CockroachDB is an open source, horizontally scalable, distributed PostgreSQL-compatible SQL database developed by ex-Googlers who were familiar with Google Cloud Spanner. CockroachDB borrows from Spanner for the design of its data storage system, and it uses the Raft algorithm for reaching a consensus among its nodes. CockroachDB does not need the GPS receivers and atomic clocks that Spanner relies on for clock synchronization.

CockroachDB is built as a distributed, transactional, consistent key-value store that uses RocksDB as its per-node storage engine. The primary design goals behind CockroachDB are support for ACID transactions, horizontal scalability, and (most of all) survivability, hence the name. CockroachDB uses serializable isolation by default, which is a stronger isolation level than most other databases provide.
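
Because CockroachDB speaks the PostgreSQL wire protocol, a stock PostgreSQL driver such as psycopg2 can talk to it. The host, database, and table below are hypothetical; the point is that the transaction runs under CockroachDB’s default SERIALIZABLE isolation across whichever nodes hold the data.

```python
import psycopg2

# CockroachDB is wire-compatible with PostgreSQL; 26257 is its default SQL port.
# Host, database, and table names here are hypothetical.
conn = psycopg2.connect(host="cockroach.example.com", port=26257,
                        dbname="bank", user="app", sslmode="require")

with conn, conn.cursor() as cur:
    # Both updates commit atomically under serializable isolation; in production
    # you would also retry on serialization failures (SQLSTATE 40001).
    cur.execute("UPDATE accounts SET balance = balance - 50 WHERE id = 1")
    cur.execute("UPDATE accounts SET balance = balance + 50 WHERE id = 2")

conn.close()
```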

When I reviewed CockroachDB in early 2018, its JOIN performance wasn’t very good. That has since been fixed. CockroachDB supports spreading a cluster over multiple availability zones, and also offers fully managed cloud database clusters on Google Cloud Platform or Amazon Web Services.

Google Cloud Spanner

Google Cloud Spanner is a managed distributed database that has the scalability of NoSQL databases while retaining SQL compatibility, relational schemas, ACID transactions, and external consistency. Spanner comes very close to getting around the CAP theorem: formally it is a consistent (CP) system, but its availability is high enough that in practice most users can treat it as if it were both consistent and available.

Spanner is sharded, globally distributed, and replicated, and it uses the Paxos algorithm for reaching a consensus among its nodes. Spanner uses two-phase commits for strong consistency, but it treats Paxos groups, rather than individual nodes, as the participants in a transaction. Each Paxos group needs only a quorum of its members to be available, not 100 percent of them.
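
From application code, the Paxos and two-phase-commit machinery is hidden behind an ordinary client API. This minimal sketch uses the google-cloud-spanner Python library with hypothetical instance, database, and table names.

```python
from google.cloud import spanner  # pip install google-cloud-spanner

# Hypothetical instance, database, and table names.
client = spanner.Client()
instance = client.instance("my-instance")
database = instance.database("orders-db")

def move_funds(transaction):
    # Both statements commit atomically, even if the rows live in different
    # splits replicated by different Paxos groups.
    transaction.execute_update("UPDATE accounts SET balance = balance - 50 WHERE id = 1")
    transaction.execute_update("UPDATE accounts SET balance = balance + 50 WHERE id = 2")

database.run_in_transaction(move_funds)

# Strongly consistent read.
with database.snapshot() as snapshot:
    for row in snapshot.execute_sql("SELECT id, balance FROM accounts"):
        print(row)
```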

In internal use at Google, Spanner has shown better than five nines availability, i.e. better than 99.999 percent, which works out to no more than about five minutes of downtime a year. That’s good enough that most programmers don’t bother to write code to handle Spanner availability failures.

Spanner uses Google Common SQL, a dialect of ANSI 2011 SQL. Common SQL is not exactly the same as any of the SQL dialects used by PostgreSQL, MySQL, SQL Server, or Oracle Database, differing somewhat in data types and dramatically in the area of data manipulation.
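
To give a flavor of the differences, here is the same invented table defined in PostgreSQL-style SQL and in Spanner’s DDL, held as Python strings; note the different type names and the placement of the primary key.

```python
# The same (invented) table in two dialects. Spanner uses INT64/STRING types,
# puts PRIMARY KEY after the column list, and has no SERIAL auto-increment.
POSTGRES_DDL = """
CREATE TABLE customers (
    id     BIGSERIAL PRIMARY KEY,
    name   VARCHAR(255) NOT NULL,
    joined TIMESTAMPTZ
);
"""

SPANNER_DDL = """
CREATE TABLE customers (
    id     INT64 NOT NULL,
    name   STRING(255) NOT NULL,
    joined TIMESTAMP
) PRIMARY KEY (id);
"""
```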


Martin Heller is a contributing editor and reviewer for InfoWorld. Formerly a web and Windows programming consultant, he developed databases, software, and websites from his office in Andover, Massachusetts, from 1986 to 2010. More recently, he has served as VP of technology and education at Alpha Software and chairman and CEO at Tubifi.
