Serialisable Isolation doesn't have to mean slower!

We recently came across a scenario in which a production workload was assumed to require Read Committed isolation in order to keep request latency to and from the database low. This assumption, albeit correct in that context, made us think.

If your consistency and transactional accuracy requirements mean you can only use Serialisable isolation, does that mean you cannot deliver a low-latency experience?

In general, when comparing Read Committed against Serialisable isolation, the expectation is that Serialisable will deliver less favourable latency and throughput.

In this blog we discuss why running under Serialisable isolation does not necessarily mean running slowly.

In serialisable isolation, transactions are completely isolated from each other, which means that they can't see each other's uncommitted changes. This can result in a higher level of consistency and accuracy, but it can also impact performance.

One of the main reasons why serialisable isolation can be slower than other isolation levels, such as read committed, is that it requires more locking and checking of data.

When a transaction is running under serialisable isolation, it needs to acquire locks on all the data it reads and writes, and it needs to check for conflicts with other transactions to ensure that it doesn't violate the isolation guarantees. This extra locking and checking can lead to increased contention and overhead, which can impact performance.
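The locking and conflict checking described above can be sketched with a toy lock table. This is a hypothetical illustration of why lock-based serialisable execution adds contention, not how any particular database implements it: reads take shared locks, writes take exclusive locks, and conflicting requests must wait or abort.

```python
# A toy lock table: every read takes a shared ("S") lock and every
# write an exclusive ("X") lock; incompatible requests are refused,
# which is where contention and overhead come from.
# Illustrative sketch only; names and structure are hypothetical.
class LockTable:
    def __init__(self):
        self.locks = {}  # key -> ("S", {txn_ids}) or ("X", txn_id)

    def acquire(self, txn_id, key, mode):
        """Try to take a lock; return False on conflict (a real system
        would block the caller or abort the transaction)."""
        held = self.locks.get(key)
        if held is None:
            self.locks[key] = (mode, {txn_id} if mode == "S" else txn_id)
            return True
        held_mode, holders = held
        if mode == "S" and held_mode == "S":
            holders.add(txn_id)  # shared locks are mutually compatible
            return True
        if held_mode == "X" and holders == txn_id:
            return True  # re-entrant access under the transaction's own lock
        return False  # any other combination conflicts

locks = LockTable()
assert locks.acquire("t1", "row:42", "S")      # t1 reads row 42
assert locks.acquire("t2", "row:42", "S")      # a concurrent read is fine
assert not locks.acquire("t3", "row:42", "X")  # a write must wait: contention
```

Every extra refused `acquire` in a real system is a transaction stalling or retrying, which is exactly the overhead the paragraph above describes.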

That being said, it is possible to optimise serialisable isolation to improve performance. For example, some databases use multi-version concurrency control (MVCC) to avoid locking and improve concurrency. MVCC allows transactions to read and write data without acquiring locks, which can reduce contention and improve performance.

Ultimately, whether serialisable isolation can be as fast as say read committed isolation depends on the specific implementation and optimisation of the database system. In some cases, it may be possible to achieve similar performance with serialisable isolation, while in others, read committed isolation may be faster.

CockroachDB implements MVCC for increased performance

CockroachDB implements Multi-Version Concurrency Control (MVCC) as its primary concurrency control mechanism. MVCC is a technique used by many modern databases to provide high concurrency and isolation guarantees while minimising lock contention.

In CockroachDB, each row in a table is represented by a series of versions, with each version having a timestamp indicating when it was created or updated. When a transaction reads a row, it sees the most recent version with a timestamp that is less than or equal to the transaction's read timestamp. When a transaction modifies a row, it creates a new version with a timestamp greater than the transaction's start timestamp.
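The versioned-row behaviour described above can be sketched in a few lines. This is a minimal, hypothetical model of MVCC reads, loosely following the description: each write appends a `(timestamp, value)` version, and a read at timestamp `ts` returns the newest version at or before `ts`.

```python
import bisect

# Minimal MVCC version store: illustrative only, not CockroachDB's
# actual storage layout.
class MVCCStore:
    def __init__(self):
        self.versions = {}  # key -> sorted list of (timestamp, value)

    def write(self, key, timestamp, value):
        bisect.insort(self.versions.setdefault(key, []), (timestamp, value))

    def read(self, key, timestamp):
        history = self.versions.get(key, [])
        # Newest version with a timestamp <= the read timestamp.
        i = bisect.bisect_right(history, (timestamp, chr(0x10FFFF)))
        return history[i - 1][1] if i > 0 else None

store = MVCCStore()
store.write("k", 10, "v1")
store.write("k", 20, "v2")
assert store.read("k", 15) == "v1"  # snapshot at ts=15 sees the older version
assert store.read("k", 25) == "v2"  # a later snapshot sees the newer one
assert store.read("k", 5) is None   # nothing existed yet at ts=5
```

Note that reads never block writes here: a writer simply appends a new version, while readers keep seeing the version that matches their snapshot timestamp.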

By using MVCC, CockroachDB can provide snapshot isolation guarantees, meaning that a transaction sees a consistent snapshot of the database at a specific point in time, regardless of concurrent modifications by other transactions. This allows for high concurrency and avoids many of the problems associated with traditional locking-based concurrency control.

However, it's worth noting that even with MVCC, CockroachDB still needs to use locking in some cases to enforce certain constraints, such as foreign key constraints or unique indexes. Additionally, there can still be contention and overhead associated with MVCC in certain scenarios, such as when there are many long-running transactions or when a high degree of serialisation is required.

CockroachDB includes various optimisations to mitigate these issues, but performance can still depend on the specific workload and usage patterns.

Follower reads give us a little bit more oomph!

In a distributed database system like CockroachDB, data is typically replicated across multiple nodes or replicas for redundancy and high availability. By default, read queries are directed to the leader replica, which is responsible for coordinating writes and ensuring consistency across the replicas.

However, in some cases, it may be beneficial to perform read queries on a follower replica instead of the leader. For example, if the leader is experiencing high write traffic, directing read queries to a follower can help alleviate the load on the leader and improve overall system performance. Additionally, follower reads can provide low-latency access to data for read-heavy workloads.

CockroachDB supports follower reads by allowing read-only queries to be directed to a follower replica, rather than the leader. Follower reads are performed with snapshot isolation, meaning that the follower replica provides a snapshot of the database at a particular point in time, and the read-only query operates on that snapshot. Since the follower replica is not responsible for coordinating writes, it can often provide lower latency access to the data.
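The follower-read idea can be sketched as follows. The sketch is hypothetical: a follower replica serves a read without contacting the leader only when the read timestamp is at or below the point up to which the follower is known to be caught up (in CockroachDB this is tracked via closed timestamps); anything more recent must go to the leader. All names and numbers here are illustrative.

```python
# Illustrative follower replica: serves slightly stale snapshot reads
# locally, and rejects reads it is not yet caught up for.
class FollowerReplica:
    def __init__(self):
        self.data = {}             # key -> list of (timestamp, value)
        self.closed_timestamp = 0  # replicated state is complete up to here

    def apply(self, key, timestamp, value):
        self.data.setdefault(key, []).append((timestamp, value))

    def can_serve(self, read_ts):
        # The follower may only serve timestamps it has fully caught up to;
        # more recent reads must be routed to the leader.
        return read_ts <= self.closed_timestamp

    def read(self, key, read_ts):
        versions = [v for ts, v in sorted(self.data.get(key, [])) if ts <= read_ts]
        return versions[-1] if versions else None

follower = FollowerReplica()
follower.apply("k", 5, "old")
follower.apply("k", 30, "new")
follower.closed_timestamp = 10

assert follower.can_serve(8)        # slightly stale read: follower handles it
assert not follower.can_serve(20)   # too recent: must go to the leader
assert follower.read("k", 8) == "old"
```

The trade-off is visible in the sketch: the follower answers quickly from local state, but only for a timestamp slightly in the past.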

However, it's important to note that follower reads can return stale data, since the follower replica may not have the most up-to-date view of the database. CockroachDB provides several mechanisms to ensure consistency and avoid stale reads, including lease-based replication, range quorums, and timestamp caching.

Overall, follower reads in CockroachDB can be a useful tool for optimising read performance and improving overall system scalability.

For more reading on follower reads, please see the CockroachDB documentation.

Thank you for reading.