Multiversion Concurrency Control (MVCC)
 
 

What Is MVCC, Really?

Multiversion Concurrency Control (MVCC) is a method used by databases to manage concurrent access to data. Instead of locking records and forcing transactions to wait for each other, MVCC allows them to operate on independent versions of the same data. Readers see a consistent snapshot of the data as it existed when they began, while writers are free to make changes without disturbing those readers.

Imagine reading a newspaper archive while someone else is editing today’s headline. You’re unaffected — your snapshot of the past remains unchanged, even as the present evolves.

MVCC is fundamental to many modern database systems and OLAP engines. It powers high-concurrency systems with minimal contention and enables snapshot isolation without locking.

 

How MVCC Works: A Deep Dive into the Versioning Mechanism

Multiversion Concurrency Control (MVCC) fundamentally changes how databases manage updates. Rather than updating a record in place — which would risk interfering with concurrent reads or writes — MVCC-based systems create new versions of records whenever changes happen. These versions are stored alongside each other, each tagged with metadata that helps the database know which version to show to whom.

Let’s break this down step by step.

1. Each Update Produces a New Version — Not an Overwrite

In traditional systems, updating a row means overwriting its current value. But in MVCC, the update process creates a new version of the row. The original version is left untouched and still visible to any transactions that began before the update occurred.

So if Transaction A updates a row while Transaction B is reading from it, Transaction B will continue to see the old version — its own snapshot — while Transaction A prepares a new version in isolation.

Example:

Suppose we have a row in a products table:

product_id: 1
price: 100
  • Transaction 10 starts and sees price = 100.

  • Transaction 11 begins later and updates the price to 120.

In MVCC, this update does not overwrite the existing row. Instead:

  • A new version of the row with price = 120 is created.

  • The old version with price = 100 is retained.

  • The two versions now coexist temporarily.
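Here is what that sequence looks like as a minimal two-session sketch in PostgreSQL, assuming the hypothetical products table above (REPEATABLE READ makes the snapshot cover the whole transaction):

```sql
-- Session 1 ("Transaction 10"): snapshot covers the whole transaction.
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT price FROM products WHERE product_id = 1;   -- returns 100

-- Session 2 ("Transaction 11"): create a new version of the row and commit.
BEGIN;
UPDATE products SET price = 120 WHERE product_id = 1;
COMMIT;

-- Session 1 again: still reads from its snapshot, not the committed update.
SELECT price FROM products WHERE product_id = 1;   -- still returns 100
COMMIT;

-- A brand-new transaction sees the latest committed version.
SELECT price FROM products WHERE product_id = 1;   -- returns 120
```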

2. Visibility: Who Sees What?

The database needs a way to determine which version of a row each transaction should see. That’s where timestamps or transaction IDs (TXNs) come in.

Each row version typically stores metadata such as:

  • created_by_txn (or xmin in PostgreSQL): the ID of the transaction that created this version.

  • deleted_by_txn (or xmax): the ID of the transaction that marked this version as obsolete (e.g., due to an update or delete).

  • Possibly a pointer to the previous version (used in rollback or undo chains).
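In PostgreSQL you can observe this metadata directly: xmin and xmax are real, normally hidden system columns available on every table (products is the hypothetical table from earlier):

```sql
-- Which transaction created this version (xmin), and which transaction,
-- if any, has marked it deleted or superseded (xmax; 0 means none).
SELECT xmin, xmax, product_id, price
FROM products
WHERE product_id = 1;

-- For comparison, the current session's transaction ID.
SELECT txid_current();
```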

Visibility Rule (simplified):
A transaction can see a row version only if:

  • The creating transaction committed before the reading transaction started.

  • The deleting transaction (if any) had not committed before the reading transaction took its snapshot.

This ensures snapshot isolation — every transaction sees the world as it was at the moment it began, regardless of what other transactions are doing in parallel.
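To make the rule concrete, here it is spelled out as runnable SQL over a tiny hand-built relation. The row_versions relation and its commit-timestamp columns are invented for illustration; real engines evaluate this logic internally against their own metadata:

```sql
-- Two versions of product 1: the old one was superseded at ts 25.
WITH row_versions(product_id, price, created_commit_ts, deleted_commit_ts) AS (
  VALUES (1, 100, 10, 25),    -- created at ts 10, logically deleted at ts 25
         (1, 120, 25, NULL)   -- created at ts 25, still current
)
SELECT product_id, price
FROM row_versions
WHERE created_commit_ts <= 20                                 -- creator committed before our snapshot (ts 20)
  AND (deleted_commit_ts IS NULL OR deleted_commit_ts > 20);  -- no committed delete as of ts 20
-- Returns the version with price = 100: exactly what a transaction
-- whose snapshot was taken at ts 20 should see.
```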

3. Reads Never Block Writes — and Vice Versa

This is one of MVCC’s most important benefits.

  • Readers simply walk through the table, and for each row, decide which version is visible to them based on the snapshot timestamp.

  • Writers don’t wait for readers to finish. They just create a new version and mark the old one as deleted (logically — the physical cleanup happens later).

Because of this design:

  • Readers are never blocked by in-progress updates.

  • Writers don’t need to acquire exclusive row locks that would otherwise block reads.

  • Multiple users can query or update the same data simultaneously with minimal contention.

4. Version Chains and Undo Pointers

Some implementations — like MySQL InnoDB and Oracle — optimize for storage by using undo logs or rollback segments.

Rather than copying the full row for each version (as PostgreSQL does), they:

  • Store the latest committed version in the base table.

  • Chain older versions via a linked undo log.

  • If a transaction needs to see an older version, the engine reconstructs it by walking the undo chain backward.

This reduces write amplification and saves storage, at the cost of some additional logic during reads.
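The chain walk itself can be mimicked in plain SQL. The sketch below is a toy model, not engine internals: a hypothetical versions relation where each version points at its predecessor, and a recursive query reconstructs what a snapshot taken at timestamp 15 would see:

```sql
WITH RECURSIVE versions(version_id, prev_id, price, commit_ts) AS (
  VALUES (3, 2,    140, 30),  -- newest committed version (lives in the base table)
         (2, 1,    120, 20),  -- older version (reachable via the undo chain)
         (1, NULL, 100, 10)   -- oldest version
),
walk AS (
  SELECT * FROM versions WHERE version_id = 3   -- start from the base row
  UNION ALL
  SELECT v.* FROM versions v
  JOIN walk w ON v.version_id = w.prev_id       -- follow the undo pointer backward
)
SELECT price
FROM walk
WHERE commit_ts <= 15        -- only versions committed before the snapshot qualify
ORDER BY commit_ts DESC
LIMIT 1;                     -- newest qualifying version: price = 100
```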

5. Commit and Cleanup

The new row version created by a writer becomes visible only after the transaction commits. If the transaction is rolled back, the new version is discarded, and the old one remains current.

Later, the system performs garbage collection (e.g., PostgreSQL’s VACUUM or MySQL’s purge thread) to remove obsolete versions that are no longer needed by any active transaction.

This deferred cleanup prevents long-running queries from being disrupted but does introduce a trade-off: you have to manage bloat.
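In PostgreSQL, this lifecycle is directly observable with standard catalog views and commands (the table name is illustrative):

```sql
-- How many dead (obsolete) row versions each table is carrying,
-- and when autovacuum last cleaned it.
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;

-- Reclaim dead versions manually, reporting what was done.
VACUUM (VERBOSE, ANALYZE) products;
```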

Why It Works

MVCC’s versioning mechanism is what enables it to deliver:

  • Non-blocking concurrency

  • Predictable performance under load

  • Stable, consistent reads for long-running queries

  • Minimized deadlocks and contention

The costs are increased storage, more complex cleanup, and some additional internal bookkeeping. But for many transactional and analytical workloads, the trade-offs are more than worth it.


 

Comparing MVCC with Locking-Based Concurrency Control

MVCC and locking-based concurrency control represent two fundamentally different philosophies for handling concurrent access to data in a database. To understand their trade-offs, we need to examine how they work, how they impact transaction behavior, and what kind of systems they are best suited for.

Philosophy: Isolation by Versioning vs. Isolation by Exclusion

  • MVCC provides isolation by letting each transaction see its own versioned view of the data. Updates don't block reads, and vice versa, because every transaction operates on a snapshot.

  • Locking-based systems ensure isolation by preventing access. If one transaction reads or writes a row, others must wait until the lock is released. This avoids anomalies but can cause contention and delay.

Think of MVCC like handing every user a private, timestamped copy of a ledger — while locking is like making everyone queue up to use the same physical notebook.

Data Reads

| Aspect | MVCC | Locking |
| --- | --- | --- |
| Blocking | Reads never block writes; reads see a stable snapshot of the data as of their transaction start time. | Reads can block or be blocked by writes, depending on isolation level and lock granularity. |
| Read Consistency | Strong — guaranteed consistent snapshot view of the database (typically snapshot isolation). | Depends on isolation level: dirty reads are possible at lower levels; repeatable reads require more locking. |
| Impact of Long Reads | Safe — even long-running reads don’t block others or get blocked. | Dangerous — long-running reads may hold locks and block writers, or suffer inconsistent reads if locks aren’t used. |

In practice, MVCC excels in analytical workloads or applications that require long-running reads (like reporting or dashboards), since readers don't interfere with writers.

Data Writes

| Aspect | MVCC | Locking |
| --- | --- | --- |
| Update Behavior | Creates a new version of the row; the old version stays until obsolete. | Updates the row in place, typically requiring exclusive locks. |
| Write Blocking | Writers don’t block readers; only serialization conflicts (e.g., write-write) are blocked. | Writers must acquire exclusive locks, potentially blocking readers and other writers. |
| Conflict Resolution | Deferred to commit time: if two writers update the same row, only one commit succeeds; the other fails with a serialization error and must retry. | Handled via lock contention; the second writer waits until the lock is released. |

In MVCC systems, conflict detection is deferred — you find out whether your write conflicts only when you try to commit. This can be more efficient but also more complex to manage, especially under high contention.

Concurrency and Throughput

| Aspect | MVCC | Locking |
| --- | --- | --- |
| Concurrency | High — transactions operate on independent versions. | Moderate — concurrency is constrained by lock contention. |
| Deadlocks | Rare — because reads don’t acquire locks, deadlock graphs are smaller. | Common — especially with complex, multi-statement transactions and poor access patterns. |
| Performance Under Load | Predictable — especially for mixed read-write workloads. | Degrades under contention — lock waits and timeouts increase. |

This is one of MVCC’s biggest wins. In high-concurrency environments (e.g., OLTP systems, SaaS apps), MVCC enables smoother scaling without increasing transaction wait times.

Implementation and Storage Trade-offs

| Aspect | MVCC | Locking |
| --- | --- | --- |
| Storage Cost | High — multiple versions of rows must be stored until cleaned up. | Low — only one version of a row exists at any time. |
| Cleanup Complexity | Requires background garbage collection (e.g., VACUUM in PostgreSQL). | Simpler — nothing to clean up beyond lock structures. |
| Implementation Complexity | High — visibility logic, version tracking, and cleanup processes are non-trivial. | Simpler — concurrency logic handled through standard lock management. |

MVCC systems must balance storage pressure and read performance. If old versions accumulate too quickly (due to long transactions or slow cleanup), performance can suffer — especially in write-heavy systems.

Anomalies and Isolation Levels

Let’s look at how both approaches relate to isolation levels defined in the SQL standard:

| Isolation Level | MVCC | Locking |
| --- | --- | --- |
| Read Uncommitted | Typically unsupported — MVCC avoids exposing uncommitted data. | Supported — readers can see dirty data. |
| Read Committed | Reads only data committed before the statement; each statement gets a new snapshot. | Uses shared/exclusive locks to ensure committed reads. |
| Repeatable Read / Snapshot Isolation | Supported natively — a single consistent snapshot for the entire transaction. | Requires holding shared locks for all reads until commit. |
| Serializable | Emulated using Serializable Snapshot Isolation (SSI) or extra validation. | Uses strict 2PL (two-phase locking) — all locks held until commit. |

MVCC is generally safer out of the box at moderate isolation levels, offering better consistency guarantees without as much locking overhead. But full serializability still requires additional coordination in MVCC systems.
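In an MVCC system such as PostgreSQL, the isolation level is chosen per transaction with standard syntax:

```sql
BEGIN ISOLATION LEVEL REPEATABLE READ;   -- one snapshot for the whole transaction
-- ... every statement here sees the same snapshot ...
COMMIT;

BEGIN ISOLATION LEVEL SERIALIZABLE;      -- snapshot plus SSI conflict detection
-- ... may abort with a serialization error if a dangerous interleaving occurs ...
COMMIT;
```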

Real-World Examples

  • PostgreSQL: MVCC implementation with full-tuple versioning; uses xmin/xmax transaction IDs per row; relies on VACUUM for cleanup.

  • MySQL (InnoDB): MVCC with undo logs; uses a single base row and applies undo segments to reconstruct old versions.

  • Oracle: MVCC with rollback segments; similar to undo logs but optimized for long transaction history.

  • SQL Server (default mode): Uses locking; optional snapshot isolation via READ_COMMITTED_SNAPSHOT and version store.
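For SQL Server specifically, switching default reads from locking to versioning is a database-level setting. Both options below are real T-SQL; the database name is illustrative:

```sql
-- Make READ COMMITTED reads use row versions (the tempdb version store)
-- instead of shared locks.
ALTER DATABASE MyAppDb SET READ_COMMITTED_SNAPSHOT ON;

-- Additionally allow transactions to request SNAPSHOT isolation explicitly.
ALTER DATABASE MyAppDb SET ALLOW_SNAPSHOT_ISOLATION ON;
```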

When Should You Use MVCC or Locking?

Use MVCC when:

  • Your application is read-heavy or mixed read-write.

  • You need high concurrency (e.g., real-time dashboards, SaaS apps).

  • You want stable performance under load.

  • You can manage or tolerate the extra storage usage.

Use Locking when:

  • You require strict immediate consistency across transactions.

  • Your workload is simple, with few concurrent users or mostly batch updates.

  • You want to minimize storage overhead and system complexity.

  • You’re already using a DBMS that doesn’t support MVCC natively.

MVCC Isn’t a Silver Bullet, But It’s a Leap Forward

MVCC trades temporal space (multiple row versions) for transactional independence. It decouples reads from writes in a way that’s elegant and scalable — especially for modern, concurrent workloads. But it also adds complexity under the hood, and like any tool, it has to be tuned and monitored.

In short: if you’re building for high concurrency, low latency, and snapshot consistency, MVCC will usually be the better tool — but it demands that you understand how it works under the hood, especially in edge cases.

 

Challenges and Limitations of MVCC

Multiversion Concurrency Control (MVCC) is incredibly powerful, but it’s not a free lunch. The benefits of non-blocking reads and snapshot isolation come with trade-offs — in storage, system complexity, write efficiency, and maintenance overhead.

Let’s explore these in detail.

1. Storage Overhead from Multiple Versions

What happens:
Every time a row is updated or deleted in an MVCC system, a new version is created, and the old version is preserved — at least temporarily — for active transactions that may still need to read it.

Why it's a problem:
This leads to data bloat. For write-heavy systems or those with long-running transactions, old versions pile up and consume valuable disk space.

Examples:

  • PostgreSQL stores full copies of modified rows. A single update on a 1 KB row results in two versions: the old 1 KB tuple and a new 1 KB one. Multiple updates quickly multiply this footprint.

  • In systems like Oracle or MySQL (InnoDB), undo logs or rollback segments can also grow large and need periodic purging.

Implications:

  • Increased I/O and slower queries due to bloated table size.

  • Indexes may also grow unnecessarily, impacting performance.

Mitigations:

  • Regular vacuuming or cleanup (PostgreSQL VACUUM, InnoDB purge thread).

  • Monitoring transaction age and long-lived connections.

  • Archiving or partitioning for high-churn tables.

2. Complexity in Garbage Collection (Version Cleanup)

What happens:
Obsolete versions must be cleaned up — but only when it’s safe to do so. That is, when no active transaction might still need to see them.

Why it's a problem:
MVCC systems must track the oldest active transaction and determine which versions are still visible to any in-flight sessions. This tracking and cleanup can be difficult to tune and resource-intensive.

Examples:

  • PostgreSQL uses VACUUM and autovacuum, which periodically scan tables and indexes to reclaim space. But if VACUUM falls behind (e.g., during high write loads), bloat can spiral out of control.

  • InnoDB (MySQL) uses a background purge thread to clean undo logs, but this too can fall behind under sustained writes.

Implications:

  • Tables and indexes become bloated.

  • Query plans degrade because of outdated statistics and increased scan volume.

  • Disk usage spikes and maintenance windows become necessary.

Mitigations:

  • Proper autovacuum tuning (e.g., more aggressive thresholds).

  • Avoiding long-lived transactions (especially idle ones in connection pools).

  • Manually triggering cleanup for critical tables during off-peak hours.
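As a concrete example of the first mitigation, PostgreSQL exposes per-table autovacuum knobs as real storage parameters (the table name is illustrative):

```sql
-- Vacuum this hot table once ~2% of its rows are dead, instead of the
-- default 20%, so cleanup keeps pace with the write rate.
ALTER TABLE orders SET (
  autovacuum_vacuum_scale_factor = 0.02,
  autovacuum_vacuum_threshold    = 1000
);
```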

3. Write Amplification and I/O Load

What happens:
In MVCC, a simple UPDATE turns into a series of write operations:

  • Create a new version of the row.

  • Mark the old version as deleted.

  • Update all relevant indexes.

  • Possibly trigger background cleanup.

Why it's a problem:
This increases disk I/O per logical operation. What looks like one write at the SQL level may translate into multiple physical writes — known as write amplification.

Examples:

  • PostgreSQL must write a new tuple and update all affected indexes. If VACUUM hasn’t yet reclaimed the space held by old versions, that space can’t be reused for new writes.

  • In high-churn OLTP systems, this can stress the disk subsystem.

Implications:

  • Increased latency for write-heavy workloads.

  • SSD wear-out in cloud systems (especially under large-scale updates).

  • Resource contention with background cleanup threads.

Mitigations:

  • Batch writes and minimize unnecessary updates.

  • Avoid redundant (no-op) updates, and avoid touching indexed columns unnecessarily; some engines can then skip index maintenance.

  • Monitor disk I/O and tune WAL settings (write-ahead logging) appropriately.
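One PostgreSQL-specific lever worth knowing here: when an update changes no indexed columns and the page has free space, the engine can perform a HOT (heap-only tuple) update that skips index maintenance entirely. Reserving page headroom makes that more likely (fillfactor is a real storage parameter; the table name is illustrative):

```sql
-- Keep 20% of each page free so new row versions can land on the same
-- page, enabling HOT updates that avoid index write amplification.
ALTER TABLE products SET (fillfactor = 80);
-- Existing pages only honor the new setting after a rewrite,
-- e.g. VACUUM FULL (takes an exclusive lock) or pg_repack.
```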

4. Long-Running Transactions Keep Old Versions Alive

What happens:
Transactions that run for a long time — especially idle ones — prevent the system from discarding old row versions, since those versions might still be visible to them.

Why it's a problem:
This causes version retention, leading to table bloat and increased memory usage.

Examples:

  • A reporting job that runs for 30 minutes may force PostgreSQL to retain every superseded row version produced during that window, even though hundreds of other transactions have since committed.

Implications:

  • Higher disk usage.

  • Slower queries as bloated pages are scanned.

  • Memory pressure if undo logs accumulate in RAM (e.g., in Oracle or InnoDB).

Mitigations:

  • Avoid long-running or idle-in-transaction sessions.

  • For reporting, use dedicated read replicas or data snapshots.

  • Configure timeouts for idle transactions in application-level connection pools.
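The first and last mitigations are directly actionable in PostgreSQL:

```sql
-- Spot sessions sitting idle inside an open transaction; these pin
-- old row versions and block cleanup.
SELECT pid, usename, state, now() - xact_start AS xact_age
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY xact_age DESC;

-- Abort such sessions automatically after five minutes (a real setting;
-- requires superuser and a config reload).
ALTER SYSTEM SET idle_in_transaction_session_timeout = '5min';
SELECT pg_reload_conf();
```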

5. Non-Serializable by Default

What happens:
MVCC guarantees snapshot isolation, which prevents most concurrency anomalies — but not all.

Why it's a problem:
Some systems assume MVCC gives them serializability (the strongest isolation level), but this isn't true by default. It still allows anomalies like write skew.

Examples:

  • In a banking app, two concurrent transactions may each verify that a customer’s two accounts together hold $100, then each withdraw $100 from a different account. Both commit, and the combined balance goes negative.

Implications:

  • Potential for subtle data consistency bugs in business logic.

  • Hard-to-detect anomalies under concurrency.

Mitigations:

  • Use true SERIALIZABLE isolation when needed (e.g., PostgreSQL’s SSI mode).

  • Design application logic with concurrency in mind.

  • Audit critical paths for isolation-level assumptions.
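A minimal sketch of that banking scenario in PostgreSQL, assuming a hypothetical accounts table and an application-level rule that Alice’s two accounts together must stay non-negative (the two sessions interleave top to bottom):

```sql
-- Session A
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT sum(balance) FROM accounts WHERE owner = 'alice';   -- sees 100

-- Session B (concurrently)
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT sum(balance) FROM accounts WHERE owner = 'alice';   -- also sees 100

-- Each withdraws against the total it saw, touching DIFFERENT rows,
-- so the two write sets never overlap.
-- Session A
UPDATE accounts SET balance = balance - 100
WHERE owner = 'alice' AND kind = 'checking';
COMMIT;

-- Session B
UPDATE accounts SET balance = balance - 100
WHERE owner = 'alice' AND kind = 'savings';
COMMIT;   -- succeeds: combined balance is now -100 (write skew).
          -- Under BEGIN ISOLATION LEVEL SERIALIZABLE, PostgreSQL's SSI
          -- would abort one of the two with a serialization failure.
```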

6. Implementation Complexity and Debugging Overhead

What happens:
The internals of MVCC are complex: version chains, transaction ID visibility checks, cleanup heuristics, and concurrency conflict resolution.

Why it's a problem:
This increases the cognitive load for DBAs and engineers. Understanding behavior under load or during failure scenarios (e.g., crash recovery) is harder than in simple locking systems.

Examples:

  • MVCC visibility logic depends on multiple layers: transaction snapshots, commit logs, version metadata, background workers.

  • Debugging phantom data, cleanup delays, or odd visibility anomalies often requires deep inspection of system tables (e.g., pg_stat_activity, txid_current()).

Implications:

  • Steeper learning curve for operations teams.

  • More nuanced tuning and monitoring.

  • Risk of misconfiguration (e.g., aggressive autovacuum thresholds or undo retention).

Mitigations:

  • Use observability tools and dashboards for tracking transaction age and version churn.

  • Train teams on transaction lifecycle and MVCC behavior.

  • Keep critical logic under well-tested isolation levels.

7. Transaction Conflicts at Commit Time

What happens:
In MVCC, conflicting writes may not be detected immediately — especially under snapshot isolation. Instead, conflicts are checked at commit time.

Why it's a problem:
This leads to failed commits after the application has already done substantial work, which must then be retried or rolled back.

Examples:

  • Two users update the same record in parallel. Each creates its own version, and only one will succeed at commit; the other must retry.

Implications:

  • Increased retry logic in application code.

  • Higher perceived latency under contention.

  • Unpredictable throughput under hot-spot updates.

Mitigations:

  • Design for optimistic concurrency (e.g., retry-on-conflict patterns).

  • Reduce contention on hot rows (e.g., through sharding or bucketing).

  • Use SELECT ... FOR UPDATE to avoid write-write conflicts when necessary.
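Here is the failure mode and the pessimistic workaround as a concrete PostgreSQL sketch (products is the hypothetical table from earlier; sessions interleave top to bottom):

```sql
-- Session A
BEGIN ISOLATION LEVEL REPEATABLE READ;
UPDATE products SET price = 110 WHERE product_id = 1;

-- Session B (concurrently)
BEGIN ISOLATION LEVEL REPEATABLE READ;
UPDATE products SET price = 130 WHERE product_id = 1;   -- blocks, waiting on A

-- Session A
COMMIT;

-- Session B now fails with PostgreSQL's actual error:
--   ERROR:  could not serialize access due to concurrent update
-- It must ROLLBACK and rerun the whole transaction.

-- Pessimistic alternative: claim the row up front so writers queue early
-- instead of failing late.
BEGIN;
SELECT price FROM products WHERE product_id = 1 FOR UPDATE;
UPDATE products SET price = 130 WHERE product_id = 1;
COMMIT;
```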

MVCC Is Powerful, But Requires Care

| Challenge | Why It Matters |
| --- | --- |
| Storage overhead | Increased disk usage and index size. |
| Cleanup complexity | Delays in reclaiming space can hurt performance. |
| Write amplification | More I/O per transaction, especially under high load. |
| Long-running transactions | Prevent cleanup and inflate undo/version chains. |
| Not serializable by default | Subtle concurrency anomalies are possible. |
| Complexity in tuning/debugging | Harder to trace issues without deep system knowledge. |
| Commit-time conflicts | Failures may occur late, requiring careful retries. |

MVCC systems — especially in PostgreSQL, InnoDB, Oracle — are robust and production-ready, but they perform best when you understand their internals and plan accordingly.

 

When to Use MVCC — and When You Might Not Want To

MVCC shines in many modern data environments, but it’s not a one-size-fits-all solution. Its value lies in maximizing concurrency, minimizing blocking, and supporting snapshot-consistent reads. However, it introduces storage and complexity trade-offs that make it less suitable for certain use cases.

Below is a detailed breakdown of when MVCC is the right choice, and when other concurrency control models — like locking — may be preferable.

MVCC Is Ideal When...

 

You're operating in a high-concurrency environment with both reads and writes.

MVCC is purpose-built for environments where multiple transactions must access and modify data at the same time without stepping on each other. The fundamental MVCC promise — “reads don’t block writes and writes don’t block reads” — allows for smooth concurrent access.

Examples:

  • Online transactional systems (OLTP) such as e-commerce platforms, fintech apps, SaaS services.

  • User-facing applications where performance degradation due to locking is unacceptable.

Why MVCC fits:

  • Readers see a consistent snapshot even during high write churn.

  • Writers can proceed without being stalled by read locks.

  • Throughput remains high as transactions don’t queue behind each other.

You want predictable performance for long-running read queries.

MVCC guarantees that reads operate on a consistent snapshot, even if the underlying data is being modified by other transactions. This is particularly useful when read queries take longer to complete, such as in analytical dashboards or ad hoc data exploration.

Examples:

  • Business intelligence dashboards refreshing every few seconds.

  • Reporting queries that aggregate over large time windows.

  • Data validation tools scanning billions of rows (e.g., as seen in Airtable’s StarRocks case).

Why MVCC fits:

  • Long reads don’t block or get blocked by writes.

  • The snapshot remains consistent throughout the query’s execution.

  • Predictable behavior regardless of concurrent load.

You require snapshot isolation for correctness.

In many business domains, ensuring repeatable and consistent reads during a transaction is critical to avoid incorrect results, even if strict serializability isn’t necessary.

Examples:

  • Audit log queries that must report a consistent view of historical data.

  • Financial systems computing balances or aggregations based on multiple rows.

  • ETL pipelines reading from production databases where partial updates could corrupt data.

Why MVCC fits:

  • Snapshot isolation ensures that the transaction sees a frozen view of the database.

  • No “dirty reads” or “non-repeatable reads” unless explicitly downgraded.

  • It avoids many common anomalies seen in lower isolation levels without requiring full locking.

You want to avoid deadlocks and reduce operational complexity under load.

Traditional locking-based systems are prone to deadlocks when transactions access data in overlapping but inconsistent orders. MVCC reduces this risk by ensuring that readers don’t acquire shared locks, making the locking graph smaller and less tangled.

Examples:

  • Multi-user applications performing overlapping updates on shared rows.

  • Microservices with asynchronous access patterns (e.g., background updates + user queries).

  • Complex transactional workflows (e.g., order pipelines, data syncs).

Why MVCC fits:

  • The reduced use of locks simplifies transaction coordination.

  • Deadlocks are less common, and often limited to write-write conflicts only.

  • Applications require fewer retries and experience fewer transaction rollbacks.

MVCC Might Be Less Suitable When...

 

Your workload is mostly batch, low-concurrency, or single-writer.

MVCC’s strengths in handling concurrency offer little benefit in sequential or batch-only workloads. In such cases, the overhead of maintaining multiple row versions and performing cleanup outweighs its concurrency benefits.

Examples:

  • Nightly ETL jobs that truncate and reload entire tables.

  • Offline aggregation pipelines in data warehouses.

  • Reporting systems that query read-only snapshots.

Why MVCC may be overkill:

  • Locks don’t cause contention if only one writer is active.

  • MVCC adds unnecessary version tracking and cleanup workload.

  • Simple locking or append-only models are easier to maintain.

You require strict, immediate consistency across transactions.

MVCC — especially at the default snapshot isolation level — does not provide full serializability. This means that some anomalies, such as write skew, can occur unless you explicitly enable a higher isolation level.

Examples:

  • Banking systems enforcing balance constraints across multiple rows.

  • Inventory management systems preventing double booking.

  • Applications relying on external locks or custom validation logic.

Why MVCC might not be enough:

  • Snapshot isolation may allow “non-conflicting” concurrent transactions to violate business invariants.

  • True SERIALIZABLE isolation (e.g., PostgreSQL's SSI) adds overhead and complexity.

  • Locking with strict two-phase locking (2PL) may be more intuitive for correctness.

Storage costs are tightly constrained, and updates are very frequent.

Every update in MVCC creates a new version. If you’re operating in a write-heavy environment where space is limited or expensive (e.g., on SSDs or cloud-based block storage), the storage overhead becomes a concern.

Examples:

  • High-frequency trading systems logging thousands of updates per second.

  • IoT ingestion pipelines with fine-grained updates to timestamped rows.

  • Multi-tenant SaaS platforms storing short-lived records per user action.

Why MVCC might not fit:

  • Multiple row versions multiply disk usage.

  • Aggressive cleanup mechanisms (like PostgreSQL’s VACUUM) can’t always keep up.

  • Systems optimized for in-place updates with locking may be more space-efficient.

Summary Decision Table

| Scenario | MVCC Advantageous? | Notes |
| --- | --- | --- |
| High-concurrency OLTP workload | ✅ Yes | Enables non-blocking, high-throughput access. |
| Real-time reporting with consistent snapshots | ✅ Yes | Supports long-running reads without interference. |
| Single-writer ETL pipeline | ❌ Not ideal | Locking is simpler and more efficient. |
| Strict cross-row business rules | ⚠️ Depends | Requires serializable isolation or application-side checks. |
| Disk-constrained, write-heavy environment | ❌ Not ideal | MVCC’s versioning can consume significant space. |
| SaaS multi-tenant platform | ✅ Often | Especially if each tenant runs isolated read-write sessions. |

 

Frequently Asked Questions (FAQ) on MVCC

 

What does MVCC stand for and what does it solve?

MVCC stands for Multiversion Concurrency Control. It solves the problem of simultaneous access to data in a database, letting readers and writers work concurrently without blocking each other — a major improvement over traditional locking.

How is MVCC different from locking mechanisms?

MVCC creates multiple versions of rows, allowing each transaction to see a consistent snapshot. Locking mechanisms block access until the resource is free. MVCC favors concurrency and responsiveness, while locking favors simplicity and immediate consistency.

Does MVCC completely eliminate the need for locks?

Not entirely. Most MVCC systems still use lightweight locks for some internal operations (e.g., index updates, metadata changes), and writers still block concurrent writers on the same row. But for row-level data access, MVCC eliminates blocking between readers and writers.

Is MVCC always better than locking?

Not always. MVCC excels in high-concurrency environments but comes with trade-offs like storage overhead and cleanup complexity. Locking may be better for simple, low-concurrency workloads that require strict write consistency.

Why does PostgreSQL need VACUUM in MVCC?

Because PostgreSQL creates a new tuple for each update, it accumulates obsolete versions. VACUUM reclaims space by removing those dead tuples. Without regular vacuuming, tables bloat and queries slow down.

How does MVCC affect read performance?

MVCC often improves read performance by avoiding lock contention. Readers don’t wait for writers, so queries return faster. However, if old versions pile up and aren’t cleaned, performance may degrade.

Can MVCC lead to anomalies or inconsistent data?

MVCC provides snapshot isolation, which avoids many anomalies like dirty reads and non-repeatable reads. However, it doesn't guarantee full serializability unless explicitly configured (e.g., Serializable mode in PostgreSQL).

Does MVCC increase write latency?

Yes, it can. Each write involves creating a new version, updating indexes, and triggering cleanup logic. The cost varies by implementation, but MVCC generally trades some write efficiency for improved concurrency.

What are the best practices for managing MVCC overhead?

  • Tune cleanup mechanisms (e.g., VACUUM in PostgreSQL, undo retention in Oracle).

  • Avoid long-running transactions — they keep old versions alive.

  • Monitor disk usage and transaction age regularly.

  • Use appropriate isolation levels to minimize unnecessary version creation.

Which isolation levels are supported by MVCC?

MVCC typically supports Read Committed, Repeatable Read, and Serializable. Each offers a different trade-off between consistency and concurrency. Many MVCC systems implement Repeatable Read as snapshot isolation, though Read Committed is the usual default level.