PostgreSQL’s Multi-Version Concurrency Control (MVCC) architecture represents one of the most sophisticated approaches to handling concurrent database operations. By maintaining multiple versions of data rows and providing each transaction with a consistent snapshot of the database state, MVCC allows readers and writers to operate without blocking each other. However, despite these elegant mechanisms, race conditions can still emerge in specific scenarios, particularly when multiple transactions attempt to modify the same data concurrently.
Understanding and mitigating these race conditions is crucial for building robust, high-performance applications that can handle concurrent workloads safely and efficiently.
Understanding Race Conditions in MVCC Context
The Foundation: How MVCC Works
MVCC operates on the principle of snapshot isolation, where each transaction sees a consistent view of the database as it existed at the transaction’s start time. This is achieved through:
- Transaction IDs (XIDs): Each transaction receives a unique identifier
- Row Versioning: Multiple versions of the same logical row can coexist
- Visibility Rules: Transactions can only see data committed before their snapshot time
- Tuple Headers: Each row contains metadata about which transactions can see it
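These visibility rules can be sketched as a small, deliberately simplified Python model. Real PostgreSQL also consults hint bits, the commit log, and the set of in-progress XIDs; all names below are illustrative, not PostgreSQL internals:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TupleVersion:
    xmin: int            # XID of the transaction that created this version
    xmax: Optional[int]  # XID that deleted/superseded it (None = still live)
    value: str

def visible(tup, snapshot_xid, committed):
    """Simplified MVCC visibility: a version is visible if its creator
    committed before our snapshot, and it was not deleted by a
    transaction that also committed before our snapshot."""
    created_ok = tup.xmin in committed and tup.xmin < snapshot_xid
    deleted = (tup.xmax is not None and tup.xmax in committed
               and tup.xmax < snapshot_xid)
    return created_ok and not deleted

# Two versions of the same logical row: v1 was superseded by xid 105.
v1 = TupleVersion(xmin=100, xmax=105, value="old")
v2 = TupleVersion(xmin=105, xmax=None, value="new")
committed = {100, 105}

# A snapshot taken at xid 103 still sees the old version...
print([t.value for t in (v1, v2) if visible(t, 103, committed)])  # ['old']
# ...while a snapshot taken at xid 110 sees the new one.
print([t.value for t in (v1, v2) if visible(t, 110, committed)])  # ['new']
```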
When Race Conditions Occur
Despite MVCC’s robustness, race conditions manifest in several critical scenarios:
1. Read-Modify-Write Conflicts: When multiple transactions read the same data, perform calculations, and attempt to write back results, the classic “lost update” problem occurs. Each transaction operates on potentially stale data, leading to incorrect final states.
2. Existence Checks and Insertions: The “phantom read” phenomenon becomes problematic when transactions check for the absence of data before inserting. Between the check and the insert, another transaction may create the very data being tested for absence.
3. Concurrent Row Updates: When multiple transactions target the same row for updates, PostgreSQL’s row-level locking mechanisms engage, but without proper handling this can lead to blocking, deadlocks, or serialization failures.
4. Aggregate Operations: Operations that depend on aggregate values (sums, counts, etc.) across multiple rows are particularly vulnerable, as the underlying data may change between reading and acting upon the aggregate.
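The first of these, the lost update, is easy to reproduce outside the database. The sketch below replays the interleaving deterministically in plain Python, with a `threading.Lock` standing in for a row-level lock; the class and function names are illustrative:

```python
import threading

class Account:
    """Toy account; the lock plays the role of a row-level lock."""
    def __init__(self, balance):
        self.balance = balance
        self.lock = threading.Lock()

# Deterministic replay of the lost update: both "transactions" read
# the same balance before either writes.
acct = Account(100)
a_read = acct.balance          # Transaction A reads 100
b_read = acct.balance          # Transaction B reads 100
acct.balance = a_read - 80     # A writes 20
acct.balance = b_read - 60     # B writes 40, silently discarding A's update
print(acct.balance)            # 40 -- yet $140 was "withdrawn" from $100

def withdraw(acct, amount):
    # Holding the lock across read-validate-write (like SELECT ... FOR UPDATE)
    # makes the second withdrawal see the first one's result.
    with acct.lock:
        if acct.balance >= amount:
            acct.balance -= amount
            return True
        return False

acct2 = Account(100)
print(withdraw(acct2, 80), withdraw(acct2, 60), acct2.balance)  # True False 20
```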
Comprehensive Strategies for Race Condition Mitigation
1. Pessimistic Locking: SELECT … FOR UPDATE
Pessimistic locking provides the strongest guarantee against concurrent modifications by explicitly locking rows before modification.
-- Basic row locking
BEGIN;

SELECT account_id, balance, last_updated
FROM accounts
WHERE account_id = $1
FOR UPDATE;

-- Perform business logic validation
-- Update with confidence that no other transaction can interfere
UPDATE accounts
SET balance = balance - $2,
    last_updated = NOW()
WHERE account_id = $1;

COMMIT;
Lock Variants and Their Use Cases:
-- FOR UPDATE: strongest row lock, blocks all concurrent writers and lockers
SELECT * FROM orders WHERE id = 123 FOR UPDATE;

-- FOR NO KEY UPDATE: like FOR UPDATE but weaker; does not block
-- FOR KEY SHARE lockers (e.g. foreign-key checks)
SELECT * FROM products WHERE sku = 'ABC123' FOR NO KEY UPDATE;

-- FOR SHARE: allows concurrent FOR SHARE readers but blocks writes
SELECT * FROM inventory WHERE product_id = 456 FOR SHARE;

-- FOR KEY SHARE: weakest lock; blocks only DELETE and key-changing UPDATE
SELECT * FROM "references" WHERE target_id = 789 FOR KEY SHARE;  -- "references" is a reserved word and must be quoted
Advanced Locking Patterns:
-- Conditional locking with NOWAIT
BEGIN;
SELECT * FROM critical_resource
WHERE id = $1
FOR UPDATE NOWAIT;  -- fails immediately if the lock cannot be acquired

-- Skip locked rows pattern (useful for job queue processors)
SELECT * FROM work_queue
WHERE status = 'pending'
FOR UPDATE SKIP LOCKED
LIMIT 10;
2. Atomic Operations: INSERT … ON CONFLICT
The UPSERT pattern eliminates race conditions by making check-and-insert operations atomic.
-- Basic upsert
INSERT INTO user_sessions (user_id, session_token, expires_at)
VALUES ($1, $2, $3)
ON CONFLICT (user_id)
DO UPDATE SET
    session_token = EXCLUDED.session_token,
    expires_at = EXCLUDED.expires_at,
    updated_at = NOW();
Complex Conflict Resolution:
-- Conditional updates in conflict resolution
INSERT INTO product_inventory (product_id, quantity, reserved)
VALUES ($1, $2, 0)
ON CONFLICT (product_id)
DO UPDATE SET
    quantity = CASE
        WHEN product_inventory.quantity + EXCLUDED.quantity >= 0
        THEN product_inventory.quantity + EXCLUDED.quantity
        ELSE product_inventory.quantity
    END,
    last_restocked = CASE
        WHEN EXCLUDED.quantity > 0 THEN NOW()
        ELSE product_inventory.last_restocked
    END
WHERE product_inventory.quantity + EXCLUDED.quantity >= 0;
3. Application-Level Synchronization: Advisory Locks
Advisory locks provide application-controlled synchronization mechanisms beyond row-level locking.
-- Transaction-level advisory lock
BEGIN;
SELECT pg_advisory_xact_lock(hashtext('user_' || $1));
-- Critical section: complex multi-table operations
-- Lock automatically released on transaction end
COMMIT;

-- Session-level advisory locks for longer operations
SELECT pg_advisory_lock(12345);
-- Perform long-running maintenance
SELECT pg_advisory_unlock(12345);
Practical Advisory Lock Patterns:
-- Non-blocking advisory lock attempt
DO $$
BEGIN
    IF pg_try_advisory_xact_lock(hashtext('daily_report_' || CURRENT_DATE)) THEN
        -- Generate report
        INSERT INTO daily_reports (date, data)
        VALUES (CURRENT_DATE, calculate_daily_stats());
    ELSE
        RAISE NOTICE 'Daily report generation already in progress';
    END IF;
END $$;
4. Isolation Level Management: SERIALIZABLE
The SERIALIZABLE isolation level provides the highest level of consistency by detecting serialization anomalies and forcing transaction retries.
-- Transfer logic intended to run under SERIALIZABLE. Note that a
-- serialization failure aborts the WHOLE transaction, so the retry loop
-- must live in the application: a plpgsql function cannot restart its own
-- transaction, and SET TRANSACTION cannot be issued from inside a function
-- that was invoked by a query.
CREATE OR REPLACE FUNCTION transfer_funds(
    from_account INTEGER,
    to_account INTEGER,
    amount DECIMAL
) RETURNS BOOLEAN AS $$
BEGIN
    -- Debit source account, guarding against overdraft
    UPDATE accounts
    SET balance = balance - amount
    WHERE account_id = from_account
      AND balance >= amount;

    IF NOT FOUND THEN
        RAISE EXCEPTION 'Insufficient funds';
    END IF;

    -- Credit destination account
    UPDATE accounts
    SET balance = balance + amount
    WHERE account_id = to_account;

    RETURN TRUE;
END;
$$ LANGUAGE plpgsql;

-- Application side (sketch): run the call in a SERIALIZABLE transaction
-- and, on SQLSTATE 40001, roll back, wait briefly, and retry a bounded
-- number of times:
--   BEGIN ISOLATION LEVEL SERIALIZABLE;
--   SELECT transfer_funds($1, $2, $3);
--   COMMIT;
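Because PostgreSQL rolls back the entire transaction on a serialization failure, the retry loop ultimately belongs in the application. Here is a minimal, driver-agnostic sketch; the `SerializationFailure` class stands in for whatever exception your driver raises for SQLSTATE 40001, and the transaction callable is illustrative:

```python
import time

class SerializationFailure(Exception):
    """Stand-in for a driver error carrying SQLSTATE 40001."""

def run_with_retry(txn_fn, max_retries=3, base_delay=0.0):
    """Run txn_fn, re-running the WHOLE transaction on serialization
    failure. Retrying inside the same transaction cannot work, because
    PostgreSQL has already aborted it."""
    for attempt in range(max_retries):
        try:
            return txn_fn()
        except SerializationFailure:
            if attempt == max_retries - 1:
                raise
            # linear backoff here; exponential backoff with jitter is also common
            time.sleep(base_delay * (attempt + 1))

# A fake transaction that conflicts twice, then succeeds:
attempts = {"n": 0}
def flaky_transfer():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise SerializationFailure("could not serialize access")
    return "committed"

print(run_with_retry(flaky_transfer))  # committed
```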
5. Optimistic Concurrency Control
Optimistic locking assumes conflicts are rare and detects them at update time rather than preventing them.
-- Version-based optimistic locking
CREATE TABLE documents (
    id SERIAL PRIMARY KEY,
    content TEXT,
    version INTEGER NOT NULL DEFAULT 1,
    updated_at TIMESTAMP DEFAULT NOW()
);

-- Update function with version check
CREATE OR REPLACE FUNCTION update_document(
    doc_id INTEGER,
    new_content TEXT,
    expected_version INTEGER
) RETURNS BOOLEAN AS $$
DECLARE
    rows_affected INTEGER;
BEGIN
    UPDATE documents
    SET content = new_content,
        version = version + 1,
        updated_at = NOW()
    WHERE id = doc_id
      AND version = expected_version;

    GET DIAGNOSTICS rows_affected = ROW_COUNT;

    IF rows_affected = 0 THEN
        -- Version mismatch: document was modified concurrently
        RETURN FALSE;
    END IF;

    RETURN TRUE;
END;
$$ LANGUAGE plpgsql;
Using System Columns for Optimistic Locking:
-- Using the xmin system column as a version indicator
UPDATE user_preferences
SET theme = $2,
    updated_at = NOW()
WHERE user_id = $1
  AND xmin::text = $3;  -- $3 contains the xmin value from the initial SELECT
Advanced Patterns and Best Practices
Minimizing Lock Contention
-- Bad: long-running transaction holding locks
BEGIN;
SELECT * FROM hot_table WHERE id = 123 FOR UPDATE;
-- ... extensive processing while the lock is held ...
UPDATE hot_table SET value = new_value WHERE id = 123;
COMMIT;

-- Better: minimize the critical section
-- (process data outside the transaction, in the application:
--  local_data := process_data(input);)
BEGIN;
SELECT * FROM hot_table WHERE id = 123 FOR UPDATE;
UPDATE hot_table SET value = local_data WHERE id = 123;
COMMIT;
Batch Processing with Proper Locking
-- Process multiple items with proper locking
WITH locked_items AS (
    SELECT id, data
    FROM work_queue
    WHERE status = 'pending'
    ORDER BY priority DESC
    FOR UPDATE SKIP LOCKED
    LIMIT 100
)
UPDATE work_queue
SET status = 'processing',
    started_at = NOW()
FROM locked_items
WHERE work_queue.id = locked_items.id
RETURNING work_queue.id, work_queue.data;
Handling Deadlocks Gracefully
-- Deadlock-aware transaction wrapper
CREATE OR REPLACE FUNCTION execute_with_deadlock_retry(
    operation TEXT,
    max_retries INTEGER DEFAULT 3
) RETURNS VOID AS $$
DECLARE
    retry_count INTEGER := 0;
BEGIN
    LOOP
        BEGIN
            EXECUTE operation;
            EXIT;  -- success
        EXCEPTION
            WHEN deadlock_detected THEN
                retry_count := retry_count + 1;
                IF retry_count >= max_retries THEN
                    RAISE EXCEPTION 'Deadlock persists after % retries', max_retries;
                END IF;
                -- Randomized exponential backoff
                PERFORM pg_sleep(0.1 * power(2, retry_count) * random());
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
Performance Considerations and Monitoring
Lock Monitoring Queries
-- Monitor active locks
SELECT
    l.locktype,
    l.database,
    l.relation::regclass,
    l.page,
    l.tuple,
    l.transactionid,
    l.mode,
    l.granted,
    a.query,
    a.query_start,
    a.state
FROM pg_locks l
JOIN pg_stat_activity a ON l.pid = a.pid
WHERE NOT l.granted
ORDER BY a.query_start;

-- Identify blocking relationships
SELECT
    blocked_locks.pid AS blocked_pid,
    blocked_activity.usename AS blocked_user,
    blocking_locks.pid AS blocking_pid,
    blocking_activity.usename AS blocking_user,
    blocked_activity.query AS blocked_statement,
    blocking_activity.query AS blocking_statement
FROM pg_catalog.pg_locks blocked_locks
JOIN pg_catalog.pg_stat_activity blocked_activity
    ON blocked_locks.pid = blocked_activity.pid
JOIN pg_catalog.pg_locks blocking_locks
    ON blocked_locks.locktype = blocking_locks.locktype
    AND blocked_locks.database IS NOT DISTINCT FROM blocking_locks.database
    AND blocked_locks.relation IS NOT DISTINCT FROM blocking_locks.relation
    AND blocked_locks.page IS NOT DISTINCT FROM blocking_locks.page
    AND blocked_locks.tuple IS NOT DISTINCT FROM blocking_locks.tuple
    AND blocked_locks.transactionid IS NOT DISTINCT FROM blocking_locks.transactionid
    AND blocking_locks.pid <> blocked_locks.pid  -- don't match a lock against itself
JOIN pg_catalog.pg_stat_activity blocking_activity
    ON blocking_locks.pid = blocking_activity.pid
WHERE NOT blocked_locks.granted;
Summary and Decision Matrix
| Strategy | Use Case | Pros | Cons | Performance Impact |
|---|---|---|---|---|
| SELECT … FOR UPDATE | Preventing concurrent updates to critical data | Strong consistency, simple to implement | Can cause blocking, potential deadlocks | Medium – holds locks longer |
| INSERT … ON CONFLICT | Atomic insert-or-update operations | Eliminates race conditions, atomic | Limited to single-table operations | Low – single atomic operation |
| Advisory Locks | Application-level coordination | Flexible, custom logic | Requires careful management | Low – lightweight |
| SERIALIZABLE Isolation | Maximum consistency requirements | Strongest guarantees | Requires retry logic, can be expensive | High – may cause frequent retries |
| Optimistic Locking | Low contention scenarios | Non-blocking, good for read-heavy workloads | Requires application retry logic | Very Low – no locks held |
The choice of strategy depends on your specific use case, performance requirements, and consistency needs. Often, a combination of approaches works best, using different strategies for different parts of your application based on their concurrency patterns and requirements.
Example Handling of Race Condition in Bank Account System
The bank account system example demonstrates a real-world scenario where multiple race conditions can occur simultaneously. Let’s dissect each component and understand how it addresses specific concurrency challenges.
System Architecture Overview
The system consists of two main tables and several stored procedures designed to handle concurrent financial operations safely:
Core Tables Structure
-- Account table with proper constraints
CREATE TABLE accounts (
    account_id SERIAL PRIMARY KEY,
    customer_id INTEGER NOT NULL,
    balance DECIMAL(15,2) NOT NULL CHECK (balance >= 0),
    version INTEGER NOT NULL DEFAULT 1,
    created_at TIMESTAMP DEFAULT NOW(),
    updated_at TIMESTAMP DEFAULT NOW()
);

-- Transaction log for audit trail
CREATE TABLE account_transactions (
    transaction_id SERIAL PRIMARY KEY,
    from_account INTEGER,
    to_account INTEGER,
    amount DECIMAL(15,2) NOT NULL,
    transaction_type VARCHAR(20) NOT NULL,
    status VARCHAR(20) NOT NULL DEFAULT 'pending',
    created_at TIMESTAMP DEFAULT NOW(),
    completed_at TIMESTAMP
);
Key Design Decisions Explained
1. Balance Constraint: CHECK (balance >= 0)
- Purpose: Prevents negative balances at the database level
- Race Condition Prevention: Even if application logic fails, the database ensures data integrity
- Impact: Failed transactions will roll back rather than create invalid states
2. Version Column: version INTEGER NOT NULL DEFAULT 1
- Purpose: Enables optimistic locking for scenarios where pessimistic locking isn’t suitable
- Usage: Incremented on every update, allows detection of concurrent modifications
- Benefit: Provides a lightweight alternative to row locking for read-heavy scenarios
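The version-column mechanism amounts to a compare-and-swap: write only if the version you read is still current. A toy in-memory model (names illustrative; the SQL equivalent is the `WHERE ... AND version = expected_version` clause shown earlier):

```python
class Document:
    """In-memory stand-in for a versioned table row."""
    def __init__(self, content):
        self.content = content
        self.version = 1

def update_if_version(doc, new_content, expected_version):
    # Mirrors UPDATE ... WHERE id = ... AND version = expected_version:
    # the write succeeds only if nobody bumped the version since our read.
    if doc.version != expected_version:
        return False  # concurrent modification detected; caller re-reads and retries
    doc.content = new_content
    doc.version += 1
    return True

doc = Document("draft")
v = doc.version                                 # both clients read version 1
print(update_if_version(doc, "edit by A", v))   # True: A wins, version -> 2
print(update_if_version(doc, "edit by B", v))   # False: B's version is stale
print(doc.content, doc.version)                 # edit by A 2
```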
3. Audit Trail: account_transactions Table
- Purpose: Maintains complete history of all financial operations
- Concurrency Benefit: Allows transaction status tracking without locking account records
- Compliance: Essential for financial systems requiring audit trails
Race Condition Scenarios Addressed
Scenario 1: Concurrent Withdrawals
The Problem:
Time | Transaction A       | Transaction B
-----|---------------------|--------------------
T1   | READ balance: $100  | READ balance: $100
T2   | Withdraw $80        | Withdraw $60
T3   | UPDATE balance: $20 | UPDATE balance: $40
Result: Final balance is $40, but $140 was withdrawn from $100!
How Our System Prevents This:
-- In withdraw_funds function
SELECT balance INTO current_balance
FROM accounts
WHERE account_id = p_account_id
FOR UPDATE;  -- ← this prevents the race condition
The FOR UPDATE clause ensures that only one transaction can read and modify the balance at a time. Transaction B will wait until Transaction A completes.
Scenario 2: Double-Spending in Transfers
The Problem: Two concurrent transfers from the same account could both read the same balance and both succeed, even if the combined amount exceeds the available balance.
How Our System Prevents This:
-- Lock accounts in consistent order to prevent deadlocks
IF p_from_account < p_to_account THEN
    SELECT balance INTO from_balance
    FROM accounts
    WHERE account_id = p_from_account
    FOR UPDATE;

    PERFORM 1 FROM accounts
    WHERE account_id = p_to_account
    FOR UPDATE;
ELSE
    PERFORM 1 FROM accounts
    WHERE account_id = p_to_account
    FOR UPDATE;

    SELECT balance INTO from_balance
    FROM accounts
    WHERE account_id = p_from_account
    FOR UPDATE;
END IF;
Key Insights:
- Consistent Lock Ordering: Always lock accounts in the same order (by ID) to prevent deadlocks
- Both Accounts Locked: Prevents interference from other transactions affecting either account
- Atomic Operation: The entire transfer happens as one atomic unit
Detailed Function Analysis
1. withdraw_funds Function
CREATE OR REPLACE FUNCTION withdraw_funds(
    p_account_id INTEGER,
    p_amount DECIMAL,
    p_transaction_id INTEGER
) RETURNS BOOLEAN AS $$
DECLARE
    current_balance DECIMAL;
BEGIN
    -- The function runs at the caller's isolation level; READ COMMITTED
    -- (the default) suffices here because the FOR UPDATE lock below
    -- serializes access to the account row.

    -- Lock the account row
    SELECT balance INTO current_balance
    FROM accounts
    WHERE account_id = p_account_id
    FOR UPDATE;

    -- Validate sufficient funds
    IF current_balance IS NULL OR current_balance < p_amount THEN
        UPDATE account_transactions
        SET status = 'failed',
            completed_at = NOW()
        WHERE transaction_id = p_transaction_id;
        RETURN FALSE;
    END IF;

    -- Perform withdrawal
    UPDATE accounts
    SET balance = balance - p_amount,
        version = version + 1,
        updated_at = NOW()
    WHERE account_id = p_account_id;

    -- Update transaction log
    UPDATE account_transactions
    SET status = 'completed',
        completed_at = NOW()
    WHERE transaction_id = p_transaction_id;

    RETURN TRUE;

    -- Note: an EXCEPTION handler that writes status = 'failed' and then
    -- re-raises would have that write rolled back along with the rest of
    -- the transaction; record unexpected failures from the application
    -- after the rollback instead.
END;
$$ LANGUAGE plpgsql;
Step-by-Step Breakdown:
Step 1: Isolation Level
-- Issued by the caller as the first statement of the transaction
-- (it cannot take effect inside a function invoked by a query):
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
- Why READ COMMITTED: Balances consistency with performance
- Alternative: Could use REPEATABLE READ or SERIALIZABLE for stricter consistency
- Trade-off: Higher isolation levels reduce concurrency but increase consistency
Step 2: Pessimistic Locking
SELECT balance INTO current_balance
FROM accounts
WHERE account_id = p_account_id
FOR UPDATE;
- Critical Section: This creates a critical section around the account
- Blocking Behavior: Other transactions attempting to modify this account will wait
- Lock Duration: Lock held until transaction commits or rolls back
Step 3: Business Logic Validation
IF current_balance < p_amount THEN
    UPDATE account_transactions
    SET status = 'failed',
        completed_at = NOW()
    WHERE transaction_id = p_transaction_id;
    RETURN FALSE;
END IF;
- Validation Within Lock: Ensures no other transaction can change balance during validation
- Audit Trail: Failed attempts are recorded for compliance and debugging
- Early Exit: Prevents unnecessary processing if operation cannot succeed
Step 4: Atomic Update
UPDATE accounts
SET balance = balance - p_amount,
    version = version + 1,
    updated_at = NOW()
WHERE account_id = p_account_id;
- Atomic Operation: Balance update and version increment happen together
- Version Tracking: Enables detection of concurrent modifications
- Timestamp Update: Maintains audit trail of when changes occurred
2. transfer_funds Function Deep Dive
The transfer function is more complex because it involves multiple accounts and potential deadlock scenarios:
-- Lock accounts in consistent order to prevent deadlocks
IF p_from_account < p_to_account THEN
    SELECT balance INTO from_balance
    FROM accounts
    WHERE account_id = p_from_account
    FOR UPDATE;

    PERFORM 1 FROM accounts
    WHERE account_id = p_to_account
    FOR UPDATE;
ELSE
    PERFORM 1 FROM accounts
    WHERE account_id = p_to_account
    FOR UPDATE;

    SELECT balance INTO from_balance
    FROM accounts
    WHERE account_id = p_from_account
    FOR UPDATE;
END IF;
Deadlock Prevention Strategy
The Deadlock Scenario:
Time | Transaction A (Transfer 1→2)  | Transaction B (Transfer 2→1)
-----|-------------------------------|------------------------------
T1   | Lock Account 1                | Lock Account 2
T2   | Try to lock Account 2 (WAIT)  | Try to lock Account 1 (WAIT)
T3   | ← DEADLOCK DETECTED →         | ← DEADLOCK DETECTED →
Our Solution:
- Consistent Ordering: Always lock accounts in ascending ID order
- Global Rule: All transfers follow the same locking order
- Result: Impossible to create circular wait conditions
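The ordering rule can be demonstrated in a few lines of Python, with one `threading.Lock` per account standing in for a row lock. Two opposite transfers run concurrently, and because each sorts its lock set first, both always complete:

```python
import threading

# One lock per account id, analogous to row locks.
locks = {1: threading.Lock(), 2: threading.Lock()}
completed = []

def transfer(frm, to):
    # Acquire account locks in ascending id order regardless of transfer
    # direction, so two opposite transfers can never wait on each other
    # in a cycle.
    first, second = sorted((frm, to))
    with locks[first]:
        with locks[second]:
            completed.append((frm, to))

t1 = threading.Thread(target=transfer, args=(1, 2))
t2 = threading.Thread(target=transfer, args=(2, 1))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(completed))  # [(1, 2), (2, 1)] -- both finish, no deadlock
```

Had each thread locked its "from" account first instead, the two threads could each grab one lock and wait forever on the other.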
Retry Logic for Resilience
-- Retry loop for handling deadlocks
LOOP
    BEGIN
        -- ... main transaction logic ...
        EXIT;  -- success
    EXCEPTION
        WHEN serialization_failure OR deadlock_detected THEN
            retry_count := retry_count + 1;
            IF retry_count >= max_retries THEN
                UPDATE account_transactions
                SET status = 'failed', completed_at = NOW()
                WHERE transaction_id = p_transaction_id;  -- parameter, not the column
                RAISE EXCEPTION 'Transfer failed after % retries', max_retries;
            END IF;
            -- Exponential backoff with jitter
            PERFORM pg_sleep(0.1 * power(2, retry_count) * (0.5 + random() * 0.5));
    END;
END LOOP;
-- Caveat: this in-function loop helps for deadlocks caught in the
-- subtransaction; a true serialization_failure aborts the enclosing
-- transaction, so those retries must come from the client.
Retry Strategy Components:
1. Exception Handling
- Specific Exceptions: Only retries on serialization failures and deadlocks
- Other Errors: Business logic errors (insufficient funds) are not retried
- Fail-Safe: After max retries, transaction is marked as failed
2. Exponential Backoff
- Base Delay: 0.1 seconds
- Exponential Growth: Doubles with each retry
- Jitter: Random factor (0.5-1.0) prevents thundering herd
3. Audit Trail Maintenance
- Failed Transactions: Recorded even after retries fail
- Status Tracking: Complete lifecycle of each transaction attempt
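The delay formula used above (`0.1 * power(2, retry_count) * (0.5 + random() * 0.5)`) translates directly to the application side. A small sketch that computes, rather than sleeps, the delay:

```python
import random

def backoff_delay(retry_count, base=0.1, rng=random.random):
    """Delay before the Nth retry: base 0.1 s, doubled each retry, scaled
    by a jitter factor of roughly 0.5-1.0 so competing clients don't all
    wake up and collide again at the same moment."""
    return base * (2 ** retry_count) * (0.5 + rng() * 0.5)

# Retry 3 waits somewhere between 0.4 s and 0.8 s:
d = backoff_delay(3)
print(0.4 <= d <= 0.8)  # True
```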
Advanced Concurrency Patterns Demonstrated
1. Two-Phase Locking Protocol
The system implements a variant of two-phase locking:
-- Growing phase: acquire all locks
SELECT balance FROM accounts WHERE account_id = p_from_account FOR UPDATE;
PERFORM 1 FROM accounts WHERE account_id = p_to_account FOR UPDATE;

-- Working phase: perform operations
UPDATE accounts SET balance = balance - p_amount WHERE account_id = p_from_account;
UPDATE accounts SET balance = balance + p_amount WHERE account_id = p_to_account;

-- Shrinking phase: release locks (automatic on COMMIT)
COMMIT;
2. Optimistic Concurrency Control Alternative
For read-heavy scenarios, you could implement optimistic locking:
-- Alternative withdrawal using optimistic locking
CREATE OR REPLACE FUNCTION withdraw_funds_optimistic(
    p_account_id INTEGER,
    p_amount DECIMAL,
    p_expected_version INTEGER
) RETURNS BOOLEAN AS $$
DECLARE
    rows_affected INTEGER;
BEGIN
    -- Attempt optimistic update
    UPDATE accounts
    SET balance = balance - p_amount,
        version = version + 1,
        updated_at = NOW()
    WHERE account_id = p_account_id
      AND version = p_expected_version
      AND balance >= p_amount;

    GET DIAGNOSTICS rows_affected = ROW_COUNT;

    IF rows_affected = 0 THEN
        -- Either version mismatch or insufficient funds
        RETURN FALSE;
    END IF;

    RETURN TRUE;
END;
$$ LANGUAGE plpgsql;
Usage Pattern:
-- Client-side retry logic
DO $$
DECLARE
    current_version INTEGER;
    current_balance DECIMAL;
    success BOOLEAN := FALSE;
    max_retries INTEGER := 5;
    retry_count INTEGER := 0;
BEGIN
    WHILE NOT success AND retry_count < max_retries LOOP
        -- Read current state
        SELECT balance, version INTO current_balance, current_version
        FROM accounts WHERE account_id = 123;

        -- Validate business rules
        IF current_balance < 50.00 THEN
            RAISE EXCEPTION 'Insufficient funds';
        END IF;

        -- Attempt optimistic update
        success := withdraw_funds_optimistic(123, 50.00, current_version);

        IF NOT success THEN
            retry_count := retry_count + 1;
            PERFORM pg_sleep(0.01);  -- brief pause
        END IF;
    END LOOP;

    IF NOT success THEN
        RAISE EXCEPTION 'Withdrawal failed after % retries', max_retries;
    END IF;
END $$;
Performance Implications and Monitoring
1. Lock Contention Analysis
High-Contention Scenario:
-- Many concurrent operations on popular accounts
SELECT from_account AS account_id, COUNT(*) AS operation_count
FROM account_transactions
WHERE created_at > NOW() - INTERVAL '1 hour'
GROUP BY from_account
ORDER BY operation_count DESC;
Mitigation Strategies:
- Account Sharding: Distribute load across multiple account records
- Batch Processing: Group multiple operations together
- Read Replicas: Use read-only replicas for balance inquiries
2. Deadlock Monitoring
-- Monitor deadlock frequency relative to transaction volume
SELECT
    datname,
    deadlocks,
    deadlocks::numeric / NULLIF(xact_commit + xact_rollback, 0) * 100
        AS deadlocks_per_100_transactions
FROM pg_stat_database
WHERE datname = current_database();
3. Transaction Duration Analysis
-- Identify long-running transactions
SELECT
    pid,
    now() - query_start AS duration,
    query,
    state
FROM pg_stat_activity
WHERE (now() - query_start) > interval '5 minutes'
  AND state != 'idle';
Real-World Adaptations
1. Microservices Architecture
In a microservices environment, you might need distributed locking:
-- Using advisory locks for distributed coordination
CREATE OR REPLACE FUNCTION distributed_transfer(
    p_from_account INTEGER,
    p_to_account INTEGER,
    p_amount DECIMAL,
    p_service_instance_id TEXT
) RETURNS BOOLEAN AS $$
DECLARE
    lock_key BIGINT;
    success BOOLEAN;
BEGIN
    -- Create a lock key that is stable for this account pair
    lock_key := hashtext(p_service_instance_id || '_' ||
                         LEAST(p_from_account, p_to_account) || '_' ||
                         GREATEST(p_from_account, p_to_account));

    -- Acquire the advisory lock (released at transaction end)
    IF NOT pg_try_advisory_xact_lock(lock_key) THEN
        RAISE EXCEPTION 'Could not acquire distributed lock';
    END IF;

    -- Proceed with transfer logic
    SELECT transfer_funds(p_from_account, p_to_account, p_amount)
    INTO success;

    RETURN success;
END;
$$ LANGUAGE plpgsql;
2. High-Frequency Trading Adaptations
For extremely high-frequency scenarios:
-- Optimized for minimal lock time
CREATE OR REPLACE FUNCTION hft_transfer(
    p_from_account INTEGER,
    p_to_account INTEGER,
    p_amount DECIMAL
) RETURNS BOOLEAN AS $$
BEGIN
    -- Pre-validate outside the critical section (cheap early exit;
    -- the authoritative check happens under the lock below)
    IF NOT EXISTS (
        SELECT 1 FROM accounts
        WHERE account_id = p_from_account
          AND balance >= p_amount
    ) THEN
        RETURN FALSE;
    END IF;

    -- Minimal critical section: lock both rows, then update both or neither
    WITH account_updates AS (
        SELECT account_id,
               CASE WHEN account_id = p_from_account
                    THEN balance - p_amount
                    ELSE balance + p_amount
               END AS new_balance
        FROM accounts
        WHERE account_id IN (p_from_account, p_to_account)
        FOR UPDATE
    )
    UPDATE accounts
    SET balance = au.new_balance,
        version = version + 1
    FROM account_updates au
    WHERE accounts.account_id = au.account_id
      -- Update neither row if the debit would go negative
      AND NOT EXISTS (SELECT 1 FROM account_updates x WHERE x.new_balance < 0);

    RETURN FOUND;
END;
$$ LANGUAGE plpgsql;
Testing Race Conditions
1. Concurrent Load Testing
-- Test script to simulate concurrent operations
-- (uses the pg_background extension to launch work in background workers;
-- alternatively, drive the transfers from several client connections,
-- e.g. with pgbench)
DO $$
DECLARE
    i INTEGER;
BEGIN
    -- Create test accounts
    FOR i IN 1..100 LOOP
        INSERT INTO accounts (customer_id, balance)
        VALUES (i, 1000.00);
    END LOOP;

    -- Simulate concurrent transfers
    FOR i IN 1..1000 LOOP
        PERFORM pg_background_launch(format(
            'SELECT transfer_funds(%s, %s, %s)',
            (random() * 99 + 1)::INTEGER,
            (random() * 99 + 1)::INTEGER,
            round((random() * 100 + 1)::numeric, 2)
        ));
    END LOOP;
END $$;
2. Consistency Verification
-- Verify system consistency
WITH transaction_summary AS (
    SELECT
        SUM(CASE WHEN status = 'completed' THEN amount ELSE 0 END) AS total_transferred,
        COUNT(CASE WHEN status = 'completed' THEN 1 END) AS successful_transfers,
        COUNT(CASE WHEN status = 'failed' THEN 1 END) AS failed_transfers
    FROM account_transactions
    WHERE transaction_type = 'transfer'
),
account_summary AS (
    SELECT SUM(balance) AS total_balance
    FROM accounts
)
SELECT
    ts.total_transferred,
    ts.successful_transfers,
    ts.failed_transfers,
    ac.total_balance,
    -- Total balance should remain constant in a closed system
    1000.00 * (SELECT COUNT(*) FROM accounts) AS expected_total
FROM transaction_summary ts
CROSS JOIN account_summary ac;
This comprehensive bank account system demonstrates how to handle multiple types of race conditions in a real-world scenario, providing both safety and performance through careful application of PostgreSQL’s concurrency control mechanisms.