[Unfinished] Reviewing consistency models from my previous distributed systems courses

Regarding the consistency models of distributed and concurrent programming, some basic definitions from Wikipedia are as follows:

Linearizability: In concurrent programming, an operation (or set of operations) is atomic, linearizable, indivisible or uninterruptible if it appears to the rest of the system to occur instantaneously. Atomicity is a guarantee of isolation from concurrent processes. Additionally, atomic operations commonly have a succeed-or-fail definition — they either successfully change the state of the system, or have no apparent effect.
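As a small illustration of the succeed-or-fail property, here is a toy compare-and-swap cell in Python (the class and method names are my own, not from any library); the lock makes the read-compare-write step appear instantaneous to other threads:

```python
import threading

class AtomicCell:
    """A cell whose compare_and_swap either fully succeeds or has no effect."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def compare_and_swap(self, expected, new):
        # The lock makes the read-compare-write sequence indivisible:
        # no other thread can ever observe a partial update.
        with self._lock:
            if self._value == expected:
                self._value = new
                return True   # succeeded: the state changed atomically
            return False      # failed: no apparent effect on the state

    def get(self):
        with self._lock:
            return self._value

cell = AtomicCell(0)
print(cell.compare_and_swap(0, 1))  # True: 0 -> 1
print(cell.compare_and_swap(0, 2))  # False: value is now 1, no effect
print(cell.get())                   # 1
```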

Serializability: In concurrency control of databases,[1][2] transaction processing (transaction management), and various transactional applications (e.g., transactional memory[3] and software transactional memory), both centralized and distributed, a transaction schedule is serializable if its outcome (e.g., the resulting database state) is equal to the outcome of its transactions executed serially, i.e., sequentially without overlapping in time. Transactions are normally executed concurrently (they overlap), since this is the most efficient way. Serializability is the major correctness criterion for concurrent transactions’ executions.
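To make the "outcome equal to some serial execution" test concrete, here is a tiny single-variable sketch (my own toy example, not a real database): two transactions run in an interleaved lost-update schedule, and the result matches neither serial order, so the schedule is not serializable:

```python
def run_serial(ops_lists, x=0):
    # Execute whole transactions one after another, no overlap.
    for ops in ops_lists:
        x = ops(x)
    return x

t1 = lambda x: x + 10   # T1: read x, write x + 10
t2 = lambda x: x * 2    # T2: read x, write x * 2

# The only two serial outcomes: T1;T2 -> 20, T2;T1 -> 10.
serial_outcomes = {run_serial([t1, t2]), run_serial([t2, t1])}

# An interleaved (lost-update) schedule: both transactions read the
# initial value before either one writes back.
x0 = 0
r1 = x0          # T1 reads 0
r2 = x0          # T2 reads 0
x = r1 + 10      # T1 writes 10
x = r2 * 2       # T2 writes 0 -- T1's update is lost

print(serial_outcomes)       # {10, 20}
print(x in serial_outcomes)  # False -> this schedule is not serializable
```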

Strict Consistency

Strict consistency is the strongest consistency model. It requires that if a process reads any memory location, the value returned by the read operation is the value written by the most recent write operation to that location. For a uni-processor system, this model makes perfect sense, but it is almost impossible to implement in a distributed shared memory system. Consider a situation where there are two processors, A and B. Processor A writes a value at a particular time instant and processor B reads that value at a later time. Consider a light cone originating at processor A. If processor A and processor B are placed adjacent to each other on a timeline, the point where a ray of light from this light cone can touch processor B’s timeline determines the instant at which processor B can see the new value of the data written by processor A. If processor B tries to read the data before this time, it would read the previous value of the data, even though processor A has already written the new value.
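The propagation-delay argument can be simulated in a toy discrete-time model (all names and the fixed-delay assumption are mine): a write becomes visible to a remote reader only after a delay, so a read issued before that instant necessarily violates strict consistency:

```python
class DelayedReplica:
    """Register whose writes reach the remote reader only after a fixed
    propagation delay -- the 'light cone' of the writing processor."""
    def __init__(self, value, delay):
        self.history = [(0, value)]  # list of (visible_at_time, value)
        self.delay = delay

    def write(self, t, value):
        # A write issued at time t arrives at the reader at t + delay.
        self.history.append((t + self.delay, value))

    def read(self, t):
        # The reader sees the latest value that has already arrived.
        visible = [v for (arrival, v) in self.history if arrival <= t]
        return visible[-1]

r = DelayedReplica(value=0, delay=5)
r.write(t=10, value=42)   # processor A writes at t = 10
print(r.read(t=12))       # 0  -- B reads before the light cone arrives
print(r.read(t=15))       # 42 -- the new value is visible from t = 15 on
```

Strict consistency would demand that the read at t = 12 already return 42, which no physical system can guarantee.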

Sequential consistency

The sequential consistency model, defined by Lamport (1979)[4], is a weaker memory model than strict consistency. It requires that the result of any execution be the same as if the operations of all processors were executed in some sequential order, with the operations of each individual processor appearing in this sequence in the order specified by its program. Unlike strict consistency, it does not require that order to agree with real time.

Linearizability (also known as atomic consistency) can be defined as sequential consistency with an added real-time constraint: if one operation completes before another begins, the earlier operation must precede the later one in the sequential order.
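Sequential consistency can be checked by brute force on tiny histories: search for some interleaving that preserves each process's program order and explains every read. The sketch below (my own helper, exponential-time, for illustration only) shows a history that is sequentially consistent even though one read is stale in real time, so it would not be linearizable:

```python
from itertools import permutations

def sequentially_consistent(histories, initial=0):
    """histories[p] is process p's program-ordered list of ('w', v) writes
    and ('r', v) reads on a single variable.  Return True if some total
    order preserving every program order explains all reads."""
    ops = [(p, i) for p, h in enumerate(histories) for i in range(len(h))]
    for order in permutations(ops):
        # Reject orders that reverse any process's program order.
        if any(a[0] == b[0] and a[1] > b[1]
               for i, a in enumerate(order) for b in order[i + 1:]):
            continue
        value, ok = initial, True
        for p, i in order:
            kind, v = histories[p][i]
            if kind == 'w':
                value = v
            elif value != v:   # a read must return the current value
                ok = False
                break
        if ok:
            return True
    return False

# P0 writes 1; P1 reads 0 and then 1.  Ordering P1's first read before
# P0's write explains everything, so this is sequentially consistent --
# even if, in real time, the read of 0 happened after the write of 1.
print(sequentially_consistent([[('w', 1)], [('r', 0), ('r', 1)]]))  # True
# No total order lets a single write of 1 be read before the initial 0.
print(sequentially_consistent([[('w', 1)], [('r', 1), ('r', 0)]]))  # False
```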

Causal Consistency

Causal consistency can be considered a weakening of sequential consistency that categorizes operations into those that are causally related and those that are not. It requires only that write operations that are causally related be seen in the same order by all processes; concurrent (causally unrelated) writes may be observed in different orders by different processes.
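Causal relatedness is commonly tracked with vector clocks. A minimal sketch (my own function names; clocks are plain lists indexed by process id) comparing the clocks attached to three writes:

```python
def happens_before(vc_a, vc_b):
    """True if the event stamped vc_a causally precedes the one stamped
    vc_b: every component is <= and at least one is strictly smaller."""
    return all(a <= b for a, b in zip(vc_a, vc_b)) and vc_a != vc_b

def concurrent(vc_a, vc_b):
    # Neither causally precedes the other.
    return not happens_before(vc_a, vc_b) and not happens_before(vc_b, vc_a)

# w1 is issued at P0; P1 then reads it and issues w2, so w1 -> w2:
# causal consistency forces every process to apply w1 before w2.
w1 = [1, 0]
w2 = [1, 1]
# w3 is issued independently at P0, concurrent with w2: processes are
# free to apply w2 and w3 in different orders.
w3 = [2, 0]

print(happens_before(w1, w2))  # True
print(concurrent(w2, w3))      # True
```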

Release Consistency

Release consistency relaxes the model further by distinguishing two kinds of synchronization operations: acquire (entering a critical region) and release (leaving it). Ordinary reads and writes between an acquire and a release may be buffered or reordered locally; the model only requires that all writes performed before a release become visible before the release completes, and that an acquire completes before the protected accesses that follow it.
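Python's `threading.Lock` acquire/release pair loosely mirrors these synchronization operations. A toy sketch (the shared dictionary and thread functions are my own example): writes performed inside the writer's critical section are guaranteed visible to any reader that subsequently acquires the same lock:

```python
import threading

shared = {"x": 0, "y": 0}
lock = threading.Lock()   # acquire/release are the synchronization ops

def writer():
    lock.acquire()        # acquire: enter the critical region
    shared["x"] = 1       # ordinary writes may be buffered locally...
    shared["y"] = 2
    lock.release()        # ...release: all protected writes become visible

def reader(out):
    lock.acquire()        # acquire: sees everything released before it
    out.append((shared["x"], shared["y"]))
    lock.release()

t = threading.Thread(target=writer)
t.start()
t.join()

result = []
reader(result)
print(result)  # [(1, 2)] -- both writes visible after the release/acquire pair
```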