Linearizability is the gold standard among algorithm designers for deducing the correctness of a distributed algorithm using implemented shared objects from the correctness of the corresponding algorithm using atomic versions of the same objects. We show that linearizability does not suffice for this purpose when processes can exploit randomization, and we discuss the existence of alternative correctness conditions. This paper makes the following contributions:

Various examples demonstrate that using well-known linearizable implementations of objects (e.g., snapshots) in place of atomic objects can change the probability distribution of the outcomes that the adversary is able to generate. In some cases, an oblivious adversary can create a probability distribution of outcomes for an algorithm with implemented, linearizable objects that not even a strong adversary can generate for the same algorithm with atomic objects.

A new correctness condition for shared object implementations, called strong linearizability, is defined. We prove that a strong adversary (i.e., one that sees the outcome of each coin flip immediately) gains no additional power when atomic objects are replaced by strongly linearizable implementations. In general, no strictly weaker correctness condition suffices to ensure this. We also show that strong linearizability is a local and composable property.

In contrast to the situation for the strong adversary, for a natural weaker adversary (one that cannot see a process's coin flip until its next operation on a shared object) we prove that there is no correspondingly general correctness condition. Specifically, any linearizable implementation of counters from atomic registers and load-linked/store-conditional objects that satisfies a natural locality property necessarily gives the weak adversary more power than it has with atomic counters.
1 Introduction
Linearizability is the gold standard among algorithm designers for deducing the correctness of a distributed algorithm using implemented shared objects from the correctness of the corresponding algorithm using atomic\footnote{In this paper, an atomic operation is one that happens instantaneously, i.e., it is indivisible. But in the literature, the notion of atomicity is not used consistently. E.g., in her textbook [Lynch_DistributedAlgorithms1996], Lynch defines atomic objects to be linearizable, but Anderson and Gouda [journals/ipl/AndersonG88] define atomicity in terms of instantaneous operations.} versions of the same objects. We explore this in more detail, showing that linearizability does not suffice for this purpose when processes can exploit randomization.
In an asynchronous distributed system, processes collaborate by executing an algorithm that applies operations to a collection of shared objects. If the operations on these objects are atomic, then the result of the execution is the same as some sequential execution that could arise from an arbitrary interleaving of the processes’ steps. Alternatively, some objects could be replaced by a set of software methods for the different operations on those objects. Processes would then invoke the appropriate method in order to simulate the intended atomic operation. In this case, there is a finer granularity to the interleaving of process steps. Consequently, we need to be sure that each possible result (e.g., the algorithm’s return value for each process) that can arise from using the software methods could also have arisen if the operations were atomic.
This requirement is ensured if the methods provided for each object constitute a linearizable implementation [her:lin] of the object. Linearizability is an especially useful and important correctness condition because it is a local property. That is, if each object in a collection of objects is replaced by its linearizable implementation, then the result of any execution that can arise from the concurrent use of the whole collection is one that could have also happened if the objects were atomic.
Linearizable implementations, however, do not preserve the probability distribution of the possible results as we transform the atomic system to the implemented one. An adversary, which schedules process steps, can “stretch out” a method call that was originally an atomic operation, and concurrently inspect the outcome of other processes’ coin flips. Based on the outcomes, the scheduler can choose between alternative executions of the ongoing method call. As we will illustrate through examples, the consequences of this additional flexibility can be powerful and subtle, allowing the behaviour of the implemented system to differ dramatically from that of the atomic system. In particular, the adversary can manipulate executions so that low-probability worst-case results in the atomic system become much more probable in the implemented system.
We will see that our ability to curtail an adversary’s additional power, which it can gain when atomic objects are replaced by linearizable implementations, depends in part upon the original power of the adversary. Various adversaries have been defined in the literature, differing in their ability to base scheduling decisions on the random choices made by the algorithm (see [Aspnes2003_DistrComp] for an overview of adversary models). The main results in this paper concern two adversary models. Informally, when a process is scheduled by a strong adversary, the process executes only its next atomic operation, whether on a local or a shared object. (Coins are local objects.) When a process is scheduled by a weak adversary it executes up to and including its next step on a shared object. Thus, a strong adversary can intervene between a coin flip and the next step by the same process, whereas a weak adversary cannot. Further discussion of these adversaries, including formal definitions, appears in Section LABEL:model.sec.
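The difference in scheduling granularity between the two adversaries can be made concrete with a toy model (an illustrative sketch of ours, not taken from the paper; all names are our own). A strong adversary advances a process one atomic step at a time and observes each coin flip immediately; a weak adversary advances a process through its local steps up to and including its next shared-memory step, and only then learns any coin flips made along the way.

```python
import random

# Toy model: a process is a list of steps, each either ("coin",) -- a
# local random choice -- or ("shared", op) -- an operation on a shared
# object.  A strong adversary executes exactly one step and observes
# coin outcomes at once; a weak adversary executes local steps up to
# and including the next shared step, seeing buffered flips only then.

def run_strong_step(proc_state):
    """Execute one atomic step; return what the adversary observes now."""
    step = proc_state["steps"][proc_state["pc"]]
    proc_state["pc"] += 1
    if step[0] == "coin":
        flip = random.choice([-1, 1])
        proc_state["flips"].append(flip)
        return ("coin", flip)          # visible immediately to a strong adversary
    return ("shared", step[1])

def run_weak_step(proc_state):
    """Execute up to and including the next shared step; coin outcomes
    become visible to the adversary only at that shared step."""
    observed_flips = []
    while proc_state["pc"] < len(proc_state["steps"]):
        result = run_strong_step(proc_state)
        if result[0] == "coin":
            observed_flips.append(result[1])   # buffered, not yet visible
        else:
            return ("shared", result[1], observed_flips)
    return ("done", None, observed_flips)

proc = {"steps": [("coin",), ("shared", "update")], "pc": 0, "flips": []}
kind, op, flips = run_weak_step(proc)
# The weak adversary learns the flip only together with the shared step:
print(kind, op, flips)
```

Under this model, a strong adversary can interleave other processes' steps between the `("coin",)` step and the subsequent `("shared", ...)` step, while a weak adversary sees them as one indivisible move.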
Summary of contributions
1. Several examples demonstrate that using linearizable implemented objects in place of atomic objects in randomized algorithms allows the adversary to change the probability distribution of results. Therefore, in order to safely use implemented objects in randomized algorithms, it does not suffice to simply claim that these implementations are linearizable.
2. A new correctness condition for shared object implementations, called strong linearizability, which is strictly stronger than linearizability, is defined. We prove that a strong adversary against a randomized algorithm using strongly linearizable objects has exactly the same power as a strong adversary against the same algorithm using atomic objects. Conversely, if the set of histories that arise from a strong adversary scheduling an algorithm with implemented linearizable objects is “equivalent” to the set of histories that can arise from some strong adversary scheduling the same algorithm with atomic objects, then the former set of histories must be strongly linearizable. We also show that several known universal constructions of linearizable objects with common progress properties (e.g., wait-freedom) provide strong linearizability. Finally, we prove that strong linearizability, like linearizability, is both a local and a composable property.
3. In contrast to the situation for strong adversaries, for weak adversaries strong linearizability has no counterpart. For example, for some randomized algorithms, weak adversaries always gain additional power when strong counters (that support fetch&inc and fetch&dec operations) are replaced with “natural” linearizable implementations based on a set of base objects supporting reads, writes and load-linked/store-conditional operations. Consequently, to prevent weak adversaries from gaining additional power, the implementation of the counter would require additional base object types beyond what is necessary for linearizability. This result is obtained by a technically involved proof; it holds even for randomized implementations with fairly weak progress conditions (e.g., lock-freedom).
Randomization has become an important technique in the design of distributed algorithms; it allows us to circumvent some substantial impossibilities and complexity lower bounds of deterministic algorithms. Our results impact the design of randomized algorithms that use shared objects not directly supported through atomic primitives in hardware. First, simulating the required shared objects in software using “only” linearizable implementations can break the algorithm. Second, such algorithms are much easier to fix (using strong linearizability) if they are designed from the outset to work against strong adversaries, but not so if they are designed only to work against weak adversaries. Third, since there are strongly linearizable universal constructions using consensus objects, which can be implemented using compare&swap, any system that provides compare&swap in hardware can implement any object in a strongly linearizable way.
2 Examples
We begin with two examples to provide intuition and motivation, and delay the model details, which are needed for our technical results, until the next section. The examples illustrate how an adversary in a randomized algorithm gains additional power when atomic objects are replaced with implemented ones.
Atomic versus linearizable snapshots.
An $n$-process snapshot object is a vector $V$ of length $n$ that supports the atomic operations $\mathit{update}_i(x)$ and $\mathit{scan}_i()$ by any process $p_i$, $0 \leq i < n$. Operation $\mathit{update}_i(x)$ writes $x$ to $V[i]$ while leaving all $V[j]$, $j \neq i$, unchanged; and $\mathit{scan}_i()$ returns the vector of values $(V[0], \ldots, V[n-1])$ to $p_i$.
Initialize a snapshot object $V$ for three processes to $(0,0,0)$. Suppose the processes $p$, $q$ and $r$ are executing the following code, and the adversary is trying to minimize the sum of the values returned in $p$'s scan.
$p$: $\mathit{scan}_p()$
$q$: $\mathit{update}_q(1)$; $\mathit{update}_q(0)$
$r$: $\mathit{update}_r(0)$; $x \gets$ uniform-random$(\{-1,1\})$; $\mathit{update}_r(x)$
To keep the sum in $p$'s scan low, the adversary can schedule either both or neither of $q$'s update operations before $p$'s scan. If the adversary is weak, the same holds for $r$'s update operations. Thus, under the best strategy for a weak adversary, the expected value of the sum in $p$'s scan is 0. If the adversary is strong, its best strategy is to schedule $p$'s scan before $r$'s second update if $r$'s coin flip returns 1 and after it if it returns $-1$. Thus, under the best strategy for a strong adversary, the expected value of the sum in $p$'s scan is $-1/2$.
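The gap between the two adversaries can be verified by brute force over schedules. The sketch below uses one concrete instantiation of the example, chosen by us for illustration: q writes 1 and then 0; r writes 0, flips x uniformly from {-1, 1}, and then writes x; p scans, and the adversary minimizes the sum of the scanned values.

```python
from itertools import product

COIN = (-1, 1)  # uniform coin flip outcomes

def q_value(pos):
    # V[q] as seen by p's scan, indexed by how many of q's updates
    # precede the scan; q writes 1, then 0.
    return [0, 1, 0][pos]

def r_value(pos, x):
    # V[r] as seen by p's scan; r writes 0, flips x, then writes x.
    return [0, 0, x][pos]

def scan_sum(q_pos, r_pos, x):
    return q_value(q_pos) + r_value(r_pos, x)

# A weak adversary must commit to the scan's position relative to both
# processes' updates before it can see x.
weak = min(sum(scan_sum(qp, rp, x) for x in COIN) / 2
           for qp, rp in product(range(3), repeat=2))

# A strong adversary sees x right after the flip; by then r's first
# update has occurred, so it chooses r_pos in {1, 2} reactively.
strong_reactive = sum(min(scan_sum(qp, rp, x)
                          for qp in range(3) for rp in (1, 2))
                      for x in COIN) / 2
strong = min(weak, strong_reactive)

print(weak, strong)   # expected sums under the two adversaries
```

The enumeration confirms that the best weak strategy achieves an expected sum of 0, while the reactive strong strategy (scan before r's second update on one coin outcome, after it on the other) achieves -1/2.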
Now suppose instead that update and scan are implemented from atomic registers by the well-known wait-free linearizable algorithm due to Afek, Attiya, Dolev, Gafni, Merritt and Shavit [aadgms:snapshots]. In this algorithm, the snapshot object is implemented as an array $R$ of registers, one per process. Let a collect denote a series of atomic reads, one for each element of $R$, in some fixed order. To perform a scan, a process repeatedly collects until either two of its successive collects are identical (a successful double collect), or it observes that another process, say $q$, has executed at least two update operations to $R$ during its scan. In the second case, it returns the last scan written (as we explain shortly) by $q$ during an update (a borrowed scan). To perform an update, a process must first perform a scan and then write the result of the scan together with its update argument into its component of $R$. This ensures that if a scan has enough failed double collects, then a borrowed scan is possible. With this implementation, the adversary can maneuver $p$, $q$ and $r$ as shown in Figure 1.
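The double-collect and borrowed-scan mechanics can be sketched in a few lines of Python (an illustrative model of ours, not the authors' code). The sketch is single-threaded, so the first double collect always succeeds and the borrowed-scan branch is present but never exercised; the interesting cases arise only under true concurrency.

```python
# Single-threaded sketch of the AADGMS snapshot construction.

class Snapshot:
    def __init__(self, n):
        # Each register holds (value, sequence number, embedded scan).
        self.n = n
        self.R = [(0, 0, [0] * n) for _ in range(n)]

    def _collect(self):
        # One atomic read per register, in a fixed order.
        return list(self.R)

    def scan(self, i):
        moved = [0] * self.n          # updates observed per process
        old = self._collect()
        while True:
            new = self._collect()
            if all(o[1] == c[1] for o, c in zip(old, new)):
                # Successful double collect: the values are consistent.
                return [v for v, _, _ in new]
            for j in range(self.n):
                if old[j][1] != new[j][1]:
                    moved[j] += 1
                    if moved[j] >= 2:
                        # Process j updated twice during this scan, so its
                        # latest embedded scan lies within our interval.
                        return list(new[j][2])    # borrowed scan
            old = new

    def update(self, i, x):
        s = self.scan(i)              # every update embeds a scan ...
        # ... and writes it alongside the new value and a fresh sequence
        # number, enabling borrowed scans by concurrent scanners.
        self.R[i] = (x, self.R[i][1] + 1, s)

snap = Snapshot(3)
snap.update(1, 5)
snap.update(2, -7)
print(snap.scan(0))   # -> [0, 5, -7]
```

Note how each update pays for a full scan; it is precisely this embedded scan that the adversary manipulates in Figure 1 by stretching an update and deciding, based on observed coin flips, which borrowed scan a concurrent scanner will return.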