# Introduction

When programming distributed systems becomes part of your life, you go through a learning curve. This article describes my current level of understanding of the field, and hopefully points out enough mistakes for you to be able to follow the optimal path to enlightenment: learning from the mistakes of others.

For the record: I entered Level 1 in 1995, and I’m currently Level 3. Where do you see yourself?

# Level 0: Clueless

Every programmer starts here. I will not comment too much, as there isn’t a lot to say. Instead, I quote some conversations I had, and offer some words of advice to developers who have never battled distributed systems.

NN1: “Replication in distributed systems is easy, you just let all the machines store the item at the same time.”

Another conversation (from the back of my memory):

NN: “For our first person shooter, we’re going to write our own networking engine.”

ME: “Why?”

NN: “There are good commercial engines, but license costs are expensive and we don’t want to pay these.”

ME: “Do you have any experience in distributed systems?”

NN: “Yes, I’ve written a socket server before.”

ME: “How long do you think you will take to write it?”

NN: “I think 2 weeks. Just to be really safe we planned 4.”

Sometimes it’s better to remain silent.

# Level 1: RPC

RMI is a very powerful technique for building large systems. The fact that the technique can be described, along with a working example, in just a few pages, speaks volumes of Java. RMI is tremendously exciting and it’s simple to use. You can call to any server you can bind to, and you can build networks of distributed objects. RMI opens the door to software systems that were formerly too complex to build.


—Peter van der Linden, Just Java (4th edition, Sun Microsystems)

Let me start by saying I’m not dissing this book. I remember distinctly that it was fun to read (especially the anecdotes between the chapters), and I used it for the Java lessons I used to give (in a different universe, a long time ago). In general, I think well of it. His attitude towards RMI, however, is typical of Level 1 distributed application design. People who reside here share the vision of unified objects. In fact, Waldo et al. describe it in detail in their landmark paper “A Note on Distributed Computing” (1994), but I will summarize here:

The advocated strategy for writing distributed applications is a three-phase approach. The first phase is to write the application without worrying about where objects are located and how their communication is implemented. The second phase is to tune performance by “concretizing” object locations and communication methods. The final phase is to test with “real bullets” (partitioned networks, machines going down, …).

The idea is that whether a call is local or remote has no impact on the correctness of a program.

The same paper then dissects this further and shows the problems with it. It has thus been known for almost 20 years that this concept is wrong. Anyway, if Java RMI achieved one thing, it’s this: even if you remove transport protocol, naming and binding, and serialization from the equation, it still doesn’t work. People old enough to remember the hell called CORBA will also remember it didn’t work, but they have an excuse: they were still battling all kinds of lower-level problems. Java RMI took all of these away and made the remaining issues stick out. There are two of them. The first is a mere annoyance:

## Network Transparency isn’t

Let’s take a look at a simple Java RMI example (taken from the same ‘Just Java’):

public interface WeatherIntf extends java.rmi.Remote {
    public String getWeather() throws java.rmi.RemoteException;
}


A client that wants to use the weather service needs to do something like this:

try {
    Remote robj = Naming.lookup("//localhost/WeatherServer");
    WeatherIntf weatherserver = (WeatherIntf) robj;
    String forecast = weatherserver.getWeather();
    System.out.println("The weather will be " + forecast);
} catch (Exception e) {
    System.out.println(e.getMessage());
}


The client code needs to take RemoteExceptions into account.

If you want to see what kinds of remote failure you can encounter, take a look at RemoteException’s more than 20 subclasses. OK, so your code will be a tad less pretty. We can live with that.

## Partial Failure

The real problem with RMI is that the call can fail partially. It can fail before the action on the other tier is invoked, or the invocation might succeed but the return value might not make it afterwards, for whatever reason. These failure modes are in fact the very defining property of distributed systems or otherwise stated:


“A distributed system is one in which the failure of a computer you didn’t even know existed can render your own computer unusable”


—(Leslie Lamport)

If the method is just the retrieval of a weather forecast, you can simply retry, but if you were trying to increment a counter, retrying can have results ranging from 0 to 2 updates. The solution is supposed to come from idempotent actions, but building those isn’t always possible. Moreover, since you decided on a semantic change of your method call, you basically admit RMI is different from a local invocation. This is an admission of RMI being a fallacy.
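To make the retry problem concrete, here is a toy Python sketch (all names hypothetical) of why blindly retrying a non-idempotent increment is dangerous: the update can land on the server even when the reply is lost, so the retry applies it a second time.

```python
import random

class FlakyCounter:
    """Toy 'remote' counter whose reply can get lost AFTER the update
    has already been applied (hypothetical, for illustration only)."""
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1                    # the update happens...
        if random.random() < 0.5:          # ...but the reply is lost
            raise TimeoutError("no reply -- did the update happen?")
        return self.value

def increment_with_retry(counter, attempts=3):
    # A blind retry loop: fine for an idempotent read, dangerous here.
    last_error = None
    for _ in range(attempts):
        try:
            return counter.increment()
        except TimeoutError as e:
            last_error = e
    raise last_error

random.seed(1)                             # fixed seed so the run is repeatable
c = FlakyCounter()
increment_with_retry(c)                    # the caller asked for ONE increment
print(c.value)                             # prints 2: one request, two updates
```

With this seed, the first attempt’s reply is “lost” after the update applied, the retry succeeds, and the counter ends up incremented twice.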

In any case the paradigm is a failure as both network transparency and architectural abstraction from distribution just never materialise. It also turns out that some software methodologies are more affected than others. Some variations of scrum tend to prototype. Prototypes concentrate on the happy path and the happy path is not the problem. It basically means you will never escape Level 1. (sorry, this was a low blow. I know)

People who do escape Level 1 understand they need to address the problem with the respect it deserves. They abandon the idea of network transparency, and attack the handling of partial failure strategically.

# Level 2: Distributed Algorithms + Asynchronous messaging + Language support

<sarcasm>”Just What We Need: Another RPC Package”</sarcasm>

—(Steve Vinoski)

OK, you’ve learned the fallacies of distributed computing. You decided to bite the bullet, and model the message passing explicitly to get control of failure.

You split your application into two layers: the bottom is responsible for networking and message transport, while the upper layer deals with the arrival of messages and what needs to be done when they arrive.

The upper layer implements a distributed state machine, and if you ask the designers what it does, they will tell you something like: “It’s a multi-paxos implementation on top of TCP”.

Development-wise, the strategy boils down to this: Programmers first develop the application centrally using threads to simulate the different processes. Each thread runs a part of the distributed state machine, and basically is responsible for running a message handling loop. Once the application is locally complete and correct, the threads are taken away to become real processes on remote computers. At this stage, in the absence of network problems, the distributed application is already working correctly. In a second phase, fault tolerance can be straightforwardly achieved by configuring each of the distributed entities to react correctly to failures (I liberally quoted from “A Fault Tolerant Abstraction for Transparent Distributed Programming”).
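A minimal Python sketch of that development strategy, using nothing beyond the standard library (the process names and message shapes are made up): each “process” is a thread running a message-handling loop over its own inbox, and sending a message is just enqueueing it on the target’s inbox.

```python
import queue
import threading

# One inbox per simulated "process"; later, each thread would become a
# real process on a remote machine and each inbox a network transport.
inboxes = {"ping": queue.Queue(), "pong": queue.Queue()}
log = []

def send(to, msg):
    inboxes[to].put(msg)

def pong_loop():
    while True:
        msg = inboxes["pong"].get()        # the message-handling loop
        if msg == "finished":
            log.append("pong finished")
            return
        sender, _payload = msg
        send(sender, "pong")

def ping_loop(n):
    for _ in range(n):
        send("pong", ("ping", "ping"))
        inboxes["ping"].get()              # block until the reply arrives
    send("pong", "finished")
    log.append("ping finished")

threads = [threading.Thread(target=pong_loop),
           threading.Thread(target=ping_loop, args=(3,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(log))
```

Locally the simulation is complete and correct; the hard part, of course, begins when the inboxes become real network links that can lose and delay messages.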

Partial failure is handled by design, because of the distributed state machine. With regards to threads, there are a lot of options, but you prefer coroutines (they are called fibers, lightweight threads, microthreads, protothreads or just threads in various programming languages, causing a Babylonian confusion) as they allow for fine-grained concurrency control.

Combined with the insight that “C ain’t gonna make my network any faster”, you move to programming languages that support this kind of fine-grained concurrency. Popular choices are languages like Erlang (note how they tend to be functional in nature).

As an example, let’s see what such code looks like in Erlang (taken from Erlang concurrent programming):

-module(tut15).

-export([start/0, ping/2, pong/0]).

ping(0, Pong_PID) ->
    Pong_PID ! finished,
    io:format("ping finished~n", []);

ping(N, Pong_PID) ->
    Pong_PID ! {ping, self()},
    receive
        pong ->
            io:format("Ping received pong~n", [])
    end,
    ping(N - 1, Pong_PID).

pong() ->
    receive
        finished ->
            io:format("Pong finished~n", []);
        {ping, Ping_PID} ->
            io:format("Pong received ping~n", []),
            Ping_PID ! pong,
            pong()
    end.

start() ->
    Pong_PID = spawn(tut15, pong, []),
    spawn(tut15, ping, [3, Pong_PID]).


This definitely looks like a major improvement over plain old RPC. You can start reasoning over what would happen if a message doesn’t arrive.

Erlang gets bonus points for having timeout messages and a built-in after Timeout construct that lets you model and react to timeouts in an elegant manner.
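For those without Erlang at hand, a rough Python analogue of the receive … after Timeout idea (a hypothetical sketch): silence on the inbox is turned into an explicit value the message loop can react to, instead of blocking forever.

```python
import queue

inbox = queue.Queue()

def receive(timeout_s=0.05):
    # Analogue of Erlang's `receive ... after Timeout`: wait for a
    # message, but convert silence into an explicit timeout message.
    try:
        return inbox.get(timeout=timeout_s)
    except queue.Empty:
        return ("timeout",)

silence = receive()          # nobody sent anything: the timeout fires
inbox.put(("pong",))
reply = receive()            # a real message arrives before the timeout
print(silence, reply)
```

The point is that a timeout becomes just another message the state machine handles, rather than an exceptional control path.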

So, you picked your strategy, your distributed algorithm, your programming language and start the work. You’re confident you will slay this monster once and for all, as you ain’t no Level 1 wuss anymore.

Alas, somewhere down the road, some time after your first releases, you enter troubled waters. People tell you your distributed application has issues. The reports are all variations on a theme. They start with a frequency indicator like “sometimes” or “once”, and then describe a situation where the system is stuck in an undesirable state. If you’re lucky, you had adequate logging in place and start inspecting the logs. A little later, you discover an unfortunate sequence of events that produced the reported situation. Indeed, it was a new case. You never took this into consideration, and it never appeared during the extensive testing and simulation you did. So you change the code to take this case into account too.

Since you try to think ahead, you decide to build a monkey that pseudo randomly lets your distributed system do silly things. The monkey rattles its cage and quickly you discover a multitude of scenarios that all lead to undesirable situations like being stuck (never reaching consensus) or even worse: reaching an inconsistent state that should never occur.

Having a monkey was a great idea, and it certainly reduces the chance of encountering something you’ve never seen before in the field. Since you believe that a bugfix goes hand in hand with a testcase that first produced the bug, and now proves its demise, you set out to build just that test. Your problem, however, is that reproducing the failure scenario is difficult, if not impossible. You listen to the gods as they hinted: when in doubt, use brute force. So you produce a test that runs a zillion times to compensate for the small probability of the failure. This makes your bug fixing process slow and your test suites bulky. You compensate again by doing divide and conquer on your volume of test sets. Anyway, after a heavy investment of effort and time, you somehow manage to get a rather stable system and a process to match.

You’re maxed out on Level 2. Without new insights, you’ll be stuck here forever.

# Level 3: Distributed Algorithms + Asynchronous messaging + Purity

It takes a while to realise that a combination of long running monkeys to discover evil scenarios and brute force to reproduce them ain’t making it. Using brute force just demonstrates ignorance. One of the key insights you need is that if you could only remove indeterminism from the equation, you would have perfect reproducibility of every scenario. A major side effect of Level 2 distributed programming is that your concurrency model tends to go viral on your codebase. You desired fine-grained concurrency control… well, you got it. It’s everywhere. So concurrency causes indeterminism, and indeterminism causes trouble. So concurrency must go. You can’t abandon it: you need it. You just have to ban it from mingling with your distributed state machine. In other words, your distributed state machine has to become a pure function. No IO, no concurrency, no nothing. Your state machine signature will look something like this:

module type SM = sig
  type state
  type action
  type msg
  val step : msg -> state -> action * state
end


You pass in a message and a state, and you get an action and a resulting state. An action is basically anything that tries to change the outside world, needs time to do so, and might fail while trying. Typical actions are:

- send a message
- schedule a timeout
- store something in persistent storage

The important thing to realise here is that you can only get to a new state via a new message. Nothing else. The benefits of such a strict regime are legion: perfect control, perfect reproducibility and perfect traceability. The costs are there too: you’re forced to reify all your actions, which basically is an extra level of indirection to reduce your complexity. You also have to model every change of the outside world that needs your attention into a message.
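As a hypothetical illustration, the same signature rendered as a pure Python function with a made-up message/action vocabulary: because step does no IO, replaying the same inputs reproduces the same outputs, bit for bit.

```python
def step(msg, state):
    """Pure transition: (msg, state) -> (actions, new_state).
    The returned actions only DESCRIBE IO (send, persist, ...);
    an outer driver performs them later. Vocabulary is made up."""
    kind = msg[0]
    if kind == "incr":
        new_state = state + 1
        return [("persist", new_state), ("send", msg[1], "ack")], new_state
    if kind == "get":
        return [("send", msg[1], state)], state
    return [], state          # unknown messages change nothing

a1, s1 = step(("incr", "client-1"), 0)
a2, s2 = step(("incr", "client-1"), 0)
assert (a1, s1) == (a2, s2)   # determinism: perfect reproducibility
print(a1, s1)
```

A trace of messages is now a complete, replayable record of everything the machine ever did.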

Another change from Level 2 is the change in control flow. At Level 2, a client will try to force an update and set the machinery in motion. Here, the distributed state machine assumes full control, and will only consider a client’s request when it is ready and able to do something useful with it. So these must be detached.
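A sketch of that inversion of control, with hypothetical names: clients only ever enqueue messages, and a driver loop feeds the pure state machine when it is ready, executing whatever actions come back.

```python
from collections import deque

def step(msg, state):
    # trivial pure transition: count messages, emit an ack action
    return [("ack", msg)], state + 1

def drive(step, state, inbox):
    """Driver loop: the machine, not the client, decides when a queued
    request is considered."""
    performed = []
    while inbox:
        msg = inbox.popleft()              # machine pulls when ready
        actions, state = step(msg, state)
        performed.extend(actions)          # a real driver executes these
    return performed, state

inbox = deque()
inbox.append("client-req-1")               # a client "request" is just a message
inbox.append("client-req-2")
performed, final = drive(step, 0, inbox)
print(final)                               # prints 2
```

The client and the state machine are fully detached: the only coupling is the inbox.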

If you explain this to a Level 2 architect, (s)he will more or less accept this as an alternative. It, however, takes a sufficient amount of pain (let’s call it experience or XP) to realize it’s the only feasible alternative.

# Level 4: Solid domination of distributed systems: happiness, peace of mind and a good night’s rest

To be honest, as I’m a mere Level 3 myself, I don’t know what’s up here. I am convinced that both functional programming and asynchronous message passing are parts of the puzzle, but it’s not enough.

Allow me to reiterate what I’m struggling against. First, I want my distributed algorithm implementation to fully cover all possible cases.

This is a big deal to me, as I’ve lost lots of sleep being called in on issues in deployed systems (most of these turn out to be PEBKAC, but some were genuine, and caused frustration). It would be great to know your implementation is robust. Should I try theorem provers, should I do exhaustive testing? I don’t know.

As an aside: for an append-only, B-tree-ish library called baardskeerder, we know we covered all cases by exhaustively generating insert/delete permutations and asserting their correctness. Here, it’s not that simple, and I’m a bit hesitant to Coqify the codebase.
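To show the shape of such exhaustive testing (this is not baardskeerder’s actual harness), a miniature Python version: run every ordering of a small operation set against both a trivial model and the structure under test, and demand they agree. Here the “structure” is just a dict stand-in, purely to show the pattern.

```python
from itertools import permutations

# A small operation set; exhaustiveness comes from trying EVERY ordering.
ops = [("insert", "a", 1), ("insert", "b", 2), ("delete", "a")]

def apply_op(store, op):
    if op[0] == "insert":
        store[op[1]] = op[2]
    else:
        store.pop(op[1], None)

checked = 0
for ordering in permutations(ops):
    model, subject = {}, {}        # 'subject' would be the real structure
    for op in ordering:
        apply_op(model, op)
        apply_op(subject, op)
    assert model == subject        # correctness: subject matches the model
    checked += 1
print(checked)                     # prints 6: all 3! orderings covered
```

For a handful of operations this is cheap and gives total coverage; the trouble with a distributed state machine is that the “orderings” include message loss, reordering and timeouts, so the space explodes.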

Second, for reasons of clarity and simplicity, I decided not to touch other, orthogonal requirements like service discovery, authentication, authorization, privacy and performance.

With regard to performance, we might be lucky, as asynchronous message passing at least doesn’t seem to contradict performance considerations.

Security, however, is a real bitch, as it crosscuts almost everything else you do. Some people think security is a sauce that you can pour over your application to make it secure.

Alas, I never succeeded in this, and currently think it also needs to be addressed strategically during the very first stages of design.

# Closing words

Developing robust distributed systems is a difficult problem that is practically unsolved, or at least not solved to my satisfaction.

I’m sure its importance will increase significantly as latency between processors and everything else increases too. This results in an ever-growing field of application for this style of development.

As far as Level 4 goes, maybe I should ask Peter Van Roy. Over the years, I’ve read a lot of his papers, and they offered me a lot of insight into my own mistakes. The downside of insight is that you see others repeating your mistakes, and most of the time I fail to convince people they should do it differently.

Probably, this is because I cannot offer the panacea they want. They want RPC, and they want it to work. It’s perverse… almost religious.
