Collective decisions

Goal

  • General goal: Reach consensus in a reasonable amount of time if more than 50 % of the robots are honest (secure decentralized decision making)
  • Task: What is the most frequent color in an area?
  • Motivational example: What if there are 100 robots, two colors (red: 52 %, green: 48 %), but five robots always send green, regardless of what they sense?
  • Method: Use the existing approach with some variations and add a security layer on top of it
  • Scalability: Robots can be added to the swarm at any time (to increase accuracy, size of the explored area, ...)
  • Trustlessness: The swarm members are untrusted (robots do not have to authenticate themselves, you do not need to know whether a robot sends correct or wrong information, you do not need to identify a robot, you do not know whether your peers further propagate your messages, and you do not need to know which organization or person a robot belongs to, ...)

Challenges

  • Noise (of sensors, actuators, communication channel)
  • Asynchronicity (some robots might be in the dissemination state, some might be in the sense state)
  • Absence of global information/trusted third parties -> no one collects the votes; they are propagated in a peer-to-peer manner
  • Malicious intruders/failing sensors (Byzantine robots)

Approaches

Classical non-secure approach

  • Create initial robot swarm
  • Each robot has an initial opinion (one of the colors)
  • Robots move arbitrarily in the area
  • Robots have two states: (1) sense color, (2) disseminate/receive color opinion (robots change preference based on majority voting)
  • Robots only sense the relative frequency of their current color preference and ignore all other colors
  • Dissemination time is a function of the relative frequency of that color (the changes between the sense state and the dissemination state are time-based); see the sketch below this list
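
A minimal sketch of this sense/disseminate loop (in Python; the parameter D_MAX and the class interface are illustrative assumptions, not taken from an existing implementation):

  # Sketch of the classical (non-secure) protocol: robots alternate between
  # sensing and disseminating, and adopt the majority of the opinions they hear.
  D_MAX = 30.0   # maximum dissemination time in seconds (assumed value)

  class Robot:
      def __init__(self, opinion):
          self.opinion = opinion      # current color preference, e.g. "red"
          self.quality = 0.0          # sensed relative frequency of that color

      def sense(self, ground_readings):
          # Only the current preference is evaluated; all other colors are ignored.
          self.quality = ground_readings.count(self.opinion) / len(ground_readings)

      def dissemination_time(self):
          # The more frequently the color was sensed, the longer it is broadcast.
          return self.quality * D_MAX

      def update_opinion(self, received_opinions):
          # Majority voting over the opinions received during dissemination.
          votes = received_opinions + [self.opinion]
          self.opinion = max(set(votes), key=votes.count)

For example, a robot that sensed its current color in 60 % of its ground readings would disseminate it for 0.6 * D_MAX = 18 seconds before switching back to sensing.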

Risks

  • Robots can disseminate wrong color preference
  • Robots can stop disseminating altogether
  • Robots can only disseminate and never sense
  • New members are just included without any security check

=> In the presence of (many) Byzantine robots, the swarm may never reach consensus, reach consensus only very slowly, or reach consensus on the wrong color

Blockchain-based approach

  • Create initial robot swarm (e.g., 25 robots) and provide each robot with 10 ETH in the genesis block.
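
One way to set up such a genesis allocation (assuming a private Ethereum network run with a standard client such as geth) is sketched below; the addresses and chain parameters are placeholders, and 10 ETH corresponds to 10 * 10**18 wei:

  import json

  # Hypothetical genesis file giving each of the 25 robots 10 ETH.
  ROBOT_ADDRESSES = [f"0x{i:040x}" for i in range(1, 26)]   # placeholder addresses

  genesis = {
      "config": {"chainId": 15},    # assumed private-chain id
      "difficulty": "0x400",
      "gasLimit": "0x8000000",
      "alloc": {addr: {"balance": str(10 * 10**18)} for addr in ROBOT_ADDRESSES},
  }

  with open("genesis.json", "w") as f:
      json.dump(genesis, f, indent=2)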

The goal is again to determine the most frequent color. However, now two types of robots are explicitly considered:

  • Honest (working) robots that work according to a protocol similar to the one above:
    • Sense: same as above
    • Disseminate: mine and send own color opinion to peers (see the sketch after this list):
      • Sign the transaction "Vote for color x", add 1 ETH (as proof that the robot is allowed to vote, but also as a "bet" with the hope of receiving back more), and distribute it to the neighbors
      • Dissemination time is based on the relative frequency of that color in the sensor readings. The longer an opinion is disseminated, the higher the chance that it is included in the next block, since it reaches more miners (instead of increasing the dissemination time, another possibility would be to distribute a confidence value together with the opinion)
      • Robots further propagate the voting transactions to their peers and receive other opinions
      • Additionally, they perform the proof-of-work and try to be the first to solve the mining puzzle
      • Robots have an incentive to move around as much as possible to distribute their transactions and their mined blocks, get them included in the strongest chain, and receive the mining reward (this is something quite specific to swarm robots); otherwise, after some time they will have 0 ETH and can no longer be part of the swarm.
  • Byzantine robots: can do anything (see risks above)
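
A rough sketch of the honest robot's dissemination step referenced in the list above (Python; VoteTransaction, sign(), and the peer interface are hypothetical placeholders; a real implementation would sign with the robot's Ethereum key pair and use the client's transaction format):

  import hashlib
  import time
  from dataclasses import dataclass

  VOTE_STAKE_ETH = 1.0   # every vote carries 1 ETH as stake/fee
  D_MAX = 30.0           # maximum dissemination time in seconds (assumed)

  @dataclass
  class VoteTransaction:
      sender: str        # robot's address
      color: str         # "Vote for color x"
      stake: float       # ETH attached to the vote
      signature: bytes   # signature over the transaction fields

  def sign(private_key: bytes, payload: bytes) -> bytes:
      # Stand-in for a real ECDSA signature, kept simple so the sketch is self-contained.
      return hashlib.sha256(private_key + payload).digest()

  def make_vote(address, private_key, color):
      payload = f"{address}:{color}:{VOTE_STAKE_ETH}".encode()
      return VoteTransaction(address, color, VOTE_STAKE_ETH, sign(private_key, payload))

  def disseminate(robot, neighbors, quality):
      # Broadcast the robot's vote for a time proportional to the sensed quality.
      tx = make_vote(robot.address, robot.private_key, robot.opinion)
      deadline = time.time() + quality * D_MAX
      while time.time() < deadline:
          for peer in neighbors():       # the neighborhood changes as robots move
              peer.receive(tx)           # peers relay the vote and may mine it
          time.sleep(0.1)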

In each block, the blockchain collects votes using a smart contract. Robots that voted for the majority opinion get a reward (distributed from the 1 ETH that everyone had to send for making a transaction). Robots that repeatedly fail to vote for the majority opinion will soon no longer be able to vote (however, they can prove their usefulness again by mining new ETH). The voting strategy can be implemented in the robots, but smart contracts can put constraints on it (e.g., determine when robots can change their opinion or how often a robot is allowed to vote). During the experiment, the blockchain will possibly consist of many forks, since information often will not spread through the entire network but only within some local robot clusters. However, these forks can be resolved once the clusters are reunited, and the strongest chain (e.g., the one with the most votes and the most hash power) will be considered the truth.
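
The per-block settlement could look roughly like the following sketch (written in Python rather than Solidity for readability; the pooling and reward rules are assumptions about how such a contract could be designed):

  from collections import defaultdict

  def settle_votes(votes, balances, stake=1.0):
      # votes:    list of (robot_address, color) pairs included in the block
      # balances: dict mapping robot_address -> ETH balance (mutated in place)
      tally = defaultdict(list)
      for sender, color in votes:
          balances[sender] -= stake            # every vote costs 1 ETH up front
          tally[color].append(sender)

      majority_color = max(tally, key=lambda c: len(tally[c]))
      winners = tally[majority_color]
      reward = stake * len(votes) / len(winners)   # pooled stakes split among winners
      for sender in winners:
          balances[sender] += reward
      return majority_color

With these rules, if a block contains 10 votes and 6 of them are for red, each red voter pays 1 ETH and receives 10/6 ≈ 1.67 ETH back, while the green voters lose their stake.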

Now, let's say a new robot wants to join the swarm, for example, because the experimenter wants to increase the accuracy of the experiment. However, the swarm is autonomous and can trust neither the new member nor the experimenter (in a real-world setting, the swarm might work in a remote area, and an attacker might send malicious robots). Initially, the new robot has 0 ETH in this blockchain and cannot vote. However, it can prove its usefulness by mining (or by using one of the physical proof-of-work methods mentioned below). Therefore, it can be in the dissemination state, randomly explore the area, collect transactions from the other robots, perform the proof-of-work, and hope that it wins the mining puzzle (the mining reward could also be an interesting parameter: if it consists only of transaction fees, malicious robots have to include many (honest) transactions to actually be able to vote and, therefore, support the network and the consensus process). After getting the mining reward, the robot can distribute the newly created block in the network and, if it is included in the longest chain, it will have some ETH to take part in the voting process.
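
A toy version of the mining loop such a newcomer could run (the block layout and the difficulty check are simplified assumptions):

  import hashlib
  import json

  def mine_block(prev_hash, transactions, miner_address, difficulty=4):
      # Search for a nonce so that the block hash starts with `difficulty` zero hex digits.
      nonce = 0
      while True:
          header = json.dumps({
              "prev": prev_hash,
              "txs": transactions,      # vote transactions collected while roaming
              "miner": miner_address,   # receives the mining reward / fees
              "nonce": nonce,
          }, sort_keys=True).encode()
          block_hash = hashlib.sha256(header).hexdigest()
          if block_hash.startswith("0" * difficulty):
              return nonce, block_hash  # broadcast the block to nearby peers
          nonce += 1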

This setup also makes it hard for malicious intruders to join the swarm. As long as malicious robots are in the minority, they may not influence the consensus process to a large extent. Additionally, there could be ways to determine whether votes are Byzantine, i.e., not based on a robot's sensor inputs at all (e.g., by comparing the votes to those of robots in the proximity, by using some form of outlier removal or a Bayesian approach, or by sending proofs of the sensor inputs to the blockchain: for example, a picture of the floor).
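
One possible plausibility check of this kind, purely as an illustration (the threshold is an arbitrary assumption): flag a vote that disagrees with almost all votes cast in its neighborhood:

  def is_suspicious(vote_color, neighbor_votes, threshold=0.9):
      # Flag a vote if it disagrees with an unusually large fraction of the
      # votes cast by robots in the same neighborhood.
      if not neighbor_votes:
          return False                  # nothing to compare against
      disagreement = sum(v != vote_color for v in neighbor_votes) / len(neighbor_votes)
      return disagreement > threshold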

The influence of Byzantine nodes in the blockchain sense is also limited (even if they have more than 50 % processing power):

  • They can never make up transactions using someone else's address (due to the private/public key + signature); therefore, they cannot directly steal tokens/reputation (they can only steal tokens by reversing transactions)

However, Byzantine nodes can (if they have more than 50 % processing power):

  • Reverse transactions from not too long ago (and, therefore, double-spend transactions: vote for both red and green)
  • Prevent transactions from being included in the blockchain

To perform an attack, the dishonest nodes have to collaborate to gain more than 50 % of the mining power (proof-of-work) or of the tokens (proof-of-stake). The amount of ETH of a robot could also be used as a reputation to weight votes (e.g., color x confidence x reputation).
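
A sketch of such a reputation-weighted tally (the weighting "color x confidence x reputation" follows the idea above; equating reputation with the ETH balance is an assumption):

  def weighted_majority(votes, balances):
      # votes: list of (robot_address, color, confidence) tuples
      scores = {}
      for sender, color, confidence in votes:
          weight = confidence * balances.get(sender, 0.0)   # reputation = ETH balance
          scores[color] = scores.get(color, 0.0) + weight
      return max(scores, key=scores.get)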

Proof-of-Stake Outline

As we already discussed, it could be that an attacker can easily perform a 51 % attack, since robots are computationally rather limited. Therefore, an alternative could be to use Proof-of-Stake. I don't know enough about Proof-of-Stake yet to make this approach watertight, but a rough outline could look as follows (see also https://blog.ethereum.org/2015/01/10/light-clients-proof-stake/):

  1. To become a potential blockmaker/signer, robots have to put down a security deposit (recorded in a smart contract); to receive enough ETH for the security deposit in the beginning, they have to do something useful first: for example, some physical proof-of-work (provide other members with electricity to charge their batteries, collect commodities, take images of an area, perform classical hash-based proof-of-work, ...)
  2. For each block (the block time is fixed and not determined by the difficulty of a mining puzzle): a function generates a random number R (in a deterministic manner, based on information in the blockchain); based on this number, the blockmaker and signers are selected (e.g., 1 blockmaker + 15 signers) in proportion to their deposit amounts (see the selection sketch after this list)
  3. Blockmaker and signers potentially sign the block
  4. Block header consists of:
    1. hash of the previous block
    2. list of signatures from the blockmaker and the signers
    3. Merkle root
  5. A block is valid if it is signed by the blockmaker and by m out of n signers (e.g., 3 out of 5). A problem here could be that some of the potential signers are not reachable because they are too far away, but that could be part of the analysis.
  6. Blockmaker and signers get a small signing reward
  7. If blockmakers or signers try to sign blocks in multiple forks of the blockchain, they are explicitly punished and lose their security deposit (and can therefore no longer be part of the swarm until they perform new physical proof-of-work); the robot that provides evidence that a signer tried to sign blocks in multiple forks gets a portion of the security deposit as a reward
  8. The chain that accumulates the most signatures/votes is considered the true blockchain
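
A sketch of step 2, the deterministic, deposit-weighted selection of the blockmaker and signers (function names and the sampling-with-replacement simplification are assumptions):

  import hashlib
  import random

  def select_validators(prev_block_hash, height, deposits, n_signers=15):
      # deposits: dict mapping robot address -> security deposit in ETH
      # Seed the RNG from on-chain data so every robot derives the same selection.
      seed = hashlib.sha256(f"{prev_block_hash}:{height}".encode()).digest()
      rng = random.Random(seed)

      addresses = list(deposits)
      weights = [deposits[a] for a in addresses]
      # Larger deposits give a proportionally higher chance of being picked
      # (sampling with replacement, kept simple for the sketch).
      picks = rng.choices(addresses, weights=weights, k=n_signers + 1)
      return picks[0], picks[1:]        # blockmaker, signers

Because the seed is derived only from data that is already on the chain, every robot can verify locally that the correct blockmaker and signers were selected for a given block.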

Experiments

  • Compare performance (e.g., time to reach consensus) and resistance to attacks (e.g., maximum number of tolerated Byzantine robots) between the classical setup and the blockchain-based variant
  • Compare different settings in the blockchain-based variant (e.g., block time, design of light clients, ...) and their influence on the performance