Chances of adversaries taking control of the shards


#1

Posting here the discussion that we had with Alex and MarX.

Since our network is sharded, it is possible by sheer chance for adversaries to gain significant voting power in some shard. Our system is therefore designed to remain robust when adversaries hold >1/3 of the voting power in some shard. However, if adversaries hold >2/3 of the voting power in some shard, they can compromise the system. Fortunately, if we assume that the adversaries hold <1/3 of the voting power across all shards, then the chance of them reaching >2/3 of the voting power in any shard is negligibly small, as shown in the informal spec of TxFlow.
We would, however, also prefer to avoid the case where adversaries with <1/3 of the overall voting power gain >1/3 of the voting power in a significant number of shards.
The following is a small script that can be run to plot the distribution of the number of shards that would be controlled by the adversaries if they have a significant voting power: https://gist.github.com/nearmax/86cb984392749a08c0c2876d944407ec

For instance, with an overall number of voting seats of 10K, 100 shards, and an adversary holding 25% of the overall voting power, such an adversary would on average control (hold >1/3 of the seats in) 2-3 shards out of 100. The following is the plotted distribution:
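The gist itself isn't reproduced here, but the same distribution can be estimated with a short Monte Carlo sketch (the function and parameter names are mine, not from the gist): assign seats to shards uniformly at random and count the shards in which the adversary exceeds 1/3 of the seats.

```python
import random
from collections import Counter

def simulate(num_seats=10_000, num_shards=100, adversary_frac=0.25, trials=500):
    """Estimate how many shards the adversary controls (>1/3 of seats)
    when seats are assigned to shards uniformly at random."""
    seats_per_shard = num_seats // num_shards
    num_adv = int(num_seats * adversary_frac)
    seats = [True] * num_adv + [False] * (num_seats - num_adv)  # True = adversarial seat
    dist = Counter()
    for _ in range(trials):
        random.shuffle(seats)
        controlled = sum(
            sum(seats[s * seats_per_shard:(s + 1) * seats_per_shard]) > seats_per_shard / 3
            for s in range(num_shards)
        )
        dist[controlled] += 1
    return dist

random.seed(0)
dist = simulate()
mean = sum(k * v for k, v in dist.items()) / sum(dist.values())
print(f"average number of controlled shards: {mean:.2f}")  # ~2-3, matching the numbers above
```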


#2

Quoting Alex (from discord)

Also don’t forget that for most attacks (forking in particular) the adversarial behavior leads to stakes slashed. So either the adversary that controls 25% of the network doesn’t do anything wrong, or they rapidly lose their 25% control due to slashing

If an adversary controls more than 1/3 of the seats in a shard, they can run double spends; most likely they will be detected soon enough, the damage reverted, and their stake slashed.

One way for an adversary to avoid a “big slashing” is to control P% of the stake through many different identities, each holding very little of that stake (as little as the threshold demanded by thresholded proof of stake). I’m assuming there is no way for an external observer/verifier to attribute all of these accounts to a single entity.

Suppose the adversary (“perfectly” coordinating all of their accounts) has more than 1/3 of the verifiers on a single shard and performs a “big” double spend on this shard. After some time, this is detected, reverted, and the adversary’s stake is slashed. But… how much of their stake gets slashed? As I see it, only the stake associated with accounts on the crime shard can be blamed, so the penalty can be small compared to the reward from the double spend.
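To make the asymmetry concrete, here is a back-of-the-envelope sketch; the total stake, shard count, and double-spend value are all made-up numbers, not figures from this thread:

```python
# All numbers are hypothetical, for illustration only.
total_stake = 1_000_000                       # tokens staked network-wide
num_shards = 100
stake_per_shard = total_stake / num_shards    # 10,000 tokens bonded per shard
slashable = stake_per_shard / 3               # roughly the adversary's 1/3+ on the crime shard
double_spend_value = 50_000                   # value moved in the double spend

profit = double_spend_value - slashable
print(f"stake at risk: {slashable:,.0f} tokens; net gain if the attack pays off: {profit:,.0f}")
```

With per-shard slashing only, the stake at risk is bounded by one shard's bond, so a large enough double spend can dwarf the penalty.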


#3

AFAIU you are addressing two independent concerns: 1) adversaries can split their stake between multiple entities; 2) the fork might involve a transaction that is larger than the stake that can be slashed.

  1. The slasher does not need to formally prove that the nodes that caused the fork belong to the same entity. The 1/3+ of the stake in the shard that caused the fork will be slashed anyway, and it does not really matter whether that 1/3+ stake came from one entity or multiple entities.

  2. There are several solutions. The clients who send and receive transactions are aware of the total stake in the shard (and remember, 1/3+ of the stake in a single shard is still a large amount of money). So if you are a receiver who is expected to provide a service in exchange for some financial transaction, and you suspect the sender might be malicious, you have two options: either provide the service only after a sufficient amount of time has passed, or ask for the transaction to be split over time so that each piece is less than 1/3 of the stake in the shard.
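The second option can be sketched as a simple splitting rule (the function and names are illustrative; `shard_stake` stands for the total stake bonded in the shard):

```python
import math

def split_payment(amount, shard_stake):
    """Split a payment into equal chunks, each at most 1/3 of the shard's
    stake, so no single chunk is worth more than a forker would forfeit."""
    threshold = shard_stake / 3
    n = math.ceil(amount / threshold)
    return [amount / n] * n

chunks = split_payment(25_000, shard_stake=10_000)
print(len(chunks), chunks[0])  # 8 chunks of 3,125 each, each below 10,000/3
```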


#4

I was addressing mainly point 2 in your description. Both of the proposed solutions affect fast finality for receivers, since in both cases they have to wait (a long time from the perspective of certain Dapps) before accepting the transaction.

If the transaction value is orders of magnitude less than the coins at stake, the loss is too big for an adversary to misbehave; but when the gap is short, a rational adversary might weigh external factors and take the risk of a double spend.
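A toy expected-value check along these lines (the model and the numbers are mine; a real analysis would also account for detection probability and external payoffs):

```python
def double_spend_pays(tx_value, slashable_stake, p_caught=1.0):
    """Crude rationality check: the attack is worth the risk only if the
    transaction value exceeds the stake expected to be lost to slashing."""
    return tx_value > p_caught * slashable_stake

print(double_spend_pays(100, 3_333))     # False: tiny tx, loss dwarfs the gain
print(double_spend_pays(50_000, 3_333))  # True: the gap is short enough to tempt
```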

Receivers can “safely” accept “small” transactions as soon as they see them on the graph, but as you said, they need to protect themselves in the case of “large” transactions.