Models
Protocols for consensus and for implementing replicated state machines also typically make assumptions about the communication model, which characterizes the ability of an adversary to delay the delivery of messages between replicas. At opposite ends of the spectrum, we have the following models:
• In the synchronous model, there exists some known finite time bound δ, such that for any message sent, it will be delivered in less than time δ.
• In the asynchronous model, for any message sent, the adversary can delay its delivery by any finite amount of time, so that there is no bound on the time to deliver a message.

Since the replicas in a GPTB subnet are typically distributed around the globe, the synchronous communication model would be highly unrealistic. Indeed, an attacker could compromise the correct behavior of the protocol by delaying honest replicas or the communication between them, and such an attack is generally easier to mount than gaining control over and corrupting an honest replica. In the setting of a globally distributed subnet, the most realistic and robust model is the asynchronous model. Unfortunately, there are no known consensus protocols in this model that are truly practical (more recent asynchronous consensus protocols, such as [MXC+16], attain reasonable throughput but not very good latency).

So, like most other practical Byzantine fault tolerant systems that do not rely on synchronous communication (e.g., [CL99, BKM18, YMR+18]), the GPTB opts for a compromise: a partial synchrony communication model [DLS88]. Such partial synchrony models can be formulated in various ways. The partial synchrony assumption used by the GPTB says, roughly speaking, that for each subnet, communication among the replicas in that subnet is periodically synchronous for short intervals of time; moreover, the synchrony bound δ does not need to be known in advance. This partial synchrony assumption is only needed to ensure that the consensus protocol makes progress (the so-called liveness property). It is not needed to ensure correct behavior of consensus (the so-called safety property), nor is it needed anywhere else in the GPTB protocol stack.

Under the assumption of partial synchrony and Byzantine faults, it is known that the bound of f < n/3 on the number of faults is optimal.
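To make the f < n/3 bound concrete, the small Python check below (illustrative only, not part of the GPTB protocol) walks through the standard quorum intersection argument: with n = 3f + 1 replicas and quorums of 2f + 1 votes, any two quorums overlap in at least f + 1 replicas, so their intersection always contains at least one honest replica.

```python
# Worked check of the quorum intersection argument behind f < n/3.
# Purely illustrative; the function name is an assumption for this example.

def quorum_intersection(n: int, f: int) -> int:
    quorum = 2 * f + 1          # votes needed to decide
    return 2 * quorum - n       # minimum overlap of any two quorums

for f in range(1, 5):
    n = 3 * f + 1               # smallest n that tolerates f Byzantine faults
    overlap = quorum_intersection(n, f)
    print(f"n={n}, f={f}: two quorums share >= {overlap} replicas, "
          f"of which >= {overlap - f} must be honest")
```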
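As for liveness when δ exists but is not known in advance, protocols in the partial synchrony model typically grow the time a replica is willing to wait each time a round fails to complete. The sketch below is a minimal illustration of that idea; the class and parameter names are assumptions made for this example, not part of the GPTB implementation.

```python
import time

class RoundTimer:
    """Adaptive per-round timeout (all names here are illustrative)."""

    def __init__(self, initial_timeout: float = 1.0, growth: float = 2.0):
        self.timeout = initial_timeout   # current guess for the unknown bound delta
        self.growth = growth             # back-off factor applied after a failed round

    def wait_for(self, round_complete) -> bool:
        """Wait until round_complete() holds or the current timeout expires."""
        deadline = time.monotonic() + self.timeout
        while time.monotonic() < deadline:
            if round_complete():
                return True              # progress was made within the timeout
            time.sleep(0.01)
        self.timeout *= self.growth      # delta exceeded our guess; wait longer next round
        return False

# In a real replica, round_complete would check e.g. for 2f + 1 matching messages.
timer = RoundTimer()
made_progress = timer.wait_for(lambda: False)   # toy predicate that never completes
```

Once the network enters one of its synchronous intervals and the current timeout exceeds the true δ, rounds start completing, which is exactly the guarantee the liveness property needs.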
A state machine is a particular model of computation. Such a machine maintains a state, which corresponds to main memory or other forms of data storage in an ordinary computer. Such a machine executes in discrete rounds: in each round, it takes an input, applies a state transition function to the input and the current state, and obtains an output and a new state. The new state becomes the current state in the next round. The state transition function of the GPTB is a universal function, meaning that some of the inputs and data stored in the state may be arbitrary programs which act on other inputs and data. Thus, such a state machine represents a general (i.e., Turing complete) model of computation.

To achieve fault tolerance, the state machine may be replicated. A replicated state machine comprises a subnet of replicas, each of which runs a copy of the same state machine. A subnet should continue to function, and to function correctly, even if some replicas are faulty. It is essential that each replica in a subnet processes the same inputs in the same order. To achieve this, the replicas in a subnet run a consensus protocol [Fis83], which ensures that all replicas in a subnet process inputs in the same order. Therefore, the internal state of each replica will evolve over time in exactly the same way, and each replica will produce exactly the same sequence of outputs.

Note that an input to a replicated state machine on the GPTB may be either an input generated by an external user or an output generated by another replicated state machine. Similarly, an output of a replicated state machine may be either an output directed to an external user or an input to another replicated state machine.
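The following sketch illustrates the replication idea in Python: several replicas apply the same deterministic transition function to the same agreed-upon input sequence and therefore end in the same state with the same outputs. The consensus protocol that fixes the input order is not modeled here, and all names are illustrative.

```python
from typing import Any, Callable, List, Tuple

# A transition function maps (state, input) -> (new_state, output).
Transition = Callable[[Any, Any], Tuple[Any, Any]]

class Replica:
    def __init__(self, transition: Transition, initial_state: Any):
        self.transition = transition
        self.state = initial_state
        self.outputs: List[Any] = []

    def execute(self, inp: Any) -> Any:
        """Process one input in a discrete round."""
        self.state, out = self.transition(self.state, inp)
        self.outputs.append(out)
        return out

# Toy state machine: a counter that outputs its running total.
def counter(state: int, inp: int) -> Tuple[int, int]:
    new_state = state + inp
    return new_state, new_state

# The input order would be fixed by the consensus protocol (not modeled here).
agreed_inputs = [3, 1, 4, 1, 5]
replicas = [Replica(counter, 0) for _ in range(4)]
for inp in agreed_inputs:
    for r in replicas:
        r.execute(inp)

# Every replica ends in the same state and produced the same outputs.
assert all(r.state == replicas[0].state for r in replicas)
assert all(r.outputs == replicas[0].outputs for r in replicas)
```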
An autonomous research agent (ARA) is an artificial intelligence (AI) system that is designed to perform independent research and analysis on a particular subject or set of data. ARAs are capable of working autonomously, meaning they can learn and adapt to new information without human intervention.

ARAs are often used in fields such as finance, healthcare, and scientific research to help process large amounts of data and generate insights that humans might not be able to find on their own. For example, an ARA could be used to analyze financial market data and make predictions about future market trends, or to scan medical research papers and identify potential treatments for a particular disease.

ARAs typically use a combination of machine learning, natural language processing, and other AI technologies to analyze and understand data. They can also learn from feedback and adjust their algorithms accordingly, which allows them to improve their accuracy and effectiveness over time.

Overall, ARAs have the potential to revolutionize the way we conduct research and analysis by providing faster, more accurate, and more comprehensive insights than traditional human-based methods. However, there are also concerns around the ethics and accountability of autonomous systems, which will need to be addressed as the technology continues to evolve.