In light of continued innovation in money and payments, many central banks are exploring the creation of a central bank digital currency (CBDC), a new form of central bank money which supplements existing central bank reserve account balances and physical currency [5]. CBDCs could exist in various forms depending on a central bank's objectives, including a general-purpose CBDC that can be made available to the public for retail, e-commerce, and person-to-person payments. Central banks, researchers, and policymakers have proposed various objectives including fostering financial inclusion, improving efficiency in payments, prompting innovation in financial services, maintaining financial stability, and promoting privacy [2, 3, 9, 19]. Because the CBDC research process is still in early stages in many jurisdictions, several technical design questions remain open for investigation. The answers to these questions will have meaningful implications and consequences for what options are, or are not, available to policymakers.

The Federal Reserve Bank of Boston (Boston Fed) and the Massachusetts Institute of Technology's Digital Currency Initiative (MIT DCI) are collaborating on exploratory research known as Project Hamilton, a multiyear research project to explore the CBDC design space and gain a hands-on understanding of a CBDC's technical challenges and opportunities. This paper presents the project's Phase 1 research. Our primary goal was to design a core transaction processor that meets the robust speed, throughput, and fault tolerance requirements of a large retail payment system. Our secondary goal was to create a flexible platform for collaboration, data gathering, comparison with multiple architectures, and other future research. With this intent, we are releasing all software from our research publicly under the MIT open source license.
By focusing Phase 1 on the feasibility and performance of basic, but resilient, transactions, we aim to create a foundation for more complex functionality in Phase 2. The processor's baseline requirements include time to finality of less than five seconds, throughput of greater than 100,000 transactions per second, and wide-scale geographic fault tolerance. Topics left to Phase 2 include critical questions around high-security issuance, systemwide auditability, programmability, how to balance privacy with compliance, technical roles for intermediaries, and resilience to denial-of-service attacks. As exploratory research on the implications of different design choices, this work is not intended for a pilot or public launch. That said, we consider performance under a variety of extensive, realistic workloads and fault tolerance requirements.
1. Core Design and Results

In Phase 1, we created a design for a modular, extensible transaction processing system, implemented it in two distinct architectures, and evaluated their speed, throughput, and fault tolerance. Furthermore, our design can support a variety of models for intermediaries and data storage, including users custodying their own funds, and does not require storing personally identifying user data in the core of the transaction processor.

In our design, users interact with a central transaction processor using digital wallets storing cryptographic keys. Funds are addressed to public keys, and wallets create cryptographic signatures to authorize payments. The transaction processor, run by a trusted operator (such as the central bank), stores cryptographic hashes representing unspent central bank funds. Each hash commits to a public key and value. Wallets issue signed transactions which destroy the funds being spent and create an equivalent amount of new funds owned by the receiver. The transaction processor validates transactions and atomically and durably applies changes to the set of unspent funds. In this version of our work, there are no intermediaries, fees, or identities outside of public keys. However, our design supports adding these roles and other features in the future.

The flexibility, performance, and resiliency challenges of this design are addressed with three key ideas. The first idea is to decouple transaction validation from execution, which enables us to use a data structure that stores very little data in the core transaction processor. It also makes it easier to scale parts of the system independently. The second idea is a transaction format and protocol that is secure and provides flexibility for potential functionality like self-custody and future programmability. The third idea is a system design and commit protocol that efficiently executes these transactions, which we implemented with two architectures.
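To make the core data model concrete, the following is a minimal, illustrative sketch in Python of a processor whose state is only a set of hashes, each committing to a public key and a value. It is not the project's implementation (the released software is a separate codebase), and the hash encoding, the `mint` helper, and the names used here are our own assumptions for illustration; signature verification and value conservation are assumed to happen in the separate validation step described above.

```python
import hashlib
import secrets

def fund_hash(pubkey: bytes, value: int, nonce: bytes) -> bytes:
    # Commitment to one unspent fund. The processor stores only this
    # hash, not the owner's key or the amount. (Illustrative encoding,
    # not the project's actual serialization.)
    return hashlib.sha256(pubkey + value.to_bytes(8, "big") + nonce).digest()

class TransactionProcessor:
    """Toy model of the core: a set of hashes of unspent central bank funds."""

    def __init__(self) -> None:
        self.unspent: set[bytes] = set()

    def mint(self, pubkey: bytes, value: int) -> bytes:
        # Hypothetical issuance helper so the example is self-contained.
        h = fund_hash(pubkey, value, secrets.token_bytes(16))
        self.unspent.add(h)
        return h

    def apply(self, spent: list[bytes], created: list[bytes]) -> bool:
        # Atomically destroy the funds being spent and create the new
        # funds owned by the receiver. Validation (signatures, balanced
        # values) is assumed to have already succeeded upstream.
        if not all(h in self.unspent for h in spent):
            return False  # unknown fund or double spend: reject the whole tx
        self.unspent.difference_update(spent)
        self.unspent.update(created)
        return True
```

A spend that reuses an already-destroyed hash fails the membership check, which is how this state model rules out double spends regardless of how the funds were addressed.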
Both architectures met and exceeded our speed and throughput requirements. The first architecture processes transactions through an ordering server which organizes fully validated transactions into batches, or blocks, and materializes an ordered transaction history. This architecture durably completed over 99% of transactions in under two seconds, and the majority of transactions in under 0.7 seconds. However, the ordering server created a bottleneck that limited peak throughput to approximately 170,000 transactions per second.

Our second architecture processes transactions in parallel on multiple computers and does not rely on a single ordering server to prevent double spends. This results in superior scalability but does not materialize an ordered history for all transactions. This second architecture demonstrated throughput of 1.7 million transactions per second, with 99% of transactions durably completing in under a second and the majority of transactions completing in under half a second. It also appears to scale linearly with the addition of more servers. To provide resilience, each architecture can tolerate the loss of two datacenter locations (for example, due to natural disasters or loss of network connectivity) while seamlessly continuing to process transactions and without losing any data.