The Flow network is getting another significant upgrade that will enhance its BFT (Byzantine Fault Tolerance) capabilities and lay the groundwork for a more efficient data exchange format, new streaming APIs to improve the developer experience, and a significant increase in transaction throughput in the future.
Let's dive deeper into each of these updates.
Improving resilience and performance for all non-consensus nodes
Flow has a multi-role architecture, and the consensus nodes are responsible for running the consensus algorithm among themselves and extending the chain by building new blocks.
However, all other non-consensus nodes – access, execution, collection, verification, and even non-staked nodes such as observers – need to keep track of the chain or, in other words, follow the chain as it advances in order to fulfill their responsibilities.
The Consensus Follower is the module that allows any non-consensus node to follow the chain by providing it with the latest block finalized by the consensus nodes. The follower independently applies consensus rules to verify block validity and detect block finality. Essentially, it shields the node's higher-level processing logic from malicious nodes sending false data.
In this network upgrade, the Consensus Follower architecture has been revamped to make it significantly more Byzantine fault tolerant and efficient. The revamp includes:
- Immediate validation of incoming block headers, accepting only valid extensions of the chain for further processing. The new follower utilizes the Quorum Certificate (QC) contained in each header as cryptographic proof of consensus progress. Specifically, the QC proves that a super-majority of the consensus committee agreed on the validity of the chain up to the parent block.
- Optimized syncing of block ranges by only validating the QC of the block at the top of the range.
- Optimized disk usage and attack resilience: potentially invalid blocks are never persisted to disk, so all stored blocks are valid chain extensions.
- Parallelized processing: certified and uncertified blocks are handled separately in the processing pipeline.
Consequently:
- Every node gains substantially improved BFT resilience in the core function of consensus following, moving Flow closer to autonomously handling and responding to attack scenarios.
- This also provides the foundation for building light clients with tiny computational footprints, which can run on limited hardware, including smartphones.
You can read more about it here.
Adding building blocks for faster and more efficient Cadence data exchange
Today, dApps, wallets, and other clients that send transactions, execute scripts, and read transactions or events from the chain use the JSON-Cadence Data Interchange Format (JSON-CDC) to serialize and deserialize Cadence external values – transaction arguments, script arguments, and values returned from the chain, such as a transaction event or a script's return value. However, because JSON-CDC is based on JSON, it has several shortcomings. First, JSON is fairly verbose, resulting in large network payloads. Second, it does not inherently provide a way to encode values deterministically.
The Cadence Compact Format (CCF) is a new data format designed for compact, efficient, and deterministic encoding of Cadence external values. CCF uses CBOR (RFC 8949) as specified in the CCF specification. This upgrade introduces a fully implemented CCF codec in the codebase.
CCF addresses these shortcomings. CCF-based messages can be fully self-describing or partially self-describing, and both forms are more compact than JSON-CDC messages. For example, a FeesDeducted event that uses 298 bytes when encoded in JSON-CDC (minified) uses only 118 bytes in CCF's fully self-describing mode and approximately 20 bytes in its partially self-describing mode.
Moreover, CCF-based protocols can send Cadence metadata just once for all messages of that same type.
The CCF codec is also faster and uses less memory than the JSON-CDC codec, and it has been comprehensively tested with a large suite of test cases and fuzzing.
In the next phase, the CCF codec will be used to serialize and deserialize all Cadence external values (i.e., transaction arguments, script arguments, events, and script return values), making the exchange of Cadence data with the chain much more efficient.
Paving the way for newer and more ergonomic ways to consume chain data
In this network upgrade, the access node ships with the beta implementation of the event streaming API as proposed in the FLIP Event Streaming API. The event streaming API, when eventually turned on via a feature toggle, will provide a new and more ergonomic way for dApps to consume execution data (transaction events and account updates) via a push-based streaming model instead of continuously polling for changes.
Updates to eventually support parallel transaction execution
In the last mainnet upgrade, we alluded to parallel transaction execution, which will allow the execution node to execute multiple transactions simultaneously. This upgrade includes nearly ninety updates to the execution node that bring it closer to fully supporting parallel transaction execution.
On track with the new Flow upgrade schedule
If you haven’t noticed, Flow is now on the new upgrade schedule announced previously: there is only one scheduled spork per quarter that causes network downtime, unlike last year, when there was a spork every two months. In the months without a spork, a height coordinated upgrade takes place instead, resulting in no more than seven minutes of network downtime – all while allowing new features and bug fixes to be pushed out to all nodes more frequently.
We hope you’re as excited as we are about this upgrade, as it will further accelerate mainstream Web3 adoption by making Flow more secure, performant, and developer-friendly.
Thank you to Alex Hentschel, Bastian Müller, Faye Amacker, Jordan Schalm, and Jan Bernatik for contributing to this content.