Sui has introduced Tidehunter, a purpose-built blockchain storage engine designed to replace RocksDB by reducing write amplification and delivering higher, more consistent performance.

Tidehunter: Sui’s Next-Generation Database Optimized For Low Latency And Reduced Write Amplification


Sui, a Layer 1 blockchain network, has introduced Tidehunter, a new storage engine engineered to align with the performance demands, data access characteristics, and operational constraints commonly found in contemporary blockchain infrastructures. 

The system is positioned as a potential successor to the existing database layer used by both validators and full nodes, reflecting a broader effort to modernize core infrastructure in response to the evolving scale and workload profiles of production blockchain environments.

Sui originally relied on RocksDB as its primary key–value storage layer, a widely adopted and mature solution that enabled rapid protocol development. As the platform expanded and operational demands increased, fundamental limitations of general-purpose LSM-tree databases became increasingly apparent in production-like environments. 

Extensive tuning and deep internal expertise could not fully address structural inefficiencies that conflicted with the access patterns typical of blockchain systems. This led to a strategic shift toward designing a storage engine optimized specifically for blockchain workloads, resulting in the development of Tidehunter.

A central factor behind this decision was persistent write amplification. Measurements under realistic Sui workloads showed amplification levels of roughly ten to twelve times, meaning that relatively small volumes of application data generated disproportionately large amounts of disk traffic. While such behavior is common in LSM-based systems, it reduces effective storage bandwidth and intensifies contention between background compaction and read operations. In write-intensive or balanced read-write environments, this overhead becomes increasingly restrictive as throughput scales. 

Load testing on high-performance clusters confirmed the impact, with disk utilization nearing saturation despite moderate application write rates, highlighting the growing mismatch between conventional storage architectures and modern blockchain performance requirements.
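The arithmetic behind this bottleneck is straightforward. A minimal sketch, using the roughly ten-to-twelve-times amplification cited above (all other figures here are hypothetical, chosen only to illustrate the effect):

```python
# Illustrative arithmetic: how write amplification erodes disk bandwidth.
# Only the ~10-12x amplification figure comes from the article; the
# bandwidth numbers are hypothetical.

def effective_app_write_rate(disk_write_bw_mb_s: float, write_amp: float) -> float:
    """Application-level write rate the disk can sustain, given that every
    application byte turns into `write_amp` bytes of physical disk traffic."""
    return disk_write_bw_mb_s / write_amp

disk_bw = 2000.0  # MB/s, a hypothetical NVMe budget reserved for writes

print(effective_app_write_rate(disk_bw, 11.0))  # LSM-style engine at ~11x
print(effective_app_write_rate(disk_bw, 1.2))   # log-structured engine near 1x
```

At eleven-fold amplification, a disk capable of 2,000 MB/s of writes can absorb under 200 MB/s of application data, which is why moderate application write rates were enough to push disks toward saturation.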

Tidehunter Architecture: A Storage Engine Optimized For Blockchain Access Patterns And Sustained High-Throughput Workloads

Storage behavior in Sui and comparable blockchain platforms is dominated by a small set of recurring data access patterns, and Tidehunter is architected specifically around these characteristics. A large portion of state is addressed using cryptographic hash keys that are evenly distributed and typically map to relatively large records, which removes locality but simplifies consistency and correctness. 

At the same time, blockchains rely heavily on append-oriented structures, such as consensus logs and checkpoints, where data is written in order and later retrieved using monotonically increasing identifiers. These environments are also inherently write-heavy, while still requiring fast access on latency-critical read paths, making excessive write amplification a direct threat to both throughput and responsiveness.

At the center of Tidehunter is a high-concurrency write pipeline built to exploit the parallel capabilities of modern solid-state storage. Incoming writes are funneled through a lock-free write-ahead log capable of sustaining extremely high operation rates, with contention limited to a minimal allocation step. 

Data copying proceeds in parallel, and the system avoids per-operation system calls by using writable memory-mapped files, while durability is handled asynchronously by background services. This design produces a predictable and highly parallel write path that can saturate disk bandwidth without becoming constrained by CPU overhead.
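The write path above can be sketched as follows. This is a simplified model, not Tidehunter's actual code: a `bytearray` stands in for the writable memory-mapped file, and a lock models the single contended allocation step (the real engine would use an atomic fetch-add rather than a lock); all names are hypothetical.

```python
import threading

class WalSketch:
    """Sketch of a log write path where contention is confined to a tiny
    offset-reservation step and the data copy proceeds in parallel."""

    def __init__(self, capacity: int):
        self._buf = bytearray(capacity)  # stand-in for a writable mmap
        self._next = 0
        self._alloc_lock = threading.Lock()  # models the one contended step

    def append(self, record: bytes) -> int:
        with self._alloc_lock:               # reserve space: the only contention
            offset = self._next
            self._next += len(record)
        # the copy happens outside the lock, so many writers proceed in parallel
        self._buf[offset:offset + len(record)] = record
        return offset                        # index entries can reference this offset

    def read(self, offset: int, length: int) -> bytes:
        return bytes(self._buf[offset:offset + length])
```

Because only the offset reservation is serialized, the copy stage scales with available cores, and durability can be deferred to a background flush, which matches the asynchronous durability model described above.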

Reducing write amplification is treated as a primary architectural objective rather than an optimization step. Instead of using the log as a temporary staging area, Tidehunter stores data permanently in log segments and builds indexes that reference offsets directly, eliminating repeated rewrites of values. 

Indexes are heavily sharded to keep write amplification low and to increase parallelism, removing the need for traditional LSM-tree structures. For append-dominated datasets, such as checkpoints and consensus records, specialized sharding strategies keep recent data tightly grouped so that write overhead remains stable even as historical data grows.
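The indexing idea can be illustrated with a minimal sketch (names and structure are hypothetical, not Tidehunter's API): values live permanently in log segments, the index stores only their locations, and sharding by a key prefix spreads updates across many small structures.

```python
class ShardedOffsetIndex:
    """Sketch of a sharded index that maps keys to (segment, offset) pairs.
    Values are never rewritten: the index points into immutable log segments,
    so there is no compaction-driven copying of data."""

    def __init__(self, num_shards: int = 256):
        self._shards = [dict() for _ in range(num_shards)]

    def _shard_for(self, key: bytes) -> dict:
        # uniformly distributed hash keys make the first byte a fair shard id
        return self._shards[key[0] % len(self._shards)]

    def put(self, key: bytes, segment_id: int, offset: int) -> None:
        self._shard_for(key)[key] = (segment_id, offset)

    def get(self, key: bytes):
        return self._shard_for(key).get(key)
```

For append-dominated tables, the shard-selection function would instead group by recency (for example, by checkpoint range), keeping hot shards small even as history grows.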

For tables addressed by uniformly distributed hash keys, Tidehunter introduces a uniform lookup index optimized for predictable, low-latency access. Rather than issuing multiple small and random reads, the index reads a slightly larger contiguous region that statistically contains the desired entry, allowing most lookups to complete in a single disk round trip. 

This approach deliberately trades some read throughput for lower and more stable latency, a tradeoff that becomes practical because reduced write amplification frees substantial disk bandwidth for read traffic. The result is more consistent performance on latency-sensitive operations such as transaction execution and state validation.
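The single-round-trip lookup can be sketched as follows. This is an in-memory model under stated assumptions, not the real on-disk format: entries are sorted by uniformly distributed 64-bit keys, so an entry's expected position is proportional to its key, and one contiguous read of a small window around that position usually contains it.

```python
def uniform_lookup(entries, key, window):
    """Sketch of a uniform lookup: `entries` is a sorted list of (key, value)
    pairs whose integer keys are uniformly distributed in [0, 2**64). In a
    real engine the window scan below would be one contiguous disk read."""
    n = len(entries)
    guess = int(key / 2**64 * n)          # expected position under uniformity
    lo = max(0, guess - window)
    hi = min(n, guess + window + 1)
    # one contiguous "read": scan the small region that statistically
    # contains the entry, instead of several random probes
    for k, v in entries[lo:hi]:
        if k == key:
            return v
    return None  # a full implementation would fall back to a wider read
```

Reading a few extra entries per lookup costs some read throughput, but every lookup becomes one predictable I/O, which is the latency tradeoff described above.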

To further control tail latency at scale, Tidehunter combines direct I/O with application-managed caching. Large historical reads bypass the operating system’s page cache to prevent cache pollution, while recent and frequently accessed data is retained in user-space caches informed by application-level access patterns. In combination with its indexing layout, this reduces unnecessary disk round trips and improves predictability under sustained load.
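A minimal sketch of the cache side of this design, with a plain LRU policy standing in for whatever application-informed policy the engine actually uses (the API and names here are hypothetical):

```python
from collections import OrderedDict

class HotDataCache:
    """Sketch of an application-managed cache: hot records stay in user
    space, and a miss falls through to a disk read that, in the real system,
    would use direct I/O so large historical scans cannot pollute the cache."""

    def __init__(self, capacity: int):
        self._capacity = capacity
        self._lru = OrderedDict()

    def get(self, key, read_from_disk):
        if key in self._lru:
            self._lru.move_to_end(key)       # refresh recency on a hit
            return self._lru[key]
        value = read_from_disk(key)          # would be an O_DIRECT read
        self._lru[key] = value
        if len(self._lru) > self._capacity:
            self._lru.popitem(last=False)    # evict the coldest entry
        return value
```

Keeping eviction decisions in user space is what lets the engine apply blockchain-specific knowledge, such as favoring recent checkpoints, instead of relying on the kernel's generic page-replacement heuristics.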

Data lifecycle management is also simplified. Because records are stored directly in log segments, removing obsolete historical data can be performed by deleting entire log files once they fall outside the retention window. This avoids the complex and I/O-intensive compaction mechanisms required by LSM-based databases and enables faster, more predictable pruning even as datasets expand.
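The pruning model reduces to set arithmetic over whole segments. A minimal sketch, assuming each segment records the highest record identifier it contains (the data shapes here are illustrative):

```python
def prune_segments(segments, retention_floor):
    """Sketch of retention-based pruning: because values live in append-only
    log segments, expiring history means dropping whole files. Any segment
    whose newest record falls below the retention floor is deleted in one
    step, with no compaction or record-level copying."""
    dropped = [sid for sid, max_record in segments.items()
               if max_record < retention_floor]
    for sid in dropped:
        del segments[sid]     # in practice, an os.remove() of the segment file
    return dropped
```

Deleting a file is a constant-cost metadata operation, so pruning time stays flat as the dataset grows, in contrast to LSM compaction, whose cost scales with the volume of data being rewritten.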

Across workloads designed to reflect real Sui usage, Tidehunter demonstrates higher throughput and lower latency than RocksDB while consuming significantly less disk write bandwidth. The most visible improvement comes from the near elimination of write amplification, which allows disk activity to more closely match application-level writes and preserves I/O capacity for reads. These effects are observed both in controlled benchmarks and in full validator deployments, indicating that the gains extend beyond synthetic testing.

Evaluation is performed using a database-agnostic benchmark framework that models realistic mixes of inserts, deletions, point lookups, and iteration workloads. Tests are parameterized to reflect Sui-like key distributions, value sizes, and read-write ratios, and are executed on hardware aligned with recommended validator specifications. Under these conditions, Tidehunter consistently sustains higher throughput and lower latency than RocksDB, with the largest advantages appearing in write-heavy and balanced scenarios.
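A database-agnostic harness of this kind typically starts from a parameterized operation stream. The sketch below is entirely hypothetical and not the framework described above; it only illustrates how a weighted, seeded mix of operations can be replayed identically against different storage engines.

```python
import random

def workload_mix(num_ops, weights, seed=42):
    """Sketch of a parameterized workload generator: operations are drawn
    from a weighted mix (e.g. inserts, deletes, point lookups, iterations),
    and a fixed seed makes the stream reproducible across engines."""
    rng = random.Random(seed)
    ops = list(weights)
    return [rng.choices(ops, weights=[weights[o] for o in ops])[0]
            for _ in range(num_ops)]

# e.g. a write-heavy mix, loosely in the spirit of a Sui-like profile
mix = workload_mix(1000, {"insert": 0.5, "delete": 0.1,
                          "lookup": 0.3, "iterate": 0.1})
```

Fixing the seed is what makes cross-engine comparisons fair: both databases see the same keys and operations in the same order, so differences in throughput and latency reflect the engines rather than the workload.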

Validator-level benchmarks further confirm the results. When integrated directly into Sui and subjected to sustained transaction load, systems using Tidehunter maintain stable throughput and lower latency at operating points where RocksDB-backed deployments begin to suffer from rising disk utilization and performance degradation. Measurements show reduced disk pressure, steadier CPU usage, and improved finality latency, highlighting a clear divergence in behavior under comparable load.

Tidehunter represents a practical response to the operational demands of long-running, high-throughput blockchain systems. As blockchains move toward sustained rather than burst-driven workloads, storage efficiency becomes a foundational requirement for protocol performance. The design of Tidehunter reflects a shift toward infrastructure built explicitly for that next stage of scale, with further technical detail and deployment plans expected to follow.

The post Tidehunter: Sui’s Next-Generation Database Optimized For Low Latency And Reduced Write Amplification appeared first on Metaverse Post.

