Case Study

BERACHAIN


Berachain partnered with Hypotenuse Labs to build and launch core DeFi infrastructure designed for scale, performance, and long term ecosystem growth.

LET'S CONNECT

Services

Custom Software Development
Security, Infrastructure, and Reliability
Dedicated Engineering Teams

Technical Focus

Smart Contracts
Backend Systems
Distributed Systems
Observability & Monitoring
Protocol Development
Token Engineering
Validator Infrastructure

Intro

Berachain is a high-performance, EVM-compatible Layer 1 built on the Cosmos SDK, designed to align incentives across validators, developers, and users.

Summary

We redesigned the architecture into two services: a durable, queue-driven processor and a horizontally scalable public API. We validated the design with layered testing and benchmarking. The resulting system was benchmarked to handle 18,000+ RPS and shipped with ~70% test coverage (excluding generated code), improving resilience and making future scaling and upgrades significantly easier.

The Challenge

When Berachain reached out, the system was a tightly coupled monolith. It worked at low load, but scaling introduced data races, inconsistencies, and duplicated rewards: the kinds of failures that erode trust when users rely on rankings and claim state.

Our Solution + Process

Berachain’s original backend was a monolith with tightly coupled responsibilities:

  • ingestion and computation

  • claim tracking

  • public API serving

  • scaling logic

As load increased, horizontal scaling triggered failures such as inconsistent state and duplicated reward records. Because validator rankings and reward visibility depended on this data, correctness was foundational to protocol trust.

On top of that, limited access to dev environments and monitoring increased debugging cost, so the system needed to be self-healing, observable, and safe-by-design.

Split the system: processor + public API

  • We separated concerns into two independent services:

    • Queue-based processor: ingests validator inputs and on-chain events, computes yields, and updates canonical state.

    • Public REST API: serves computed reward data to users at high throughput without risking overload of ingestion and computation.

  • This architecture prevents read traffic from interfering with compute workloads, and it makes scaling predictable: scale the API horizontally for reads, and scale the processor according to ingestion volume.

Resilience and correctness in the processor

  • We designed the processor to be crash-tolerant and replay-friendly:

    • durable job handling with retries,

    • safe recovery on restart (no silent data loss),

    • consistency protections to prevent duplicated rewards and partial writes.

Production readiness through testing + benchmarking

  • We validated the system at multiple layers:

    • unit and integration tests for core logic and persistence behavior,

    • manual testnet validation across frontend ↔ backend workflows,

    • custom benchmarking scripts run locally and in cluster-like environments.

  • Using Go’s tooling, we reached ~70% test coverage (excluding generated code), and we benchmarked both services under production-like load to confirm throughput and stability.

We also improved database driver and connection pooling strategies to avoid bottlenecks as concurrency increased.

The Results

Performance and scalability

  • Benchmarked to handle 18,000+ requests per second across the system’s public-facing surfaces

  • Predictable scaling via separation of compute and read paths

  • Durable queue + retries support recovery during crashes or transient failures

Delivery and trust

  • Delivered as a complete production-ready system with no follow-on implementation work required

  • Stable post-launch behavior with no regression issues observed during validation

  • Client engineers highlighted the split architecture as a major improvement for maintainability and future upgrades