dApp Backend Development with Rust

Most dApp backends are written in Node.js, and that's fine up to a certain scale. But there is a class of tasks where Rust isn't just faster; it fundamentally changes what's possible: processing millions of blockchain events in real time, MEV bots with sub-millisecond latency, heavy cryptographic computation, and parsing and indexing of on-chain data. This article covers exactly those cases.

When Rust is Justified for DApp Backend

Not every dApp needs a Rust backend. Node.js + TypeScript covers 80% of cases. Rust is justified when:

  • Latency is critical: MEV, arbitrage bots, liquidations; milliseconds cost money
  • Throughput is high: indexing hundreds of thousands of blocks, processing event streams from multiple nodes
  • Cryptography is in the hot path: ZK-proof generation, signature verification
  • Memory safety without GC pauses matters: a DeFi backend can't afford a 50ms GC pause in the middle of a risk check

Stack: alloy + axum

alloy is the modern Rust library for Ethereum and the replacement for the now-deprecated ethers-rs. It's developed by the same team and has a significantly better API:

[dependencies]
alloy = { version = "0.3", features = ["full"] }
axum = "0.7"
tokio = { version = "1", features = ["full"] }
tower-http = { version = "0.5", features = ["cors", "trace"] }
serde = { version = "1", features = ["derive"] }
sqlx = { version = "0.7", features = ["postgres", "runtime-tokio-tls"] }
eyre = "0.6"
futures-util = "0.3"

use alloy::{
    providers::{Provider, ProviderBuilder, WsConnect},
    primitives::{address, U256},
    sol,
};

// Generate types from ABI at compile time
sol!(
    #[allow(missing_docs)]
    #[sol(rpc)]
    ERC20,
    "abi/ERC20.json"
);

#[tokio::main]
async fn main() -> eyre::Result<()> {
    let ws = WsConnect::new("wss://eth-mainnet.g.alchemy.com/v2/KEY");
    let provider = ProviderBuilder::new().on_ws(ws).await?;
    
    let token = ERC20::new(address!("A0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"), provider);
    let balance = token.balanceOf(address!("...")).call().await?;
    
    Ok(())
}

The key advantage of the sol! macro is that ABI encoding/decoding is generated at compile time: no runtime overhead and full type safety.

Event Indexer: Subscription and Processing

The most common backend task: listen to contract events and update the database. With alloy this is compact and type-safe:

use std::sync::Arc;

use alloy::primitives::Address;
use alloy::rpc::types::Filter;
use alloy::sol_types::SolEvent;
use futures_util::StreamExt;
use sqlx::PgPool;

async fn index_transfers(
    provider: Arc<impl Provider>,
    db: Arc<PgPool>,
    contract: Address,
    from_block: u64,
) -> eyre::Result<()> {
    let filter = Filter::new()
        .address(contract)
        .event("Transfer(address,address,uint256)")
        .from_block(from_block);
    
    // subscribe_logs returns a Subscription; into_stream() turns it into a Stream
    let mut stream = provider.subscribe_logs(&filter).await?.into_stream();
    
    while let Some(log) = stream.next().await {
        // decode_log takes the inner primitives log; `true` validates the topics
        let transfer = ERC20::Transfer::decode_log(&log.inner, true)?.data;
        
        sqlx::query!(
            "INSERT INTO transfers (tx_hash, from_addr, to_addr, amount, block_number)
             VALUES ($1, $2, $3, $4, $5)
             ON CONFLICT (tx_hash) DO NOTHING",
            log.transaction_hash.map(|h| h.to_string()),
            transfer.from.to_string(),
            transfer.to.to_string(),
            transfer.value.to_string(), // U256 -> String for PostgreSQL numeric
            log.block_number.map(|n| n as i64),
        )
        .execute(&*db)
        .await?;
    }
    
    Ok(())
}

Backfilling historical data: to index past blocks, use get_logs over block ranges. A practical chunk size is 2000 blocks (a common per-request limit on public nodes). Parallelize via tokio::spawn with a semaphore for concurrency control:
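Computing the chunk boundaries is a pure function, worth keeping separate and unit-testable. A minimal std-only helper:

```rust
/// Split an inclusive block range into chunks of at most `chunk_size` blocks.
fn block_ranges(from: u64, to: u64, chunk_size: u64) -> Vec<(u64, u64)> {
    let mut ranges = Vec::new();
    let mut start = from;
    while start <= to {
        let end = (start + chunk_size - 1).min(to);
        ranges.push((start, end));
        start = end + 1;
    }
    ranges
}

fn main() {
    // 5000 blocks split into chunks that respect the 2000-block node limit
    assert_eq!(
        block_ranges(0, 4999, 2000),
        vec![(0, 1999), (2000, 3999), (4000, 4999)]
    );
}
```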

use tokio::sync::Semaphore;

let semaphore = Arc::new(Semaphore::new(10)); // at most 10 requests in flight

let tasks: Vec<_> = block_ranges.into_iter().map(|(from, to)| {
    let semaphore = semaphore.clone();
    let provider = provider.clone();
    
    tokio::spawn(async move {
        // Hold a permit for the duration of the request
        let _permit = semaphore.acquire_owned().await.unwrap();
        fetch_and_index_range(provider, from, to).await
    })
}).collect();

futures::future::join_all(tasks).await;

HTTP API with axum

use std::sync::Arc;

use axum::{Router, routing::get, extract::{State, Path}, Json};
use serde::Serialize;
use tower_http::{cors::CorsLayer, trace::TraceLayer};

#[derive(Clone)]
struct AppState {
    db: PgPool,
    provider: Arc<dyn Provider>,
}

#[derive(Serialize)]
struct BalanceResponse {
    address: String,
    balance: String,
    decimals: u8,
}

async fn get_token_balance(
    State(state): State<AppState>,
    Path((address, token)): Path<(String, String)>,
) -> Result<Json<BalanceResponse>, AppError> {
    let addr: Address = address.parse()?;
    let token_addr: Address = token.parse()?;
    
    let contract = ERC20::new(token_addr, state.provider.clone());
    let balance = contract.balanceOf(addr).call().await?;
    
    Ok(Json(BalanceResponse {
        address,
        balance: balance.to_string(),
        decimals: 18, // in production, read decimals() from the contract instead
    }))
}

let app = Router::new()
    .route("/balance/:address/:token", get(get_token_balance))
    .with_state(state)
    .layer(CorsLayer::permissive())
    .layer(TraceLayer::new_for_http());

Working with Node: Resilience and Failover

A production backend can't depend on a single node. Implement retry logic and fallback:

// RPC endpoints in priority order
const ENDPOINTS: &[&str] = &[
    "wss://eth-mainnet.g.alchemy.com/v2/KEY1",
    "wss://mainnet.infura.io/ws/v3/KEY2",
];

// If one endpoint fails, fall back to the next; per-request retries
// can be layered on top via tower::retry middleware
async fn connect_with_fallback() -> eyre::Result<impl Provider> {
    for url in ENDPOINTS {
        match ProviderBuilder::new().on_ws(WsConnect::new(*url)).await {
            Ok(provider) => return Ok(provider),
            Err(e) => eprintln!("RPC {url} failed: {e}, trying next"),
        }
    }
    eyre::bail!("all RPC endpoints are unavailable")
}

For high-load scenarios we recommend running your own Ethereum node (Erigon for archive data, Reth for speed). Erigon syncs faster than Geth and uses significantly less disk space for an archive node.

Cryptographic Operations

Rust with arkworks or halo2 covers the ZK components. Example: verifying a Groth16 proof on the backend before sending a transaction:

use ark_bn254::{Bn254, Fr};
use ark_groth16::{Groth16, Proof, VerifyingKey};
use ark_snark::SNARK; // brings `verify` into scope

fn verify_proof(
    vk: &VerifyingKey<Bn254>,
    proof: &Proof<Bn254>,
    public_inputs: &[Fr],
) -> bool {
    // Err here means malformed input, not a failed proof check
    Groth16::<Bn254>::verify(vk, public_inputs, proof)
        .unwrap_or(false)
}

In Rust this runs orders of magnitude faster than snarkjs in Node.js.

MEV and Latency Optimization

For MEV bots every microsecond matters:

  • Use IPC or WebSocket connections to Ethereum nodes instead of HTTP polling (less overhead per request)
  • jemalloc instead of the system allocator to reduce allocation latency
  • A dedicated single-threaded runtime (tokio::runtime::Builder::new_current_thread()) with CPU pinning (e.g. via the core_affinity crate) for critical paths
  • Flamegraph profiling via cargo flamegraph before optimizing

Swapping the allocator takes two lines:

#[global_allocator]
static ALLOC: tikv_jemallocator::Jemalloc = tikv_jemallocator::Jemalloc;

Deployment

A Rust backend compiles to a single self-contained binary (fully static if you target musl). The Docker image comes out at 20-50MB versus 200MB+ for a typical Node.js image.

FROM rust:1.75 as builder
WORKDIR /app
COPY . .
RUN cargo build --release

FROM debian:bookworm-slim
COPY --from=builder /app/target/release/dapp-backend /usr/local/bin/
CMD ["dapp-backend"]

For production use distroless images (gcr.io/distroless/cc) — minimal attack surface.
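With distroless, the final stage of the Dockerfile above changes to something like this (a sketch; the cc variant ships the glibc runtime that a default Rust build links against):

```dockerfile
FROM rust:1.75 AS builder
WORKDIR /app
COPY . .
RUN cargo build --release

# No shell, no package manager: minimal attack surface
FROM gcr.io/distroless/cc-debian12
COPY --from=builder /app/target/release/dapp-backend /
CMD ["/dapp-backend"]
```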