Automatic Airdrop Claim System Development

We design and develop full-cycle blockchain solutions: from smart contract architecture to launching DeFi protocols, NFT marketplaces and crypto exchanges. Security audits, tokenomics, integration with existing infrastructure.

Retroactive Airdrop System Development

A retroactive airdrop is one of the most effective mechanisms for distributing tokens to real protocol users. The idea is simple: reward those who used the product before the token existed. Uniswap distributed UNI in September 2020: every address that had ever swapped received 400 UNI. dYdX, Optimism, Arbitrum, and ENS all built similar mechanics, with varying complexity of eligibility criteria.

The implementation looks simple only at first glance. Behind it stands non-trivial infrastructure: on-chain data collection and processing, snapshot systems, Sybil-activity checks, Merkle tree distribution, and a claim UI. Each of these stages can become a bottleneck or a point of vulnerability.

Data Collection and Snapshot

Data Sources

The foundation of a retroactive airdrop is historical on-chain data. You need to answer: who did what, when, and in what volume? Several approaches are used in practice.

The Graph: indexing contract events via a subgraph. If the protocol already has a subgraph, historical events (Swap, Deposit, Borrow) are accessible via GraphQL. The catch: a subgraph only indexes from its configured start block. If the subgraph was set up long after the protocol launched, early events may be unavailable.

Direct RPC indexing: your own script that iterates over blocks via eth_getLogs and collects events. It is slow (Ethereum is at ~19M blocks at the time of writing) and requires an archive node or a paid RPC (Alchemy, or Infura with archive access), but you get full control over the data.

Dune Analytics: SQL queries over indexed Ethereum data. Fast for exploration, but CSV export is limited for large datasets. Good for prototyping criteria.

For a production airdrop covering several thousand addresses, we recommend building your own indexer. Stack: TypeScript + viem + PostgreSQL. Pull events in batches of 2,000 blocks, store them in the database, and build aggregations on top.
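
The batching logic can be sketched in pure TypeScript. `fetchLogs` and `save` below are hypothetical stand-ins for a viem `getLogs` call and a PostgreSQL insert; the point of the sketch is the range splitting:

```typescript
// Split [from, to] into inclusive windows of `size` blocks so each
// eth_getLogs request stays under typical provider limits.
type Range = { fromBlock: bigint; toBlock: bigint };

function chunkRanges(from: bigint, to: bigint, size: bigint): Range[] {
  const ranges: Range[] = [];
  for (let start = from; start <= to; start += size) {
    const end = start + size - 1n < to ? start + size - 1n : to;
    ranges.push({ fromBlock: start, toBlock: end });
  }
  return ranges;
}

// Generic driver: `fetchLogs` is a placeholder for client.getLogs(...)
// and `save` for a database insert; both are assumptions, not a real API.
async function indexEvents<T>(
  from: bigint,
  to: bigint,
  fetchLogs: (r: Range) => Promise<T[]>,
  save: (logs: T[]) => Promise<void>,
): Promise<number> {
  let total = 0;
  for (const r of chunkRanges(from, to, 2000n)) {
    const logs = await fetchLogs(r);
    await save(logs);
    total += logs.length;
  }
  return total;
}
```

Keeping the fetcher abstract makes the batching loop trivially unit-testable without hitting an RPC endpoint.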

Eligibility Criteria

The criteria determine who receives tokens and how much. Typical approaches:

Action               Weight               Example
Transaction count    Base                 ≥3 transactions = eligible
Trading volume       Linear/logarithmic   $1,000 volume = 1 point
Activity timing      Time windows         Active in 3+ different months
Early user           Bonus                Before block X = 2x
Liquidity provision  Volume × time        TVL × days

Logarithmic volume scaling prevents whale dominance: score = log10(volume_usd + 1). Without it, the top 10 addresses can end up with half of the airdrop.
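
A minimal sketch of the scoring, assuming volumes are already aggregated in USD per address (`volumeScore` and `allocate` are illustrative names, not part of any library):

```typescript
// score = log10(volume_usd + 1): a 10,000x gap in volume collapses
// to roughly a 2.3x gap in score.
function volumeScore(volumeUsd: number): number {
  return Math.log10(volumeUsd + 1);
}

// Distribute a fixed token budget pro rata to scores.
function allocate(volumes: number[], totalTokens: number): number[] {
  const scores = volumes.map(volumeScore);
  const sum = scores.reduce((a, b) => a + b, 0);
  return scores.map((s) => (s / sum) * totalTokens);
}
```

With linear weighting, a $10M whale would receive 10,000x the allocation of a $1k user; under the log scale the ratio drops to about 7:3.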

Anti-Sybil Protection

Sybil Attack Problem

A Sybil attack is the creation of multiple addresses to claim a larger amount of tokens. If the criterion is "1 transaction = eligible", an attacker creates 1,000 wallets and makes one transaction from each. This is a real problem: in the Optimism airdrop, roughly 17% of initially eligible addresses were filtered out as Sybils.

Detection Methods

Address funding. Addresses in a Sybil cluster typically receive ETH for gas from a single source. The funding graph is built as a tree: the root is a CEX withdrawal address or a known wallet, the leaves are the Sybil addresses. If N addresses received ETH from one address and all N performed similar actions at similar times, that is a cluster.
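
The grouping step can be sketched as follows; `FundingEdge` is a hypothetical row shape coming out of the indexer, and the size threshold is a tunable heuristic:

```typescript
// One edge per "first ETH transfer into a candidate address".
type FundingEdge = { funder: string; funded: string };

// Group candidate addresses by the address that funded them; keep only
// groups at or above `minClusterSize` for review as likely Sybil clusters.
function fundingClusters(
  edges: FundingEdge[],
  minClusterSize: number,
): Map<string, string[]> {
  const byFunder = new Map<string, string[]>();
  for (const e of edges) {
    const list = byFunder.get(e.funder) ?? [];
    list.push(e.funded);
    byFunder.set(e.funder, list);
  }
  const flagged = new Map<string, string[]>();
  for (const [funder, addrs] of byFunder) {
    if (addrs.length >= minClusterSize) flagged.set(funder, addrs);
  }
  return flagged;
}
```

Flagged clusters are best treated as candidates for review rather than auto-rejected: exchanges and bridges also fund many unrelated addresses.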

Temporal patterns. A Sybil script creates transactions within a specific time window (while the script runs). A cluster of 50 addresses that all made their first transaction within 10 minutes is suspicious.
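
A sliding-window check over first-transaction timestamps makes this concrete (a sketch, not a library API):

```typescript
// Given first-transaction timestamps (unix seconds) for a cluster,
// return the largest number that fall inside any window of
// `windowSec` seconds. A high count supports the Sybil verdict.
function maxInWindow(timestamps: number[], windowSec = 600): number {
  const ts = [...timestamps].sort((a, b) => a - b);
  let best = 0;
  let lo = 0;
  for (let hi = 0; hi < ts.length; hi++) {
    // shrink the window from the left until it spans <= windowSec
    while (ts[hi] - ts[lo] > windowSec) lo++;
    best = Math.max(best, hi - lo + 1);
  }
  return best;
}
```

Combined with the funding-graph signal, a cluster where most first transactions land in one 10-minute window is a strong Sybil indicator.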

On-chain identity. An ENS name, a Lens Protocol profile, or a Gitcoin Passport score are indicators of a real user. An address with an ENS name is almost never a Sybil.

Minimum ETH balance. Wallets drained to zero right after the airdrop claim are a Sybil sign. Adding a minimum balance requirement (e.g. 0.001 ETH) at snapshot time filters out most fake addresses.

Merkle Tree Distribution

Why Merkle Tree

The naive approach is to store the list of eligible addresses and amounts on-chain. At 100,000 addresses that is ~3 MB of on-chain data, and the deployment cost would be astronomical. A Merkle tree solves this: only the tree root (32 bytes) is stored in the contract, and the user provides a proof at claim time.

A proof is an array of hashes that, together with the address and amount, allows the tree root to be recomputed. For N leaves the proof size is O(log N): at 1M users a proof contains ~20 elements of 32 bytes each, i.e. 640 bytes of calldata.

Implementation

The Merkle tree is built off-chain from a list of [address, amount] pairs. The standard approach is OpenZeppelin's MerkleProof library with keccak256 hashing:

require(!claimed[account], "Already claimed");
// leaf = keccak256(abi.encode(account, amount)), hashed a second time:
// double hashing prevents a second-preimage attack
bytes32 leaf = keccak256(bytes.concat(keccak256(abi.encode(account, amount))));
require(MerkleProof.verify(proof, merkleRoot, leaf), "Invalid proof");
claimed[account] = true;
token.transfer(account, amount);
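
The off-chain side (tree construction and proof generation) can be sketched like this. To keep the example self-contained, Node's built-in sha256 stands in for keccak256 and a simple string encoding stands in for abi.encode; production code would use keccak256 (e.g. from viem) and OpenZeppelin's merkle-tree tooling:

```typescript
import { createHash } from "node:crypto";

// sha256 as a stand-in for keccak256 (assumption for self-containment).
function h(data: Buffer): Buffer {
  return createHash("sha256").update(data).digest();
}

// Double-hashed leaf, mirroring the on-chain
// keccak256(bytes.concat(keccak256(abi.encode(account, amount)))).
// The string encoding here is illustrative, not abi.encode.
function leaf(account: string, amount: bigint): Buffer {
  return h(h(Buffer.from(`${account.toLowerCase()}:${amount}`)));
}

// Commutative pair hash: children are sorted before hashing so the
// verifier never needs to know left/right order (OpenZeppelin convention).
function hashPair(a: Buffer, b: Buffer): Buffer {
  return Buffer.compare(a, b) <= 0
    ? h(Buffer.concat([a, b]))
    : h(Buffer.concat([b, a]));
}

// Build all levels bottom-up; levels[0] = leaves, last level = [root].
// An unpaired last node is promoted to the next level unchanged.
function buildTree(leaves: Buffer[]): Buffer[][] {
  const levels = [leaves];
  while (levels[levels.length - 1].length > 1) {
    const prev = levels[levels.length - 1];
    const next: Buffer[] = [];
    for (let i = 0; i < prev.length; i += 2) {
      next.push(i + 1 < prev.length ? hashPair(prev[i], prev[i + 1]) : prev[i]);
    }
    levels.push(next);
  }
  return levels;
}

// Proof for the leaf at `index`: its sibling at every level.
function getProof(levels: Buffer[][], index: number): Buffer[] {
  const proof: Buffer[] = [];
  for (let lvl = 0; lvl < levels.length - 1; lvl++) {
    const sibling = index ^ 1;
    if (sibling < levels[lvl].length) proof.push(levels[lvl][sibling]);
    index >>= 1;
  }
  return proof;
}

// Recompute the root from a leaf and its proof; compare to the stored root.
function verify(proof: Buffer[], root: Buffer, l: Buffer): boolean {
  return proof.reduce((acc, p) => hashPair(acc, p), l).equals(root);
}
```

The generated proofs are what gets published (e.g. as a JSON file on IPFS) for the claim UI to pass into the contract.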

The claim contract stores a claimed[address] => bool mapping to prevent double claims. After a successful claim, tokens are transferred from the contract to the user.

Claim time window. A retroactive airdrop should have a deadline: 3-6 months is sufficient. Unclaimed tokens are returned to the treasury or burned, which prevents allocations from hanging around forever.

Frontend and UX

A minimal claim UI: connect wallet → check eligibility (a request to an off-chain API, or compute the proof on the client) → display the amount → send the claim transaction. Proofs can be stored on IPFS (as a public JSON file) or generated on a server.

Important: don't show amounts before the public announcement. An eligibility check via an API (without revealing the amount) lets users verify their status without exposing the full list before launch.

Stack and Timeline

Smart contract (Solidity + Foundry), off-chain indexer (TypeScript + PostgreSQL), Merkle tree generator, and claim UI (React + wagmi).

An MVP for a simple airdrop with ready-made data takes 2-3 weeks. A full system with its own indexer, Sybil detection, and a claim UI takes 5-8 weeks.