by Daniel Molina & Jordi Herrera-Joancomartí — November 2025

The challenge of decentralization security

When Ethereum transitioned from Proof of Work to Proof of Stake (PoS), it didn’t just replace miners with validators—it also redefined what “trust” means in a decentralized network. Validators are now responsible for securing the blockchain by staking 32 ETH and participating in block proposal and attestation.

While this model dramatically improves energy efficiency, it shifts part of the network’s security burden onto individuals. Each validator must keep their private keys safe while remaining online and responsive to consensus duties. The resulting question is fundamental: how much risk does a user assume when they decide to stake their own funds?

Among the many staking paradigms—solo, pooled, custodial, and non-custodial—the last one has grown in popularity because it promises the best of both worlds: users retain ownership of their funds while delegating the technical complexities of validator operation. Yet, as our research shows, this setup comes with subtle but serious risks.

Inside non-custodial staking

In non-custodial staking, users interact with a service that manages the validator keys on their behalf, while the funds remain locked in Ethereum’s official deposit contract under their own withdrawal address. This separation preserves ownership—but also creates a shared-trust model between user and service.

Two common variants exist:

  • Full key control, where the user generates and provides their own validator keys.
  • Delegated key control, where the staking service generates the keys and never discloses them.

The distinction is crucial. In the second case, even though users never “hand over” their funds, they still rely on the service not to misuse the validator keys. If those keys were leaked or used maliciously, the validator could be slashed—losing part of its stake as a penalty.

Modeling the risk: what happens when keys leak

Our paper, “A Risk Analysis of Non-custodial Staking in Ethereum,” presented at BCCA 2025, formalizes how validator private key exposure can translate into measurable economic loss. We modeled two main attack vectors:

  1. Proposer-based attacks, where a compromised validator proposes two conflicting blocks in the same slot, resulting in slashing.

To assess the practical feasibility of such proposer-based attacks, we simulated the expected waiting time until two specific conditions are met: a victim validator is selected to propose the block at slot n, and one of the attacker’s own validators is selected as proposer for slot n + 1. The results (Figure 1) show the expected waiting time rising steeply as the number of attacker-controlled validators decreases.

Even assuming the attacker has compromised 14,600 victim validators, a figure comparable to the largest non-custodial providers, the attack would require, on average, 28.7 years before the right proposer combination occurs if the attacker owns only a single validator of their own. Even with 1,000 attacker validators, the expected interval only drops to around 11 days, still far from a practical or repeatable strategy.

Figure 1. Log-scale expected waiting time for a successful proposer-based slashing attack, depending on the number of attacker and victim validators. The plot shows that even with thousands of compromised validators, the probability of aligning proposer roles remains extremely low—making the attack economically and temporally infeasible.
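As a rough illustration (not the paper’s exact simulation), the order of magnitude of these waiting times can be reproduced in a few lines of Python. The figure of roughly 1.05 million active validators, uniform pseudorandom proposer selection, and 12-second slots are assumptions made here for the estimate.

# Back-of-the-envelope estimate of the expected waiting time until a victim
# validator proposes slot n and an attacker validator proposes slot n+1.
# Assumptions (not from the paper): ~1,050,000 active validators, uniform
# pseudorandom proposer selection, 12-second slots.

SECONDS_PER_SLOT = 12
TOTAL_VALIDATORS = 1_050_000  # assumed size of the active validator set


def expected_wait_years(n_victims: int, n_attackers: int,
                        total: int = TOTAL_VALIDATORS) -> float:
    """Expected years until a victim proposes slot n and an attacker slot n+1."""
    # Probability that a given consecutive slot pair has the required alignment.
    p = (n_victims / total) * (n_attackers / total)
    expected_slots = 1 / p  # geometric waiting time, in slots
    return expected_slots * SECONDS_PER_SLOT / (365.25 * 24 * 3600)


if __name__ == "__main__":
    # Roughly matches the figures quoted above: ~28.7 years with a single
    # attacker validator, on the order of days with 1,000 of them.
    print(f"1 attacker:      {expected_wait_years(14_600, 1):6.1f} years")
    print(f"1,000 attackers: {expected_wait_years(14_600, 1_000) * 365.25:6.1f} days")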

  2. Attester-based attacks, where a validator signs conflicting attestations during the same epoch.

A similar analysis was performed for attester-based coordination attacks, where success requires that several compromised validators belong to the same attestation committee during a given epoch. Since each committee contains 128 validators chosen pseudorandomly among more than one million, the probability of concentrating even a small group of compromised nodes in one committee is extremely low.

Figure 2 illustrates this probability as a function of the number of compromised validators (n_v) and the desired number of colluding attesters (k). The curve drops steeply: for n_v = 14,600, the probability that 20 victims end up in the same committee is only 1.7×10⁻¹¹, and even for 12 victims it remains below 0.3%. This confirms that large-scale attestation attacks are not only unprofitable but also statistically implausible under current network conditions.

Figure 2. Probability that k victim validators fall into the same attestation committee for an attester-based coordinated attack. The distribution reveals that, under realistic conditions, the chance of grouping enough victims within one committee is vanishingly small, reinforcing the impracticality of large-scale attestation attacks.
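The same order of magnitude can be checked with a simple hypergeometric model. The committee size of 128 comes from the text above; the roughly 1.05 million active validators, the 2,048 committees per epoch (32 slots × 64 committees per slot), and the union bound over committees are assumptions of this sketch, not the paper’s exact computation.

# Probability that at least k compromised validators land in the same
# attestation committee somewhere within one epoch.
# Assumptions (not from the paper): committee size 128, ~1,050,000 active
# validators, 64 committees per slot x 32 slots per epoch, and a
# hypergeometric model for committee membership.

from scipy.stats import hypergeom

TOTAL_VALIDATORS = 1_050_000
COMMITTEE_SIZE = 128
COMMITTEES_PER_EPOCH = 32 * 64


def epoch_collusion_probability(n_compromised: int, k: int) -> float:
    """Approximate chance that some committee in an epoch holds >= k victims."""
    # P(a single committee contains at least k compromised validators)
    p_single = hypergeom.sf(k - 1, TOTAL_VALIDATORS, n_compromised, COMMITTEE_SIZE)
    # Union bound over all committees in the epoch (tight when p_single is tiny).
    return min(1.0, COMMITTEES_PER_EPOCH * p_single)


if __name__ == "__main__":
    for k in (12, 20):
        p = epoch_collusion_probability(14_600, k)
        print(f"k = {k:2d}: P ≈ {p:.2e}")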

In both cases, the immediate loss per validator is roughly 1 ETH (1/32 of the staked balance). However, when many validators are attacked simultaneously, Ethereum’s correlation penalty multiplies the total loss according to how many others are slashed in the same 18-day window.

We simulated a scenario in which a large non-custodial provider (e.g., one controlling ~14,000 validators) turns malicious or is compromised. In that case, the correlation penalty could raise the average total loss to approximately 2.34 ETH per validator in extreme coordinated attacks, a substantial risk for small-scale stakers relying on third-party services.
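A minimal sketch of this calculation, using the consensus-spec slashing formulas with Bellatrix-era constants (an initial penalty of one thirty-second of the effective balance and a proportional slashing multiplier of 3) and an assumed total stake of roughly 33.6 million ETH, lands in the same ballpark. It ignores the additional attestation penalties accrued before withdrawal, which is why it comes out slightly below the 2.34 ETH average.

# Illustration of Ethereum's slashing penalties when many validators are
# slashed within the same ~18-day window, following the consensus-spec
# formulas with Bellatrix-era constants. The total-stake figure is an
# assumption, and extra attestation penalties before withdrawal are ignored.

EFFECTIVE_BALANCE = 32.0                     # ETH per validator
MIN_SLASHING_PENALTY_QUOTIENT = 32           # initial penalty divisor (Bellatrix)
PROPORTIONAL_SLASHING_MULTIPLIER = 3         # correlation multiplier (Bellatrix)
TOTAL_STAKE = 1_050_000 * 32.0               # assumed total effective balance (ETH)


def slashing_loss(validators_slashed_together: int) -> float:
    """Approximate total ETH lost per validator (initial + correlation penalty)."""
    initial = EFFECTIVE_BALANCE / MIN_SLASHING_PENALTY_QUOTIENT  # ~1 ETH
    slashed_balance = validators_slashed_together * EFFECTIVE_BALANCE
    correlation = EFFECTIVE_BALANCE * min(
        PROPORTIONAL_SLASHING_MULTIPLIER * slashed_balance, TOTAL_STAKE
    ) / TOTAL_STAKE
    return initial + correlation


if __name__ == "__main__":
    print(f"Single validator slashed: {slashing_loss(1):.2f} ETH")
    print(f"14,000 slashed together:  {slashing_loss(14_000):.2f} ETH")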

Rational incentives and the economics of honesty

Yet, the key insight of our analysis is that rational attackers are economically disincentivized from exploiting these vulnerabilities.

Non-custodial staking services make money by taking small commissions, typically between 3% and 8% of Maximal Extractable Value (MEV) or priority fees, without ever touching users’ withdrawal addresses. Based on public MEV data (from platforms like eigenphi.io), a service operating around 14,000 validators can expect monthly revenues between $1,800 and $7,000, depending on network activity.
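The arithmetic behind that range is straightforward. In the sketch below, the per-validator MEV figures are hypothetical placeholders chosen only to land inside the reported band; the 3% to 8% commission range is the only input taken from the text.

# Rough commission-revenue estimate for a non-custodial operator.
# The per-validator MEV/priority-fee figures below are hypothetical
# placeholders chosen to fall within the revenue range quoted above; only
# the 3%-8% commission band comes from the text.

NUM_VALIDATORS = 14_000


def monthly_revenue_usd(mev_per_validator_usd: float, commission: float) -> float:
    """Operator revenue = validators x monthly MEV per validator x commission."""
    return NUM_VALIDATORS * mev_per_validator_usd * commission


if __name__ == "__main__":
    # Quiet month with a low commission vs. busy month with a high commission
    # (both per-validator MEV inputs are assumed values).
    print(f"Low:  ${monthly_revenue_usd(4.3, 0.03):,.0f} / month")
    print(f"High: ${monthly_revenue_usd(6.3, 0.08):,.0f} / month")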

By contrast, even a perfectly timed coordinated slashing attack would yield, at best, a one-time profit of less than one ETH for the attacker, while irreversibly destroying the service’s reputation and its ongoing revenue stream. The rational strategy, therefore, is to remain honest.

Our conclusion aligns with the broader economic principle behind Ethereum’s security model: trustless systems work best when incentives make dishonesty irrational.

Practical tools for safer staking

The study also addresses a less-discussed but very practical problem: what can users do if their non-custodial service disappears?

Generating or verifying voluntary exit messages, the signed messages that allow a validator to stop participating and later withdraw its stake, is technically complex. Existing tools require users to deploy and sync a full Ethereum node, which is unrealistic for most stakers.

To mitigate this, we developed two lightweight Python scripts that make the process accessible:

  • generate_exit_message_holesky.py — creates a valid exit message from a simple mnemonic phrase and validator index.
  • validate_exit_message_holesky.py — checks the cryptographic correctness of an exit message using only the public validator key.

Both scripts are open source and available on GitHub. They are designed to let users with minimal technical background retain control over their validator lifecycle even if the staking service becomes unavailable, and they are parameterized to work on both the Holesky testnet and mainnet.
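For context, once a signed exit message exists, it can be broadcast through any synced beacon node using the standard Beacon API. The snippet below is not part of the authors’ scripts; the endpoint and JSON shape come from the public Beacon API specification, and the node URL and field values are placeholders.

# Broadcasting an already-signed voluntary exit through the standard Beacon
# API (POST /eth/v1/beacon/pool/voluntary_exits). This is not one of the
# authors' scripts; the node URL and the field values are placeholders.

import requests

BEACON_NODE = "http://localhost:5052"  # any synced beacon node exposing the API

signed_exit = {
    "message": {
        "epoch": "194048",            # epoch at/after which the exit becomes valid
        "validator_index": "123456",  # index of the exiting validator
    },
    "signature": "0x" + "00" * 96,    # placeholder for the 96-byte BLS signature
}

resp = requests.post(
    f"{BEACON_NODE}/eth/v1/beacon/pool/voluntary_exits",
    json=signed_exit,
    timeout=10,
)
resp.raise_for_status()
print("Voluntary exit accepted into the pool")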

Implications for the Ethereum ecosystem

Our findings contribute to a broader conversation about trust, risk, and autonomy in decentralized systems. As Ethereum evolves with upgrades such as EIP-7002, which allows validator exits to be triggered directly from the withdrawal address on the execution layer, many of today’s usability and security concerns could fade.

However, our work emphasizes that technical soundness alone does not guarantee safety—users must understand the economic dynamics and key-management responsibilities inherent in staking. Non-custodial services can indeed reduce centralization risks, but they also concentrate a new form of systemic risk: validator key correlation.

Balancing these forces—autonomy vs. complexity, decentralization vs. convenience—remains a central challenge for the next generation of staking infrastructure.

Conclusion

Non-custodial staking embodies Ethereum’s ideal of trust minimization but still demands careful attention to how trust is distributed. Our analysis shows that while malicious scenarios are technically possible, they are economically implausible under rational conditions.

By quantifying these risks and releasing practical open-source tools, we hope to make Ethereum staking more transparent, secure, and accessible—allowing users to participate confidently in the world’s largest decentralized consensus network.

Further reading:
Code repository: github.com/dmolinac/ethereum_exit_messages

Paper: https://ieeexplore.ieee.org/document/11229608