
Conversation

dknopik
Member

@dknopik dknopik commented Jul 28, 2025

Proposed Changes

Replace `poll_whilst_waiting_for_genesis` with a simple sleep until genesis. This function will never be useful in production, as SSV requires an already existing network to bootstrap the SSV contract (whose address is then supplied to Anchor). The only argument against removing `wait_for_genesis` altogether is that someone might pre-deploy the contracts in a local devnet and spin up the node before genesis has occurred. However, that scenario does not require the extensive logging offered by `poll_whilst_waiting_for_genesis`.
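Concretely, the replacement amounts to something like the following sketch (shape hypothetical; the actual Anchor code uses tokio's async `sleep` and `interval` rather than a blocking thread sleep):

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

/// Hypothetical sketch: sleep until `genesis_time` (Unix seconds), logging
/// progress periodically. Anchor itself does this asynchronously with tokio.
fn wait_for_genesis(genesis_time: u64) -> Result<(), String> {
    const LOG_EVERY: Duration = Duration::from_secs(30);
    loop {
        let now = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .map_err(|e| format!("Unable to read system time: {e}"))?
            .as_secs();
        // Saturating subtraction: zero if genesis has already passed.
        let remaining = genesis_time.saturating_sub(now);
        if remaining == 0 {
            return Ok(());
        }
        eprintln!("Waiting for genesis, seconds_to_wait = {remaining}");
        std::thread::sleep(LOG_EVERY.min(Duration::from_secs(remaining)));
    }
}

fn main() {
    // Genesis in the past returns immediately.
    assert!(wait_for_genesis(0).is_ok());
}
```
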

Additional Info

This is useful for updating dependencies in a later step, as `get_lighthouse_staking` was removed upstream (see #452).

@dknopik
Member Author

dknopik commented Jul 28, 2025

Meta: I was unsure what conventional commit type to pick for this one. feat feels wrong because we are removing a feature.

@dknopik dknopik added the ready-for-review This PR is ready to be reviewed label Jul 28, 2025
@diegomrsantos
Member

The only reason against removing wait_for_genesis altogether is that someone might pre-deploy the contracts in a local devnet and spin up the node before genesis has occurred

IMO, we should have code only for the current needs. If something is necessary in the future, we add it. Isn't the network already running anyway?

@dknopik
Member Author

dknopik commented Jul 28, 2025

IMO, we should have code only for the current needs. If something is necessary in the future, we add it. Isn't the network already running anyway?

The approach I described is already valid for constructing devnets - in particular, it can be used for ssv-mini as soon as there is development capacity to implement that approach there. Artificially introducing a blocker for that now seems counter-productive.

Isn't the network already running anyway?

In the scenario I described, genesis might lie in the future.

-    beacon_nodes: &BeaconNodeFallback<SystemTimeSlotClock>,
-    genesis_time: u64,
-) -> Result<(), String> {
+async fn wait_for_genesis(genesis_time: u64) -> Result<(), String> {
Member

@diegomrsantos diegomrsantos Jul 28, 2025

how about something like

use chrono::{DateTime, Utc};
use tokio::time::{interval, Duration};
use tracing::{info, debug};

/// Wait until `genesis_at`, emitting a progress line every 30 s.
/// This future is **cancellable** simply by dropping it (handled by TaskExecutor).
pub async fn wait_for_genesis(genesis_at: DateTime<Utc>) {
    // Restarted after genesis?
    if Utc::now() >= genesis_at {
        info!(?genesis_at, "Genesis already passed – skipping wait");
        return;
    }

    let mut ticker = interval(Duration::from_secs(30));

    loop {
        ticker.tick().await;                       // <-- only awaited future
        let now = Utc::now();
        if now >= genesis_at {
            info!(?genesis_at, "Genesis reached – continuing startup");
            break;
        }
        debug!(
            remaining = (genesis_at - now).num_seconds(),
            "Waiting for genesis…"
        );
    }
}

Member

Utc::now isn't necessary, although it could be a better option. But logging from time to time while waiting seems a good idea.

Member Author

Pushed a suggestion that avoids chrono, wdyt?

Member

Looks good. I suggested chrono because it seems odd to have to handle an error when getting the current time.
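To make the trade-off concrete: with std alone, reading the current Unix time is fallible, which is what motivated the chrono suggestion (chrono's `Utc::now()` is infallible). A minimal illustration:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

fn main() {
    // `duration_since` returns a Result because the system clock could, in
    // principle, be set to a time before the Unix epoch.
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock set before the Unix epoch")
        .as_secs();
    println!("seconds since epoch: {now}");
}
```
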

@diegomrsantos diegomrsantos requested a review from Copilot August 5, 2025 08:10
Contributor

@Copilot Copilot AI left a comment

Pull Request Overview

This PR refactors the pre-genesis waiting logic by removing the complex polling mechanism and replacing it with a simple sleep until genesis time. The change removes the poll_whilst_waiting_for_genesis function that was checking beacon node staking status, as this functionality is not needed in production environments where SSV requires an existing network.

  • Replaces polling-based wait with a simple sleep-based approach
  • Removes the poll_whilst_waiting_for_genesis function and related constants
  • Simplifies the wait_for_genesis function to use a sleep with periodic logging

Comment on lines 775 to 778
    _ = log_interval.tick() => info!(
        seconds_to_wait = (genesis_time - now).as_secs(),
        "Waiting for genesis",
    ),
Copilot AI Aug 5, 2025

The variable now is calculated outside the loop at line 752 and never updated. This will cause the logged seconds_to_wait to remain constant and potentially become inaccurate as time passes. You should recalculate now inside the loop or use a different approach to get the current remaining time.

Suggested change

-    _ = log_interval.tick() => info!(
-        seconds_to_wait = (genesis_time - now).as_secs(),
-        "Waiting for genesis",
-    ),
+    _ = log_interval.tick() => {
+        let now = SystemTime::now()
+            .duration_since(UNIX_EPOCH)
+            .expect("Unable to read system time");
+        info!(
+            seconds_to_wait = (genesis_time - now).as_secs(),
+            "Waiting for genesis",
+        );
+    },


Member Author

oh thanks mate

@dknopik dknopik added the v0.3.0 Third and final testnet-only release label Aug 5, 2025
//
// If the validator client starts before genesis, it will get errors from
// the slot clock.
let now = get_now()?;
Member

Do we need to create it, or can we use the fn directly?

Member Author

done

diegomrsantos
diegomrsantos previously approved these changes Aug 5, 2025
-    tokio::select! {
-        result = poll_whilst_waiting_for_genesis(beacon_nodes, genesis_time) => result?,
-        () = sleep(genesis_time - now) => ()
+    let genesis_sleep = sleep(genesis_time - get_now()?);
Member

Can we return this result or zero if it's negative? Then we don't need the if/else, and we can have a log "Genesis occurred" inside the loop. Wdyt?

Member Author

nice idea. wdyt?

    biased;
    _ = &mut genesis_sleep => break,
    _ = log_interval.tick() => {
        let seconds_to_wait = genesis_sleep.deadline().duration_since(Instant::now()).as_secs();
Member

Sorry for another one, could we use Instant::now() for the initial now too? Also, do you think we need a log for when genesis happens?

Member Author

I do not think so, as we need some association with the system time. Instant is opaque; we can't really derive the current system time from it. It works further below because we piggyback off the sleep, which has an Instant as its deadline.

I do not care strongly about the log.
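The distinction being made here can be shown with std types directly: `Instant` supports only relative comparisons against other `Instant`s, while `SystemTime` is anchored to the Unix epoch, which is what the initial `genesis_time - now` computation needs. A small sketch (values illustrative):

```rust
use std::time::{Duration, Instant, SystemTime, UNIX_EPOCH};

fn main() {
    // An Instant is opaque: it can only be measured against other Instants,
    // e.g. to compute how far away a sleep's deadline is.
    let deadline = Instant::now() + Duration::from_secs(30);
    let seconds_to_wait = deadline.duration_since(Instant::now()).as_secs();
    assert!(seconds_to_wait <= 30);

    // A SystemTime can be expressed relative to the Unix epoch, so it can be
    // compared against a genesis timestamp given in epoch seconds.
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock set before the Unix epoch");
    println!("wait: {seconds_to_wait}s, now: {}s", now.as_secs());
}
```
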

Member

@diegomrsantos diegomrsantos left a comment

Thanks a lot!

@dknopik dknopik added ready-for-merge and removed ready-for-review This PR is ready to be reviewed labels Aug 5, 2025
mergify bot added a commit that referenced this pull request Aug 5, 2025
@mergify mergify bot merged commit 1957500 into sigp:unstable Aug 5, 2025
16 checks passed
diegomrsantos pushed a commit to diegomrsantos/anchor that referenced this pull request Sep 17, 2025
jking-aus pushed a commit to jking-aus/anchor that referenced this pull request Oct 8, 2025

Labels

ready-for-merge v0.3.0 Third and final testnet-only release
