Clean up doc links and enforce them in CI #848

Merged: 6 commits on Mar 18, 2021
31 changes: 21 additions & 10 deletions background-processor/src/lib.rs
@@ -1,3 +1,11 @@
//! Utilities that take care of tasks that (1) need to happen periodically to keep Rust-Lightning
//! running properly, and (2) either can or should be run in the background. See docs for
//! [`BackgroundProcessor`] for more details on the nitty-gritty.

#![deny(broken_intra_doc_links)]
#![deny(missing_docs)]
#![deny(unsafe_code)]

#[macro_use] extern crate lightning;

use lightning::chain;
@@ -40,18 +48,21 @@ impl BackgroundProcessor {
/// Start a background thread that takes care of responsibilities enumerated in the top-level
/// documentation.
///
/// If `persist_manager` returns an error, then this thread will return said error (and `start()`
/// will need to be called again to restart the `BackgroundProcessor`). Users should wait on
/// [`thread_handle`]'s `join()` method to be able to tell if and when an error is returned, or
/// implement `persist_manager` such that an error is never returned to the `BackgroundProcessor`
/// If `persist_manager` returns an error, then this thread will return said error (and
/// `start()` will need to be called again to restart the `BackgroundProcessor`). Users should
/// wait on [`thread_handle`]'s `join()` method to be able to tell if and when an error is
/// returned, or implement `persist_manager` such that an error is never returned to the
/// `BackgroundProcessor`
///
/// `persist_manager` is responsible for writing out the `ChannelManager` to disk, and/or uploading
/// to one or more backup services. See [`ChannelManager::write`] for writing out a `ChannelManager`.
/// See [`FilesystemPersister::persist_manager`] for Rust-Lightning's provided implementation.
/// `persist_manager` is responsible for writing out the [`ChannelManager`] to disk, and/or
/// uploading to one or more backup services. See [`ChannelManager::write`] for writing out a
/// [`ChannelManager`]. See [`FilesystemPersister::persist_manager`] for Rust-Lightning's
/// provided implementation.
///
/// [`thread_handle`]: struct.BackgroundProcessor.html#structfield.thread_handle
/// [`ChannelManager::write`]: ../lightning/ln/channelmanager/struct.ChannelManager.html#method.write
/// [`FilesystemPersister::persist_manager`]: ../lightning_persister/struct.FilesystemPersister.html#impl
/// [`thread_handle`]: BackgroundProcessor::thread_handle
/// [`ChannelManager`]: lightning::ln::channelmanager::ChannelManager
/// [`ChannelManager::write`]: lightning::ln::channelmanager::ChannelManager#impl-Writeable
/// [`FilesystemPersister::persist_manager`]: lightning_persister::FilesystemPersister::persist_manager
pub fn start<PM, Signer, M, T, K, F, L>(persist_manager: PM, manager: Arc<ChannelManager<Signer, Arc<M>, Arc<T>, Arc<K>, Arc<F>, Arc<L>>>, logger: Arc<L>) -> Self
where Signer: 'static + Sign,
M: 'static + chain::Watch<Signer>,
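As context for the `persist_manager` contract described in the doc comment above, here is a hedged, std-only sketch of the disk-persistence half of that responsibility. The directory layout, file names, and bytes-in interface are assumptions made for illustration; the real callback signature and serialization come from the lightning crates (`ChannelManager::write`, `FilesystemPersister::persist_manager`).

```rust
use std::fs;
use std::io::{self, Write};
use std::path::Path;

// Hypothetical helper: given already-serialized ChannelManager bytes, write
// them to disk via a temp-file-then-rename step so a crash cannot leave a
// truncated manager file behind. Errors simply bubble up, mirroring the
// documented behaviour where a persistence error stops the background thread.
fn persist_manager_bytes(data_dir: &Path, bytes: &[u8]) -> io::Result<()> {
    fs::create_dir_all(data_dir)?;
    let tmp_path = data_dir.join("manager.tmp");
    let final_path = data_dir.join("manager");

    let mut tmp_file = fs::File::create(&tmp_path)?;
    tmp_file.write_all(bytes)?;
    tmp_file.sync_all()?;

    fs::rename(&tmp_path, &final_path)
}
```

A real implementation would be handed the `ChannelManager` itself and serialize it, and could additionally upload the same bytes to one or more backup services, as the doc comment suggests.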
2 changes: 2 additions & 0 deletions ci/check-compiles.sh
@@ -3,4 +3,6 @@ set -e
set -x
echo Testing $(git log -1 --oneline)
cargo check
cargo doc
cargo doc --document-private-items
cd fuzz && cargo check --features=stdin_fuzz
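To illustrate what the new `cargo doc` steps in CI now enforce (a sketch with hypothetical items, not code from this PR): combined with the per-crate `#![deny(broken_intra_doc_links)]` attributes added elsewhere in this change, a doc link that fails to resolve turns the documentation build, and therefore this CI script, into a hard failure instead of a silently dead link.

```rust
#![deny(broken_intra_doc_links)]

/// Loads a [`Config`]. Because `Config` exists, rustdoc resolves the link and
/// `cargo doc` succeeds; removing or renaming `Config` without updating this
/// doc comment would now fail the build rather than leave a broken link.
pub fn load() -> Config {
    Config::default()
}

/// A hypothetical configuration type used only for this illustration.
#[derive(Default)]
pub struct Config {}
```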
3 changes: 3 additions & 0 deletions lightning-block-sync/src/http.rs
@@ -1,3 +1,6 @@
//! Simple HTTP implementation which supports both async and traditional execution environments
//! with minimal dependencies. This is used as the basis for REST and RPC clients.

use chunked_transfer;
use serde_json;

11 changes: 8 additions & 3 deletions lightning-block-sync/src/init.rs
@@ -1,3 +1,6 @@
//! Utilities to assist in the initial sync required to initialize or reload Rust-Lightning objects
//! from disk.

use crate::{BlockSource, BlockSourceResult, Cache, ChainNotifier};
use crate::poll::{ChainPoller, Validate, ValidatedBlockHeader};

@@ -11,6 +14,8 @@ use lightning::chain;
///
/// Upon success, the returned header can be used to initialize [`SpvClient`]. Useful during a fresh
/// start when there are no chain listeners to sync yet.
///
/// [`SpvClient`]: crate::SpvClient
pub async fn validate_best_block_header<B: BlockSource>(block_source: &mut B) ->
BlockSourceResult<ValidatedBlockHeader> {
let (best_block_hash, best_block_height) = block_source.get_best_block().await?;
@@ -113,9 +118,9 @@ BlockSourceResult<ValidatedBlockHeader> {
/// }
/// ```
///
/// [`SpvClient`]: ../struct.SpvClient.html
/// [`ChannelManager`]: ../../lightning/ln/channelmanager/struct.ChannelManager.html
/// [`ChannelMonitor`]: ../../lightning/chain/channelmonitor/struct.ChannelMonitor.html
/// [`SpvClient`]: crate::SpvClient
/// [`ChannelManager`]: lightning::ln::channelmanager::ChannelManager
/// [`ChannelMonitor`]: lightning::chain::channelmonitor::ChannelMonitor
pub async fn synchronize_listeners<B: BlockSource, C: Cache>(
block_source: &mut B,
network: Network,
16 changes: 7 additions & 9 deletions lightning-block-sync/src/lib.rs
@@ -12,9 +12,10 @@
//!
//! Both features support either blocking I/O using `std::net::TcpStream` or, with feature `tokio`,
//! non-blocking I/O using `tokio::net::TcpStream` from inside a Tokio runtime.
//!
//! [`SpvClient`]: struct.SpvClient.html
//! [`BlockSource`]: trait.BlockSource.html

#![deny(broken_intra_doc_links)]
#![deny(missing_docs)]
#![deny(unsafe_code)]

#[cfg(any(feature = "rest-client", feature = "rpc-client"))]
pub mod http;
@@ -69,8 +70,7 @@ pub trait BlockSource : Sync + Send {
/// When polling a block source, [`Poll`] implementations may pass the height to [`get_header`]
/// to allow for a more efficient lookup.
///
/// [`Poll`]: poll/trait.Poll.html
/// [`get_header`]: #tymethod.get_header
/// [`get_header`]: Self::get_header
fn get_best_block<'a>(&'a mut self) -> AsyncBlockSourceResult<(BlockHash, Option<u32>)>;
}

@@ -176,8 +176,6 @@ where L::Target: chain::Listen {
/// Implementations may define how long to retain headers such that it's unlikely they will ever be
/// needed to disconnect a block. In cases where block sources provide access to headers on stale
/// forks reliably, caches may be entirely unnecessary.
///
/// [`ChainNotifier`]: struct.ChainNotifier.html
pub trait Cache {
/// Retrieves the block header keyed by the given block hash.
fn look_up(&self, block_hash: &BlockHash) -> Option<&ValidatedBlockHeader>;
@@ -218,7 +216,7 @@ impl<'a, P: Poll, C: Cache, L: Deref> SpvClient<'a, P, C, L> where L::Target: ch
/// * `header_cache` is used to look up and store headers on the best chain
/// * `chain_listener` is notified of any blocks connected or disconnected
///
/// [`poll_best_tip`]: struct.SpvClient.html#method.poll_best_tip
/// [`poll_best_tip`]: SpvClient::poll_best_tip
pub fn new(
chain_tip: ValidatedBlockHeader,
chain_poller: P,
@@ -273,7 +271,7 @@ impl<'a, P: Poll, C: Cache, L: Deref> SpvClient<'a, P, C, L> where L::Target: ch

/// Notifies [listeners] of blocks that have been connected or disconnected from the chain.
///
/// [listeners]: ../../lightning/chain/trait.Listen.html
/// [listeners]: lightning::chain::Listen
pub struct ChainNotifier<'a, C: Cache, L: Deref> where L::Target: chain::Listen {
/// Cache for looking up headers before fetching from a block source.
header_cache: &'a mut C,
2 changes: 2 additions & 0 deletions lightning-block-sync/src/poll.rs
@@ -1,3 +1,5 @@
//! Adapters that make one or more [`BlockSource`]s simpler to poll for new chain tip transitions.

use crate::{AsyncBlockSourceResult, BlockHeaderData, BlockSource, BlockSourceError, BlockSourceResult};

use bitcoin::blockdata::block::Block;
3 changes: 3 additions & 0 deletions lightning-block-sync/src/rest.rs
@@ -1,3 +1,6 @@
//! Simple REST client implementation which implements [`BlockSource`] against a Bitcoin Core REST
//! endpoint.

use crate::{BlockHeaderData, BlockSource, AsyncBlockSourceResult};
use crate::http::{BinaryResponse, HttpEndpoint, HttpClient, JsonResponse};

3 changes: 3 additions & 0 deletions lightning-block-sync/src/rpc.rs
@@ -1,3 +1,6 @@
//! Simple RPC client implementation which implements [`BlockSource`] against a Bitcoin Core RPC
//! endpoint.

use crate::{BlockHeaderData, BlockSource, AsyncBlockSourceResult};
use crate::http::{HttpClient, HttpEndpoint, JsonResponse};

3 changes: 3 additions & 0 deletions lightning-net-tokio/src/lib.rs
@@ -72,6 +72,9 @@
//! }
//! ```

#![deny(broken_intra_doc_links)]
#![deny(missing_docs)]

use bitcoin::secp256k1::key::PublicKey;

use tokio::net::TcpStream;
6 changes: 6 additions & 0 deletions lightning-persister/src/lib.rs
@@ -1,3 +1,8 @@
//! Utilities that handle persisting Rust-Lightning data to disk via standard filesystem APIs.

#![deny(broken_intra_doc_links)]
#![deny(missing_docs)]

mod util;

extern crate lightning;
@@ -72,6 +77,7 @@ impl FilesystemPersister {
}
}

/// Get the directory which was provided when this persister was initialized.
pub fn get_data_dir(&self) -> String {
self.path_to_channel_data.clone()
}
31 changes: 7 additions & 24 deletions lightning/src/chain/chainmonitor.rs
@@ -13,21 +13,15 @@
//! update [`ChannelMonitor`]s accordingly. If any on-chain events need further processing, it will
//! make those available as [`MonitorEvent`]s to be consumed.
//!
//! `ChainMonitor` is parameterized by an optional chain source, which must implement the
//! [`ChainMonitor`] is parameterized by an optional chain source, which must implement the
//! [`chain::Filter`] trait. This provides a mechanism to signal new relevant outputs back to light
//! clients, such that transactions spending those outputs are included in block data.
//!
//! `ChainMonitor` may be used directly to monitor channels locally or as a part of a distributed
//! setup to monitor channels remotely. In the latter case, a custom `chain::Watch` implementation
//! [`ChainMonitor`] may be used directly to monitor channels locally or as a part of a distributed
//! setup to monitor channels remotely. In the latter case, a custom [`chain::Watch`] implementation
//! would be responsible for routing each update to a remote server and for retrieving monitor
//! events. The remote server would make use of `ChainMonitor` for block processing and for
//! servicing `ChannelMonitor` updates from the client.
//!
//! [`ChainMonitor`]: struct.ChainMonitor.html
//! [`chain::Filter`]: ../trait.Filter.html
//! [`chain::Watch`]: ../trait.Watch.html
//! [`ChannelMonitor`]: ../channelmonitor/struct.ChannelMonitor.html
//! [`MonitorEvent`]: ../channelmonitor/enum.MonitorEvent.html
//! events. The remote server would make use of [`ChainMonitor`] for block processing and for
//! servicing [`ChannelMonitor`] updates from the client.

use bitcoin::blockdata::block::{Block, BlockHeader};

@@ -53,9 +47,8 @@ use std::ops::Deref;
/// or used independently to monitor channels remotely. See the [module-level documentation] for
/// details.
///
/// [`chain::Watch`]: ../trait.Watch.html
/// [`ChannelManager`]: ../../ln/channelmanager/struct.ChannelManager.html
/// [module-level documentation]: index.html
/// [`ChannelManager`]: crate::ln::channelmanager::ChannelManager
/// [module-level documentation]: crate::chain::chainmonitor
pub struct ChainMonitor<ChannelSigner: Sign, C: Deref, T: Deref, F: Deref, L: Deref, P: Deref>
where C::Target: chain::Filter,
T::Target: BroadcasterInterface,
@@ -88,10 +81,6 @@ where C::Target: chain::Filter,
/// calls must not exclude any transactions matching the new outputs nor any in-block
/// descendants of such transactions. It is not necessary to re-fetch the block to obtain
/// updated `txdata`.
///
/// [`ChannelMonitor::block_connected`]: ../channelmonitor/struct.ChannelMonitor.html#method.block_connected
/// [`chain::Watch::release_pending_monitor_events`]: ../trait.Watch.html#tymethod.release_pending_monitor_events
/// [`chain::Filter`]: ../trait.Filter.html
pub fn block_connected(&self, header: &BlockHeader, txdata: &TransactionData, height: u32) {
let monitors = self.monitors.read().unwrap();
for monitor in monitors.values() {
@@ -110,8 +99,6 @@ where C::Target: chain::Filter,
/// Dispatches to per-channel monitors, which are responsible for updating their on-chain view
/// of a channel based on the disconnected block. See [`ChannelMonitor::block_disconnected`] for
/// details.
///
/// [`ChannelMonitor::block_disconnected`]: ../channelmonitor/struct.ChannelMonitor.html#method.block_disconnected
pub fn block_disconnected(&self, header: &BlockHeader, disconnected_height: u32) {
let monitors = self.monitors.read().unwrap();
for monitor in monitors.values() {
@@ -126,8 +113,6 @@ where C::Target: chain::Filter,
/// pre-filter blocks or only fetch blocks matching a compact filter. Otherwise, clients may
/// always need to fetch full blocks absent another means for determining which blocks contain
/// transactions relevant to the watched channels.
///
/// [`chain::Filter`]: ../trait.Filter.html
pub fn new(chain_source: Option<C>, broadcaster: T, logger: L, feeest: F, persister: P) -> Self {
Self {
monitors: RwLock::new(HashMap::new()),
@@ -174,8 +159,6 @@ where C::Target: chain::Filter,
///
/// Note that we persist the given `ChannelMonitor` while holding the `ChainMonitor`
/// monitors lock.
///
/// [`chain::Filter`]: ../trait.Filter.html
fn watch_channel(&self, funding_outpoint: OutPoint, monitor: ChannelMonitor<ChannelSigner>) -> Result<(), ChannelMonitorUpdateErr> {
let mut monitors = self.monitors.write().unwrap();
let entry = match monitors.entry(funding_outpoint) {
20 changes: 2 additions & 18 deletions lightning/src/chain/channelmonitor.rs
@@ -19,8 +19,6 @@
//! ChannelMonitors should do so). Thus, if you're building rust-lightning into an HSM or other
//! security-domain-separated system design, you should consider having multiple paths for
//! ChannelMonitors to get out of the HSM and onto monitoring devices.
//!
//! [`chain::Watch`]: ../trait.Watch.html

use bitcoin::blockdata::block::{Block, BlockHeader};
use bitcoin::blockdata::transaction::{TxOut,Transaction};
@@ -75,8 +73,6 @@ pub struct ChannelMonitorUpdate {
/// The only instance where update_id values are not strictly increasing is the case where we
/// allow post-force-close updates with a special update ID of [`CLOSED_CHANNEL_UPDATE_ID`]. See
/// its docs for more details.
///
/// [`CLOSED_CHANNEL_UPDATE_ID`]: constant.CLOSED_CHANNEL_UPDATE_ID.html
pub update_id: u64,
}

@@ -193,8 +189,6 @@ pub enum MonitorEvent {
/// Simple structure sent back by `chain::Watch` when an HTLC from a forward channel is detected on
/// chain. Used to update the corresponding HTLC in the backward channel. Failing to pass the
/// preimage claim backward will lead to loss of funds.
///
/// [`chain::Watch`]: ../trait.Watch.html
#[derive(Clone, PartialEq)]
pub struct HTLCUpdate {
pub(crate) payment_hash: PaymentHash,
@@ -1187,8 +1181,6 @@ impl<Signer: Sign> ChannelMonitor<Signer> {

/// Get the list of HTLCs who's status has been updated on chain. This should be called by
/// ChannelManager via [`chain::Watch::release_pending_monitor_events`].
Contributor:

Are parentheses needed here or are they optional? I noticed that you add them elsewhere.

Collaborator (author):

I believe parentheses are only required when the link is ambiguous in some way. cargo doc fails with an ambiguity error in that case.
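To illustrate the ambiguity case mentioned in the reply above (hypothetical items, not part of this diff): a disambiguator such as parentheses or a `struct@` prefix is only needed when one name resolves to more than one item.

```rust
#![deny(broken_intra_doc_links)]

/// `Foo` below names both a type and a function, so a bare intra-doc link to
/// `Foo` would make `cargo doc` fail with an ambiguity error. [`Foo()`]
/// resolves to the function and [`struct@Foo`] to the type.
pub fn make_foo() -> Foo {
    Foo();
    Foo {}
}

/// A hypothetical empty struct sharing its name with the function below.
pub struct Foo {}

/// A hypothetical function sharing its name with the struct above.
#[allow(non_snake_case)]
pub fn Foo() {}
```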

///
/// [`chain::Watch::release_pending_monitor_events`]: ../trait.Watch.html#tymethod.release_pending_monitor_events
pub fn get_and_clear_pending_monitor_events(&self) -> Vec<MonitorEvent> {
self.inner.lock().unwrap().get_and_clear_pending_monitor_events()
}
@@ -2450,11 +2442,8 @@ pub trait Persist<ChannelSigner: Sign>: Send + Sync {
/// stored channel data). Note that you **must** persist every new monitor to
/// disk. See the `Persist` trait documentation for more details.
///
/// See [`ChannelMonitor::serialize_for_disk`] for writing out a `ChannelMonitor`,
/// See [`ChannelMonitor::write`] for writing out a `ChannelMonitor`,
/// and [`ChannelMonitorUpdateErr`] for requirements when returning errors.
///
/// [`ChannelMonitor::serialize_for_disk`]: struct.ChannelMonitor.html#method.serialize_for_disk
/// [`ChannelMonitorUpdateErr`]: enum.ChannelMonitorUpdateErr.html
fn persist_new_channel(&self, id: OutPoint, data: &ChannelMonitor<ChannelSigner>) -> Result<(), ChannelMonitorUpdateErr>;

/// Update one channel's data. The provided `ChannelMonitor` has already
@@ -2476,14 +2465,9 @@ pub trait Persist<ChannelSigner: Sign>: Send + Sync {
/// them in batches. The size of each monitor grows `O(number of state updates)`
/// whereas updates are small and `O(1)`.
///
/// See [`ChannelMonitor::serialize_for_disk`] for writing out a `ChannelMonitor`,
/// See [`ChannelMonitor::write`] for writing out a `ChannelMonitor`,
/// [`ChannelMonitorUpdate::write`] for writing out an update, and
/// [`ChannelMonitorUpdateErr`] for requirements when returning errors.
///
/// [`ChannelMonitor::update_monitor`]: struct.ChannelMonitor.html#impl-1
/// [`ChannelMonitor::serialize_for_disk`]: struct.ChannelMonitor.html#method.serialize_for_disk
/// [`ChannelMonitorUpdate::write`]: struct.ChannelMonitorUpdate.html#method.write
/// [`ChannelMonitorUpdateErr`]: enum.ChannelMonitorUpdateErr.html
fn update_persisted_channel(&self, id: OutPoint, update: &ChannelMonitorUpdate, data: &ChannelMonitor<ChannelSigner>) -> Result<(), ChannelMonitorUpdateErr>;
}

23 changes: 10 additions & 13 deletions lightning/src/chain/mod.rs
@@ -25,8 +25,6 @@ pub mod transaction;
pub mod keysinterface;

/// An error when accessing the chain via [`Access`].
///
/// [`Access`]: trait.Access.html
#[derive(Clone)]
pub enum AccessError {
/// The requested chain is unknown.
@@ -77,28 +75,28 @@ pub trait Listen {
/// funds in the channel. See [`ChannelMonitorUpdateErr`] for more details about how to handle
/// multiple instances.
///
/// [`ChannelMonitor`]: channelmonitor/struct.ChannelMonitor.html
/// [`ChannelMonitorUpdateErr`]: channelmonitor/enum.ChannelMonitorUpdateErr.html
/// [`PermanentFailure`]: channelmonitor/enum.ChannelMonitorUpdateErr.html#variant.PermanentFailure
/// [`ChannelMonitor`]: channelmonitor::ChannelMonitor
/// [`ChannelMonitorUpdateErr`]: channelmonitor::ChannelMonitorUpdateErr
/// [`PermanentFailure`]: channelmonitor::ChannelMonitorUpdateErr::PermanentFailure
pub trait Watch<ChannelSigner: Sign>: Send + Sync {
/// Watches a channel identified by `funding_txo` using `monitor`.
///
/// Implementations are responsible for watching the chain for the funding transaction along
/// with any spends of outputs returned by [`get_outputs_to_watch`]. In practice, this means
/// calling [`block_connected`] and [`block_disconnected`] on the monitor.
///
/// [`get_outputs_to_watch`]: channelmonitor/struct.ChannelMonitor.html#method.get_outputs_to_watch
/// [`block_connected`]: channelmonitor/struct.ChannelMonitor.html#method.block_connected
/// [`block_disconnected`]: channelmonitor/struct.ChannelMonitor.html#method.block_disconnected
/// [`get_outputs_to_watch`]: channelmonitor::ChannelMonitor::get_outputs_to_watch
/// [`block_connected`]: channelmonitor::ChannelMonitor::block_connected
/// [`block_disconnected`]: channelmonitor::ChannelMonitor::block_disconnected
fn watch_channel(&self, funding_txo: OutPoint, monitor: ChannelMonitor<ChannelSigner>) -> Result<(), ChannelMonitorUpdateErr>;

/// Updates a channel identified by `funding_txo` by applying `update` to its monitor.
///
/// Implementations must call [`update_monitor`] with the given update. See
/// [`ChannelMonitorUpdateErr`] for invariants around returning an error.
///
/// [`update_monitor`]: channelmonitor/struct.ChannelMonitor.html#method.update_monitor
/// [`ChannelMonitorUpdateErr`]: channelmonitor/enum.ChannelMonitorUpdateErr.html
/// [`update_monitor`]: channelmonitor::ChannelMonitor::update_monitor
/// [`ChannelMonitorUpdateErr`]: channelmonitor::ChannelMonitorUpdateErr
fn update_channel(&self, funding_txo: OutPoint, update: ChannelMonitorUpdate) -> Result<(), ChannelMonitorUpdateErr>;

/// Returns any monitor events since the last call. Subsequent calls must only return new
@@ -120,11 +118,10 @@ pub trait Watch<ChannelSigner: Sign>: Send + Sync {
///
/// Note that use as part of a [`Watch`] implementation involves reentrancy. Therefore, the `Filter`
/// should not block on I/O. Implementations should instead queue the newly monitored data to be
/// processed later. Then, in order to block until the data has been processed, any `Watch`
/// processed later. Then, in order to block until the data has been processed, any [`Watch`]
/// invocation that has called the `Filter` must return [`TemporaryFailure`].
///
/// [`Watch`]: trait.Watch.html
/// [`TemporaryFailure`]: channelmonitor/enum.ChannelMonitorUpdateErr.html#variant.TemporaryFailure
/// [`TemporaryFailure`]: channelmonitor::ChannelMonitorUpdateErr::TemporaryFailure
/// [BIP 157]: https://github.com/bitcoin/bips/blob/master/bip-0157.mediawiki
/// [BIP 158]: https://github.com/bitcoin/bips/blob/master/bip-0158.mediawiki
pub trait Filter: Send + Sync {