Commit b896104

Fix codespell
1 parent 98a7988 commit b896104

5 files changed, +7 -7 lines changed

DESIGN.md

Lines changed: 2 additions & 2 deletions
@@ -32,7 +32,7 @@ This is a perfectly valid approach when dealing exclusively with complete, large
 
 First of all, it is very inefficient if you deal with a large number of tiny blobs, like we frequently do when working with [iroh-docs] or [iroh-willow] documents. Just the file system metadata for a tiny file will vastly exceed the storage needed for the data itself.
 
-Also, now we are very much dependant on the quirks of whatever file system our target operating system has. Many older file systems like FAT32 or EXT2 are notoriously bad in handling directories with millions of files. And we can't just limit ourselves to e.g. linux servers with modern file systems, since we also want to support mobile platforms and windows PCs.
+Also, now we are very much dependent on the quirks of whatever file system our target operating system has. Many older file systems like FAT32 or EXT2 are notoriously bad in handling directories with millions of files. And we can't just limit ourselves to e.g. linux servers with modern file systems, since we also want to support mobile platforms and windows PCs.
 
 And last but not least, creating a the metadata for a file is very expensive compared to writing a few bytes. We would be limited to a pathetically low download speed when bulk downloading millions of blobs, like for example an iroh collection containing the linux source code. For very small files embedded databases are [frequently faster](https://www.sqlite.org/fasterthanfs.html) than the file system.
 
@@ -105,7 +105,7 @@ If we sync data from a remote node, we do know the hash but don't have the data.
 
 ### Blob deletion
 
-On creation, blobs are tagged with a temporary tag that prevents them from being deleted for as long as the process lives. They can then be tagged with a persisten tag that prevents them from being deleted even after a restart. And last but not least, large groups of blobs can be protected from deletion in bulk by putting a sequence of hashes into a blob and tagging that blob as a hash sequence.
+On creation, blobs are tagged with a temporary tag that prevents them from being deleted for as long as the process lives. They can then be tagged with a persistent tag that prevents them from being deleted even after a restart. And last but not least, large groups of blobs can be protected from deletion in bulk by putting a sequence of hashes into a blob and tagging that blob as a hash sequence.
 
 We also provide a way to explicitly delete blobs by hash, but that is meant to be used only in case of an emergency. You have some data that you want **gone** no matter how dire the consequences are.
src/api/blobs.rs

Lines changed: 1 addition & 1 deletion
@@ -696,7 +696,7 @@ impl ObserveProgress {
     }
 }
 
-/// A progess handle for an export operation.
+/// A progress handle for an export operation.
 ///
 /// Internally this is a stream of [`ExportProgress`] items. Working with this
 /// stream directly can be inconvenient, so this struct provides some convenience

src/api/remote.rs

Lines changed: 2 additions & 2 deletions
@@ -1021,8 +1021,8 @@ async fn write_push_request(
     Ok(request)
 }
 
-async fn write_observe_request(requst: ObserveRequest, stream: &mut SendStream) -> io::Result<()> {
-    let request = Request::Observe(requst);
+async fn write_observe_request(request: ObserveRequest, stream: &mut SendStream) -> io::Result<()> {
+    let request = Request::Observe(request);
     let request_bytes = postcard::to_allocvec(&request)
         .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))?;
     stream.write_all(&request_bytes).await?;

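The pattern in `write_observe_request` above is: wrap the request in the `Request` enum, encode it to bytes, then write those bytes to the stream. A std-only round-trip sketch of that shape follows; the real code uses postcard over a QUIC `SendStream`, while this sketch substitutes a hand-rolled one-byte-tag encoding and plain `Write`/`Read`, and the `0x03` tag value is a made-up placeholder:

```rust
use std::io::{self, Cursor, Read, Write};

/// Toy stand-in for the real request type; holds just a blob hash.
#[derive(Debug, PartialEq)]
struct ObserveRequest {
    hash: [u8; 32],
}

/// Mirrors the shape of write_observe_request: encode, then write the
/// bytes. `W` stands in for the real SendStream; the encoding here is a
/// hand-rolled placeholder, not postcard.
fn write_observe_request<W: Write>(request: ObserveRequest, stream: &mut W) -> io::Result<()> {
    let mut bytes = Vec::with_capacity(33);
    bytes.push(0x03); // hypothetical tag for the Observe variant
    bytes.extend_from_slice(&request.hash);
    stream.write_all(&bytes)
}

/// The matching decode step a provider would run on the receiving side.
fn read_observe_request<R: Read>(stream: &mut R) -> io::Result<ObserveRequest> {
    let mut tag = [0u8; 1];
    stream.read_exact(&mut tag)?;
    if tag[0] != 0x03 {
        return Err(io::Error::new(io::ErrorKind::InvalidData, "not an Observe request"));
    }
    let mut hash = [0u8; 32];
    stream.read_exact(&mut hash)?;
    Ok(ObserveRequest { hash })
}

fn main() -> io::Result<()> {
    let mut wire = Vec::new();
    write_observe_request(ObserveRequest { hash: [7; 32] }, &mut wire)?;
    let decoded = read_observe_request(&mut Cursor::new(wire))?;
    assert_eq!(decoded, ObserveRequest { hash: [7; 32] });
    Ok(())
}
```

Decode errors map to `io::ErrorKind::InvalidData`, matching how the real code maps postcard serialization failures.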
src/protocol.rs

Lines changed: 1 addition & 1 deletion
@@ -412,7 +412,7 @@ pub enum Request {
     ///
     /// Note that providers will in many cases reject this request, e.g. if
     /// they don't have write access to the store or don't want to ingest
-    /// unknonwn data.
+    /// unknown data.
     Push(PushRequest),
     /// Get multiple blobs in a single request, from a single provider
     ///

src/store/fs.rs

Lines changed: 1 addition & 1 deletion
@@ -43,7 +43,7 @@
 //!
 //! For tasks that are specific to a hash, a HashContext combines the task
 //! context with a slot from the table of the main actor that can be used
-//! to obtain an unqiue handle for the hash.
+//! to obtain an unique handle for the hash.
 //!
 //! # Runtime
 //!

0 commit comments