diff --git a/docs/guides/run-node/run-an-intuition-node.md b/docs/guides/run-node/run-an-intuition-node.md
index 71923d3..0a2b221 100644
--- a/docs/guides/run-node/run-an-intuition-node.md
+++ b/docs/guides/run-node/run-an-intuition-node.md
@@ -8,7 +8,7 @@ Learn how to set up and run your own Intuition node to participate in the networ
 
 ## Overview
 
-The `intuition-rs` workspace contains the Rust implementation of Intuition's off-chain backend systems. This implementation provides high performance, memory safety, and reliability for running Intuition nodes and backend services.
+The `intuition-rs` workspace is the Rust implementation of Intuition's blockchain data indexing and processing backend, organized as a modular set of specialized services. This implementation provides high performance, memory safety, and reliability for running Intuition nodes and backend services.
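+
+To get oriented, you can clone the repository and look at the workspace layout. A minimal sketch — the `apps/` and `infrastructure/` directory names below come from the project structure documented later in this guide:
+
+```bash
+# Clone the workspace and inspect its layout
+git clone https://github.com/0xIntuition/intuition-rs.git
+cd intuition-rs
+
+# Each subdirectory under apps/ is one of the specialized services
+ls apps/ infrastructure/
+```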

Node Requirements

@@ -17,59 +17,49 @@ Running an Intuition node requires Docker, Rust toolchain, and proper environmen

-## Workspace Components
+## Architecture
 
-The intuition-rs workspace contains the following crates:
+This workspace contains the following core services:

Core Services

-CLI: TUI client for node interaction
-Consumer: RAW, DECODED and RESOLVER consumers
-Consumer API: API to re-fetch Atoms
+CLI: Terminal UI client for interacting with the Intuition system
+Consumer: Event processing pipeline using Redis Streams (RAW, DECODED, and RESOLVER consumers)
+Models: Domain models and data structures for the Intuition system
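+
+Because the consumers communicate over Redis Streams, you can peek at the pipeline with `redis-cli` once the stack is up. A minimal sketch — the stream key `raw_events` and the default Redis port are assumptions here, not documented names, so list the actual keys first:
+
+```bash
+# Discover which stream keys exist (names vary by deployment)
+redis-cli --scan --type stream
+
+# Count and inspect pending entries on a stream (hypothetical key)
+redis-cli XLEN raw_events
+redis-cli XRANGE raw_events - + COUNT 5
+```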


Indexing & Data


-Envio Indexer: Index base-sepolia contract events
-Hasura: Migrations and configuration
-Models: Domain models for intuition data


Infrastructure


Infrastructure Services

-Histoflux: Stream events to SQS queue
-Image Guard: Image validation service
-RPC Proxy: Cache RPC calls and responses
+Hasura: GraphQL API with database migrations and configuration
+Image Guard: Image processing and validation service
+RPC Proxy: RPC call proxy with caching for eth_call methods
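+
+You can observe the caching behavior by sending the same `eth_call` through the proxy twice; the second response should be served from cache. A sketch only — the proxy's listen address and port are hypothetical, and the `to`/`data` fields are placeholders:
+
+```bash
+# Standard JSON-RPC eth_call payload, issued against the proxy (port assumed)
+curl -s -X POST http://localhost:3008 \
+  -H 'Content-Type: application/json' \
+  -d '{"jsonrpc":"2.0","id":1,"method":"eth_call","params":[{"to":"0xB92EA1B47E4ABD0a520E9138BB59dBd1bC6C475B","data":"0x"},"latest"]}'
+```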


Event Processing


Supporting Services

-Substreams Sink: Consume Substreams events for real-time processing and indexing across multiple blockchain networks.
+Histocrawler: Historical data crawler
+Shared Utils: Common utilities and shared code
+Migration Scripts: Database migration utilities

@@ -84,25 +74,25 @@ The intuition-rs workspace contains the following crates:
-cargo make: Install with cargo install --force cargo-make
+Rust toolchain: Install with curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
-hasura-cli: Required for Hasura operations
+cargo-make: Install with cargo install --force cargo-make
-Docker: For containerized deployment
+Hasura CLI: Install with curl -L https://github.com/hasura/graphql-engine/raw/stable/cli/get.sh | bash
-Rust toolchain: For building from source
+Node.js: For integration tests (dependencies are installed with pnpm)
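+
+Before going further, it is worth confirming that each tool is actually on your `PATH`. A quick sanity check — the commands below are just the tools' own version flags, nothing project-specific:
+
+```bash
+rustc --version        # Rust toolchain
+cargo make --version   # cargo-make
+hasura version         # Hasura CLI
+node --version         # Node.js
+docker --version       # Docker, needed for the containerized services
+```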
@@ -110,124 +100,112 @@ The intuition-rs workspace contains the following crates:
 
 You'll need to set up environment variables for various services. Create a `.env` file based on the `.env.sample` template with the following required variables:
 
-- `PINATA_GATEWAY_TOKEN`: Get from Pinata
-- `PINATA_API_JWT`: Get from Pinata
-- `RPC_URL_MAINNET`: Alchemy mainnet endpoint
-- `RPC_URL_BASE`: Alchemy Base endpoint
-- `AWS_ACCESS_KEY_ID`: Your AWS credentials
-- `AWS_SECRET_ACCESS_KEY`: Your AWS credentials
-- `HF_TOKEN`: Hugging Face token
-- `SUBSTREAMS_API_TOKEN`: Substreams API token
-- `HYPERSYNC_TOKEN`: Envio token
-
-## Installation Options
+| Variable | Description | Source |
+|----------|-------------|--------|
+| `OPENAI_API_KEY` | OpenAI API key for AI features | [OpenAI Platform](https://platform.openai.com/api-keys) |
+| `PINATA_GATEWAY_TOKEN` | Pinata gateway token for IPFS | [Pinata Dashboard](https://app.pinata.cloud/developers/gateway-settings) |
+| `PINATA_API_JWT` | Pinata API JWT for IPFS uploads | [Pinata Dashboard](https://app.pinata.cloud/developers/api-keys) |
+| `BASE_MAINNET_RPC_URL` | Base mainnet RPC endpoint | [Alchemy Dashboard](https://dashboard.alchemy.com/apps) |
+| `BASE_SEPOLIA_RPC_URL` | Base Sepolia testnet RPC endpoint | [Alchemy Dashboard](https://dashboard.alchemy.com/apps) |
+| `ETHEREUM_MAINNET_RPC_URL` | Ethereum mainnet RPC endpoint | [Alchemy Dashboard](https://dashboard.alchemy.com/apps) |
+| `LINEA_MAINNET_RPC_URL` | Linea mainnet RPC endpoint | [Alchemy Dashboard](https://dashboard.alchemy.com/apps) |
+| `LINEA_SEPOLIA_RPC_URL` | Linea Sepolia testnet RPC endpoint | [Alchemy Dashboard](https://dashboard.alchemy.com/apps) |
+| `TRUST_TESTNET_RPC_URL` | Trust testnet RPC endpoint (local geth) | Local development |
+| `TRUST_MAINNET_RPC_URL` | Trust mainnet RPC endpoint (local geth) | Local development |
+| `INDEXER_SCHEMA` | Database schema for the indexer (set to `local`) | Local development |
+| `INTUITION_CONTRACT_ADDRESS` | Intuition contract address | Contract deployment |
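+
+Put together, a minimal `.env` for local development might look like the excerpt below. This is a sketch assuming the `local` schema and the locally running geth described above; the variable names come from the table, the placeholder values do not:
+
+```bash
+# .env (excerpt) -- fill in real keys from the sources listed above
+OPENAI_API_KEY=sk-...
+PINATA_GATEWAY_TOKEN=your-gateway-token
+PINATA_API_JWT=your-jwt
+BASE_MAINNET_RPC_URL=https://base-mainnet.g.alchemy.com/v2/your-key
+BASE_SEPOLIA_RPC_URL=https://base-sepolia.g.alchemy.com/v2/your-key
+TRUST_TESTNET_RPC_URL=http://geth:8545
+INDEXER_SCHEMA=local
+INTUITION_CONTRACT_ADDRESS=0xB92EA1B47E4ABD0a520E9138BB59dBd1bC6C475B
+```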
+
+## Running the System
+
+**Note**: All scripts are located in the `scripts/` directory and should be run from the project root.
 
 ### Option 1: Using Published Docker Images (Recommended)
 
-The fastest way to get started is using the published Docker images:
-
-```bash
-# Clone the repository
-git clone https://github.com/0xIntuition/intuition-rs.git
-cd intuition-rs
-
-# Start all services
-./start.sh
-
-# Verify the setup with CLI tool
-./cli.sh
-```
-
-To stop all services:
 ```bash
-./stop.sh
-```
-
-To restart and clear volumes:
-```bash
-./restart.sh
+# Start with a local Ethereum node
+cargo make start-local
 ```
 
 ### Option 2: Building from Source
 
-For development or custom builds:
-
 ```bash
-# Clone and setup
-git clone https://github.com/0xIntuition/intuition-rs.git
-cd intuition-rs
-cp .env.sample .env
-# Edit .env with your configuration
+# Build all Docker images from source
+cargo make build-docker-images
 
-# Start with source build
-source .env
-cargo make start-docker-and-migrate
+# Start the system
+cargo make start-local
 ```
 
-### Option 3: Local Development Mode
-
-For development and testing:
+### Option 3: Running with Integration Tests
 
 ```bash
-# Setup environment
-cp .env.sample .env
-source .env
-
-# Run raw consumer (local SQS)
-RUST_LOG=info cargo run --bin consumer --features local --mode raw --local
-
-# Run decoded consumer (local SQS)
-RUST_LOG=info cargo run --bin consumer --features local --mode decoded --local
+# Start with tests enabled
+cargo make start-local test
+```
 
-# Run raw consumer (remote SQS)
-RUST_LOG=info cargo run --bin consumer --mode raw
+## Testing
 
-# Run decoded consumer (remote SQS)
-RUST_LOG=info cargo run --bin consumer --mode decoded
+### Run All Tests
+```bash
+cargo nextest run
+```
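+
+`cargo-nextest` also accepts filters, which is handy while iterating on a single crate. A sketch — the package name `consumer` is assumed to match the consumer crate's Cargo package name:
+
+```bash
+# Run only the consumer crate's tests
+cargo nextest run -p consumer
+
+# Run only tests whose names match a substring
+cargo nextest run decode
+```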
 
-## Kubernetes Deployment (macOS)
-
-For production deployments on macOS:
+### Run Integration Tests
+```bash
+cd integration-tests
+export VITE_INTUITION_CONTRACT_ADDRESS=0x....
+pnpm test src/follow.test.ts
+```
 
+### Run Specific Test Suites
 ```bash
-# Install required tools
-brew install minikube
-brew install k9s
+# Test account operations
+pnpm test src/create-person.test.ts
 
-# Create secrets from .env file
-kubectl create secret generic secrets --from-env-file=.env
+# Test vault operations
+pnpm test src/vaults.test.ts
 
-# Start cluster
-minikube start
+# Test AI agents
+pnpm test src/ai-agents.test.ts
+```
 
-# Deploy services
-kubectl apply -k kube_files/
+## Development
 
-# Restart services if needed
-kubectl rollout restart deployment
+### CLI Tool
+```bash
+# Run the CLI to verify latest data
+./scripts/cli.sh
 ```
 
-## Database Management
+### Code Quality
+```bash
+# Format code
+cargo make fmt
 
-### Running Migrations
+# Run linter
+cargo make clippy
 
-If you need to re-run database migrations:
+# Run all checks
+cargo make check
+```
 
+### Database Operations
 ```bash
-docker compose down -v
-docker compose up -d --force-recreate
-cargo make migrate-database
+# Start services and run migrations
+cargo make start-docker-and-migrate
+
+# Manual migration (if needed)
+cp .env.sample .env
+source .env
 ```
 
-### Using Local Ethereum Node
+## Local Development Setup
 
-Add these to your `.env` file for local development:
+### Using Local Ethereum Node
+
+Add to your `.env` file:
 
 ```bash
-BASE_MAINNET_RPC_URL=http://geth:8545
-BASE_SEPOLIA_RPC_URL=http://geth:8545
-INTUITION_CONTRACT_ADDRESS=0x04056c43d0498b22f7a0c60d4c3584fb5fa881cc
+INTUITION_CONTRACT_ADDRESS=0xB92EA1B47E4ABD0a520E9138BB59dBd1bC6C475B
 START_BLOCK=0
 ```
 
@@ -238,23 +216,102 @@
 npm install
 npm run create-predicates
 ```
 
-## Testing
-
-Run the test suite:
+### Manual Service Management
 
 ```bash
-cargo nextest run
+# Start all services
+docker-compose -f docker/docker-compose-apps.yml up -d
+
+# Stop all services
+./scripts/stop.sh
+
+# View logs
+docker-compose -f docker/docker-compose-apps.yml logs -f
 ```
 
-## Development Commands
+## Project Structure
+
+```
+intuition-rs/
+├── apps/                 # Custom Rust applications
+│   ├── cli/              # Terminal UI client
+│   ├── consumer/         # Event processing pipeline (Redis Streams)
+│   ├── histocrawler/     # Historical data crawler
+│   ├── image-guard/      # Image processing service
+│   ├── models/           # Domain models & data structures
+│   ├── rpc-proxy/        # RPC proxy with caching
+│   └── shared-utils/     # Common utilities
+├── infrastructure/       # Infrastructure components
+│   ├── hasura/           # GraphQL API & migrations
+│   ├── blockscout/       # Blockchain explorer
+│   ├── drizzle/          # Database schema management
+│   ├── geth/             # Local Ethereum node config
+│   ├── indexer-and-cache-migrations/ # Database migrations
+│   ├── migration-scripts/ # Migration utilities
+│   └── prometheus/       # Monitoring configuration
+├── docker/               # Docker configuration
+│   ├── docker-compose-apps.yml   # Application services
+│   ├── docker-compose-shared.yml # Shared infrastructure
+│   └── Dockerfile        # Multi-stage build
+├── scripts/              # Shell scripts
+│   ├── start.sh          # System startup
+│   ├── stop.sh           # System shutdown
+│   ├── cli.sh            # CLI runner
+│   └── init-dbs.sh       # Database initialization
+├── integration-tests/    # End-to-end tests
+└── README.md             # Project README
+```
+
+## Event Processing Pipeline
+
+The system processes blockchain events through multiple stages:
+
+1. **RAW** - Raw event ingestion from the blockchain
+2. **DECODED** - Event decoding and parsing
+3. **RESOLVER** - Data resolution and enrichment
+4. **IPFS-UPLOAD** - Upload images to IPFS and track them in the local DB
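+
+One way to watch an event traverse these stages is to follow the consumers' logs side by side. A sketch — the container names match those in the Viewing Logs section below, and the grep pattern is whatever identifier (e.g. a transaction hash) you want to trace:
+
+```bash
+TX=0x...   # identifier to trace (placeholder)
+docker logs decoded_consumer     2>&1 | grep "$TX"
+docker logs resolver_consumer    2>&1 | grep "$TX"
+docker logs ipfs_upload_consumer 2>&1 | grep "$TX"
+```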
+
+### Supported Contract Versions
+- Multivault v2.0
 
-Useful development commands:
+## Monitoring and Observability
 
-- `cargo make start-docker-and-migrate`: Start docker compose and run migrations
-- `cargo make clippy`: Run clippy for code quality
-- `cargo make fmt`: Run rustfmt for code formatting
+### Logging
 
-Check all available commands in `.cargo/makefiles`.
+The system includes comprehensive logging capabilities:
+
+**Features:**
+- **Structured JSON Logging**: All services output machine-readable logs
+- **Container Logs**: Direct access to service logs via Docker
+- **Log Filtering**: Easy filtering by log level and service
+
+**Benefits:**
+- **Debugging**: Quickly find and analyze issues across services
+- **Performance Monitoring**: Track service performance and bottlenecks
+- **Audit Trail**: Complete visibility into system operations
+
+**Getting Started:**
+1. Start the system: `cargo make start-local`
+2. View logs: `docker logs <container-name>`
+3. Filter logs: `docker logs <container-name> | grep '"level":"INFO"'`
+
+**JSON Logging:**
+All consumer services output structured JSON logs with the following fields:
+- `timestamp`: ISO 8601 timestamp
+- `level`: Log level (INFO, WARN, ERROR, DEBUG)
+- `fields.message`: Log message content
+- `target`: Module path
+- `filename`: Source file name
+- `line_number`: Line number in source file
+- `threadId`: Thread identifier
+
+**Viewing Logs:**
+```bash
+# View container logs directly
+docker logs decoded_consumer | grep '"level":"INFO"'
+docker logs resolver_consumer | grep '"level":"ERROR"'
+docker logs ipfs_upload_consumer | grep '"level":"WARN"'
+```
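+
+If you have `jq` installed, the structured fields above allow richer filtering than plain `grep`. A sketch, assuming one JSON object per log line:
+
+```bash
+# Show only error messages with their source location
+docker logs decoded_consumer 2>&1 \
+  | jq -r 'select(.level == "ERROR") | "\(.filename):\(.line_number) \(.fields.message)"'
+```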
 
 ## Troubleshooting
 
@@ -262,8 +319,7 @@ Check all available commands in `.cargo/makefiles`.
 1. **Database connection errors**: Ensure PostgreSQL is running and credentials are correct
 2. **RPC endpoint issues**: Verify your Alchemy endpoints are valid and have sufficient quota
-3. **AWS credential errors**: Check your AWS configuration in `~/.aws/config`
-4. **Docker resource limits**: Ensure Docker has sufficient memory and CPU allocation
+3. **Docker resource limits**: Ensure Docker has sufficient memory and CPU allocation
 
 ### Getting Help
 
@@ -271,7 +327,242 @@ Check all available commands in `.cargo/makefiles`.
 - Review the [DeepWiki documentation](https://deepwiki.com/0xIntuition/intuition-rs) for detailed technical information
 - Join the Intuition community for support
 
-## Next Steps
+## How to Run Intuition in a Kubernetes Cluster
+
+A comprehensive Kubernetes-based deployment infrastructure for blockchain indexing and data services, managed with ArgoCD and Terraform.
+
+### Architecture Overview
+
+This project deploys a complete blockchain indexing platform on Google Cloud Platform (GCP) using:
+
+- **GKE Cluster**: Multi-node pool Kubernetes cluster
+- **ArgoCD**: GitOps-based continuous deployment
+- **Terraform**: Infrastructure as Code for GCP resources
+- **Kustomize**: Kubernetes manifest management
+
+### Core Services
+
+#### Data Layer
+- **TimescaleDB**: Time-series database with PostgreSQL extensions and AI capabilities
+- **Indexer Database**: Dedicated database for blockchain indexing operations
+
+#### Application Services
+- **GraphQL Engine**: Hasura GraphQL API for data access
+- **IPFS Node**: InterPlanetary File System for decentralized storage
+- **Safe Content Service**: Content validation and processing
+- **Timescale Vectorizer Worker**: Vector processing for AI/ML workloads
+- **Histocrawler**: Historical data crawling and indexing service
+- **Image Guard**: Image validation and security service
+- **RPC Proxy**: Blockchain RPC request routing and caching
+
+#### Consumer Services
+- **Decoded Consumer**: Blockchain event decoding and processing
+- **IPFS Upload Consumer**: IPFS content upload and management
+- **Resolver Consumer**: Data resolution and lookup services
+
+#### Management Tools
+- **pgAdmin**: PostgreSQL administration interface
+- **Ingress Controller**: Traffic routing and load balancing
+
+### Infrastructure Components
+
+#### Suggested GKE Cluster Configuration
+- **Region**: `us-west2`
+- **Project**: `be-cluster`
+- **Network**: Custom VPC with private/public subnets
+- **Node Pools**:
+  - `db-pool`: n2-standard-16 (dedicated for databases)
+  - `app-pool`: e2-standard-2 (application services)
+  - `consumer-pool`: custom-4-8192 (data processing)
+
+#### Storage
+- **Persistent Volumes**: GCP Persistent Disk with resizable storage class
+- **IPFS Storage**: 50Gi persistent volume for IPFS data
+- **Database Storage**: 50Gi for TimescaleDB
+
+### Project Structure
+
+```
+gcp-deployment/
+├── apps/                 # Kubernetes applications
+│   ├── consumers/        # Data processing consumers
+│   │   ├── decoded/      # Blockchain event decoder
+│   │   ├── ipfs-upload/  # IPFS upload processor
+│   │   └── resolver/     # Data resolver service
+│   ├── graphql/          # Hasura GraphQL engine
+│   ├── histocrawler/     # Historical data crawler
+│   ├── image-guard/      # Image validation service
+│   ├── indexer-db/       # Indexer database
+│   ├── ipfs/             # IPFS node
+│   ├── pgadmin/          # PostgreSQL admin
+│   ├── rpc-proxy/        # RPC request proxy
+│   ├── safe-content/     # Content validation service
+│   ├── timescale_db/     # TimescaleDB instance
+│   ├── timescale_db_vectorizer/ # Vector processing
+│   └── ingress/          # Ingress configuration
+├── argocd/               # ArgoCD configuration
+│   ├── coreapps/         # Core application definitions
+│   ├── namespacedapps/   # Namespace-specific apps
+│   ├── projects/         # ArgoCD project definitions
+│   └── repos/            # Repository secrets
+├── terraform/            # Infrastructure as Code
+│   └── debug-gke/        # GKE cluster provisioning
+└── test-kustomize/       # Kustomize testing
+```
+
+### Quick Start
+
+#### Prerequisites
+- Google Cloud SDK
+- Terraform >= 1.0
+- kubectl
+- ArgoCD CLI
+
+#### 1. Deploy Infrastructure
+```bash
+cd terraform/debug-gke
+terraform init
+terraform plan
+terraform apply
+```
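+
+Before moving on, it is worth verifying that the cluster actually came up. A sketch — `debug-cluster` and `us-west2` are the names used elsewhere in this guide, and the available Terraform outputs depend on how the module is written:
+
+```bash
+# Confirm the GKE cluster exists and is RUNNING
+gcloud container clusters list --region us-west2
+
+# Inspect whatever outputs the Terraform module exposes
+terraform output
+```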
+
+#### 2. Configure ArgoCD
+```bash
+# Get GKE credentials
+gcloud container clusters get-credentials debug-cluster --region us-west2
+
+# Install ArgoCD
+kubectl create namespace argocd
+kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
+
+# Apply ArgoCD configuration
+kubectl apply -f argocd/
+```
+
+#### 3. Deploy Applications
+Applications are automatically deployed via ArgoCD GitOps. The system monitors the Git repository and applies changes automatically.
+
+### Configuration
+
+#### Environment Variables
+Key services require environment-specific configuration:
+
+- **GraphQL Engine**: Database connection, CORS settings
+- **TimescaleDB**: PostgreSQL credentials, AI extensions
+- **IPFS**: Storage paths, network configuration
+- **Safe Content**: Content validation rules
+- **Histocrawler**: Blockchain endpoints, indexing parameters
+- **Image Guard**: Image scanning policies, security rules
+- **RPC Proxy**: Upstream RPC endpoints, caching configuration
+- **Consumers**: Event processing queues, database connections
+
+#### Secrets Management
+Secrets are managed through Kubernetes secrets and external secret providers:
+- Database credentials
+- API keys
+- Service account tokens
+
+### Monitoring & Observability
+
+#### Health Checks
+- Liveness probes configured for all services
+- Readiness probes for database services
+- Custom health endpoints for GraphQL and IPFS
+
+#### Logging
+- Structured logging enabled for GraphQL engine
+- Query logging for debugging
+- WebSocket and HTTP request logging
+
+### Security
+
+#### Network Security
+- Private GKE cluster with private nodes
+- VPC-native networking
+- NAT gateway for outbound internet access
+- Ingress controller for external access
+
+#### Access Control
+- Workload Identity for GCP service accounts
+- Kubernetes RBAC
+- ArgoCD project-based access control
+
+### Development
+
+#### Local Development
+```bash
+# Test Kustomize configurations
+cd test-kustomize
+kubectl kustomize . | kubectl apply --dry-run=client -f -
+
+# Validate manifests
+kubectl kustomize apps/graphql/ | kubectl apply --dry-run=client -f -
+```
+
+#### Adding New Services
+1. Create service directory in `apps/`
+2. Add Kubernetes manifests (deployment, service, etc.)
+3. Create ArgoCD application definition
+4. Update project permissions if needed
+
+### CI/CD Pipeline
+
+The deployment follows GitOps principles:
+1. Code changes pushed to Git repository
+2. ArgoCD detects changes automatically
+3. Applications updated in Kubernetes cluster
+4. Health checks validate deployment
+
+### Scaling
+
+#### Horizontal Scaling
+- Application services can scale horizontally via HPA
+- Database services use StatefulSets for data persistence
+- IPFS and GraphQL support multiple replicas
+
+#### Vertical Scaling
+- Node pools can be resized via Terraform
+- Storage volumes support online resizing
+- Resource limits configured per service
+
+### Troubleshooting
+
+#### Common Issues
+1. **Database Connection**: Check TimescaleDB service and secrets
+2. **IPFS Storage**: Verify PVC and storage class
+3. **GraphQL Health**: Check liveness probe and database connectivity
+4. **ArgoCD Sync**: Verify repository access and permissions
+5. **Consumer Processing**: Check event queue connectivity and processing status
+6. **Histocrawler**: Verify blockchain endpoint accessibility
+7. **Image Guard**: Check image scanning service health
+8. **RPC Proxy**: Validate upstream RPC endpoint connectivity
+
+#### Debug Commands
+```bash
+# Check pod status
+kubectl get pods -A
+
+# View logs
+kubectl logs -f deployment/graphql-engine
+
+# Check ArgoCD applications
+argocd app list
+
+# Validate Terraform state
+terraform plan
+```
+
+### Additional Resources
+
+- [GKE Documentation](https://cloud.google.com/kubernetes-engine/docs)
+- [ArgoCD User Guide](https://argo-cd.readthedocs.io/)
+- [TimescaleDB Documentation](https://docs.timescale.com/)
+- [Hasura GraphQL Engine](https://hasura.io/docs/)
+- [Alchemy Dashboard](https://dashboard.alchemy.com/)
+- [Pinata Documentation](https://docs.pinata.cloud/)
+
+### Next Steps
 
 Once your node is running successfully:
 
diff --git a/docs/guides/rust-subnet/_deep-dive.md.backup b/docs/guides/rust-subnet/_deep-dive.md.backup
deleted file mode 100644
index df32bac..0000000
--- a/docs/guides/rust-subnet/_deep-dive.md.backup
+++ /dev/null
@@ -1,213 +0,0 @@
----
-title: Deep Dive
-sidebar_position: 2
----
-
-# Rust Subnet Deep Dive
-
-This document provides a comprehensive technical exploration of the Intuition Rust Subnet architecture, implementation details, and design decisions.
-
-## Technical Architecture
-
-### Core Components
-
-#### 1. Node Runtime
-The node runtime is built using Rust and provides:
-- **Memory Safety**: Zero-cost abstractions with compile-time memory management
-- **Concurrency**: Multi-threaded execution with safe parallelism
-- **Performance**: Native compilation for optimal execution speed
-
-```rust
-// Example node initialization
-pub struct SubnetNode {
-    config: NodeConfig,
-    state: NetworkState,
-    consensus: ConsensusEngine,
-    network: P2PNetwork,
-}
-```
-
-#### 2. Consensus Mechanism
-The subnet implements a hybrid consensus model:
-- **Block Production**: Proof-of-Stake based selection
-- **Finality**: Byzantine Fault Tolerant consensus
-- **Fork Choice**: Longest chain with finality checkpoints
-
-#### 3. State Management
-- **Merkle Patricia Trie**: For efficient state verification
-- **State Pruning**: Automatic cleanup of historical states
-- **Snapshot Sync**: Fast synchronization for new nodes
-
-### Network Protocol
-
-#### Message Types
-1. **Block Propagation**: New block announcements
-2. **Transaction Gossip**: Transaction pool synchronization
-3. **State Sync**: State snapshot requests/responses
-4. **Consensus Messages**: Voting and finalization
-
-#### Peer Discovery
-- **DHT-based Discovery**: Distributed hash table for peer finding
-- **Bootstrap Nodes**: Initial connection points
-- **Peer Scoring**: Reputation-based connection management
-
-### Data Structures
-
-#### Block Structure
-```rust
-pub struct Block {
-    header: BlockHeader,
-    transactions: Vec<Transaction>,
-    consensus_data: ConsensusData,
-}
-
-pub struct BlockHeader {
-    parent_hash: Hash,
-    state_root: Hash,
-    transaction_root: Hash,
-    timestamp: u64,
-    block_number: u64,
-}
-```
-
-#### Transaction Types
-- **Atom Creation**: Register new atomic identities
-- **Triple Formation**: Create semantic relationships
-- **Signal Attestation**: Add weight to claims
-- **State Transition**: Update network parameters
-
-### Performance Optimizations
-
-#### Parallel Processing
-- **Transaction Validation**: Concurrent verification
-- **State Execution**: Parallel state transitions
-- **Network I/O**: Async message handling
-
-#### Caching Strategies
-- **Block Cache**: Recent blocks in memory
-- **State Cache**: Hot state data
-- **Transaction Pool**: Pending transaction management
-
-### Security Model
-
-#### Cryptographic Primitives
-- **Signature Scheme**: Ed25519 for transaction signing
-- **Hash Function**: Blake3 for merkle trees
-- **Encryption**: ChaCha20-Poly1305 for network messages
-
-#### Attack Mitigation
-- **Sybil Resistance**: Proof-of-Stake economics
-- **DoS Protection**: Rate limiting and peer scoring
-- **Fork Prevention**: Finality mechanisms
-
-## Implementation Details
-
-### Dependencies
-Key Rust crates used in the implementation:
-- `tokio`: Async runtime
-- `libp2p`: P2P networking
-- `rocksdb`: State storage
-- `serde`: Serialization
-- `parity-scale-codec`: Efficient encoding
-
-### Database Schema
-```sql
--- Blocks table
-CREATE TABLE blocks (
-    number BIGINT PRIMARY KEY,
-    hash BYTEA UNIQUE NOT NULL,
-    parent_hash BYTEA NOT NULL,
-    state_root BYTEA NOT NULL,
-    timestamp BIGINT NOT NULL
-);
-
--- Transactions table
-CREATE TABLE transactions (
-    hash BYTEA PRIMARY KEY,
-    block_number BIGINT REFERENCES blocks(number),
-    from_address BYTEA NOT NULL,
-    data BYTEA NOT NULL,
-    status INTEGER NOT NULL
-);
-```
-
-### RPC Interface
-Available JSON-RPC methods:
-- `subnet_getBlock`: Retrieve block by hash/number
-- `subnet_sendTransaction`: Submit transaction
-- `subnet_getState`: Query state data
-- `subnet_subscribe`: Event subscriptions
-
-## Benchmarks
-
-### Performance Metrics
-- **Transaction Throughput**: 10,000+ TPS
-- **Block Time**: 2 seconds average
-- **Finality Time**: 6 seconds
-- **State Sync**: < 1 hour for full sync
-
-### Resource Requirements
-- **CPU**: 4+ cores recommended
-- **RAM**: 8GB minimum, 16GB recommended
-- **Storage**: 500GB SSD
-- **Network**: 100 Mbps stable connection
-
-## Development Workflow
-
-### Building from Source
-```bash
-# Clone repository
-git clone https://github.com/0xintuition/rust-subnet
-
-# Build release binary
-cargo build --release
-
-# Run tests
-cargo test --all
-
-# Run benchmarks
-cargo bench
-```
-
-### Configuration
-```toml
-# config.toml
-[node]
-name = "my-subnet-node"
-data_dir = "/var/lib/subnet"
-
-[network]
-listen_addr = "/ip4/0.0.0.0/tcp/30333"
-boot_nodes = [
-    "/dns/boot1.subnet.intuition.systems/tcp/30333/p2p/...",
-    "/dns/boot2.subnet.intuition.systems/tcp/30333/p2p/..."
-]
-
-[consensus]
-validator_key = "path/to/key.json"
-```
-
-## Future Enhancements
-
-### Planned Features
-- **Cross-subnet Communication**: Interoperability protocols
-- **Zero-Knowledge Proofs**: Privacy-preserving transactions
-- **Light Client Support**: Mobile and browser nodes
-- **Sharding**: Horizontal scaling
-
-### Research Areas
-- **Consensus Optimization**: Improved finality times
-- **State Channels**: Off-chain scaling
-- **Formal Verification**: Mathematical correctness proofs
-
-## Resources
-
-### Documentation
-- [Rust Subnet Specification](https://github.com/0xintuition/specs)
-- [API Documentation](https://docs.rs/intuition-subnet)
-- [Protocol Whitepaper](https://intuition.systems/whitepaper)
-
-### Community
-- [Developer Discord](https://discord.gg/0xintuition)
-- [GitHub Discussions](https://github.com/0xintuition/rust-subnet/discussions)
-- [Forum](https://forum.intuition.systems)
\ No newline at end of file
diff --git a/docs/guides/rust-subnet/overview.md b/docs/guides/rust-subnet/overview.md
deleted file mode 100644
index f63201f..0000000
--- a/docs/guides/rust-subnet/overview.md
+++ /dev/null
@@ -1,88 +0,0 @@
----
-title: Overview
-sidebar_position: 1
----
-
-# Rust Subnet Overview
-
-The Intuition Rust Subnet is a high-performance, decentralized network layer built using Rust that enables efficient data processing and validation across the Intuition ecosystem.
-
-## What is the Rust Subnet?
-
-The Rust Subnet is a specialized network component that:
-- Processes and validates transactions with high throughput
-- Provides a secure execution environment for data operations
-- Enables distributed consensus across network participants
-- Optimizes for low-latency data availability
-
-## Key Features
-
-### High Performance
-- Built with Rust for memory safety and performance
-- Optimized for parallel processing
-- Minimal overhead in transaction processing
-
-### Decentralized Architecture
-- Distributed node network
-- Byzantine fault tolerance
-- Peer-to-peer communication protocols
-
-### Security First
-- Memory-safe implementation
-- Cryptographic verification at every layer
-- Regular security audits
-
-## Use Cases
-
-### Data Validation
-Nodes in the Rust Subnet validate and process data submissions to ensure integrity and accuracy across the network.
-
-### Transaction Processing
-High-throughput transaction processing for all network operations including:
-- Atom creation
-- Triple formation
-- Signal attestations
-
-### Network Consensus
-Participating in the network's consensus mechanism to maintain a consistent state across all nodes.
-
-## Getting Started
-
-To participate in the Rust Subnet, you can:
-
-1. **Run a Node** - Set up and operate your own subnet node
-2. **Integrate** - Connect your applications to existing subnet infrastructure
-3. **Develop** - Build on top of the subnet's capabilities
-
-## Architecture Overview
-
-The Rust Subnet consists of several key components:
-
-- **Node Software**: Core runtime for subnet participants
-- **Consensus Layer**: Mechanism for achieving network agreement
-- **Data Layer**: Storage and retrieval of network state
-- **Network Layer**: P2P communication protocols
-- **API Layer**: Interfaces for external interactions
-
-## Benefits
-
-### For Node Operators
-- Earn rewards for network participation
-- Contribute to network security
-- Access to network governance
-
-### For Developers
-- High-performance infrastructure
-- Reliable data availability
-- Flexible integration options
-
-### For Users
-- Fast transaction finality
-- Lower costs through efficiency
-- Decentralized trust guarantees
-
-## Next Steps
-
-- [Deep Dive](./deep-dive) - Technical architecture and implementation details
-- [Run a Node](./run-a-node) - Step-by-step guide to operating a subnet node
-- [API Reference](/docs/developer-tools/graphql-api/overview) - Integration documentation
\ No newline at end of file
diff --git a/docs/guides/rust-subnet/run-a-node.md b/docs/guides/rust-subnet/run-a-node.md
deleted file mode 100644
index 6c6ab02..0000000
--- a/docs/guides/rust-subnet/run-a-node.md
+++ /dev/null
@@ -1,284 +0,0 @@