diff --git a/src/SUMMARY.md b/src/SUMMARY.md index faef8049..6bcfd2a8 100644 --- a/src/SUMMARY.md +++ b/src/SUMMARY.md @@ -1,85 +1,46 @@ # Summary -- [Getting Started](./getting_started/getting_started.md) - - [Quick Start](./getting_started/quick_start.md) - - [Introduction](./getting_started/intro.md) - - [Kimap and KNS](./getting_started/kimap.md) - - [Design Philosophy](./getting_started/design_philosophy.md) - - [Installation](./getting_started/install.md) - - [Join the Network](./getting_started/login.md) -- [System Components](./system/system_components.md) - - [Processes](./system/processes_overview.md) - - [Process Semantics](./system/process/processes.md) - - [Capability-Based Security](./system/process/capabilities.md) - - [Startup, Spindown, and Crashes](./system/process/startup.md) - - [Extensions](./system/process/extensions.md) - - [WIT APIs](./system/process/wit_apis.md) - - [Networking Protocol](./system/networking_protocol.md) - - [HTTP Server & Client](./system/http_server_and_client.md) - - [Read+Write to Chain](./system/read_and_write_to_chain.md) - - [Files](./system/files.md) - - [Databases](./system/databases.md) - - [Terminal](./system/terminal.md) -- [Process Standard Library](./process_stdlib/overview.md) -- [Kit: Development Tool**kit**](./kit/kit-dev-toolkit.md) - - [Installation](./kit/install.md) - - [`boot-fake-node`](./kit/boot-fake-node.md) - - [`new`](./kit/new.md) - - [`build`](./kit/build.md) - - [`start-package`](./kit/start-package.md) - - [`publish`](./kit/publish.md) - - [`build-start-package`](./kit/build-start-package.md) - - [`remove-package`](./kit/remove-package.md) - - [`chain`](./kit/chain.md) - - [`dev-ui`](./kit/dev-ui.md) - - [`inject-message`](./kit/inject-message.md) - - [`run-tests`](./kit/run-tests.md) - - [`connect`](./kit/connect.md) - - [`reset-cache`](./kit/reset-cache.md) - - [`boot-real-node`](./kit/boot-real-node.md) - - [`view-api`](./kit/view-api.md) -- [My First Kinode 
Application](./my_first_app/build_and_deploy_an_app.md) - - [Environment Setup](./my_first_app/chapter_1.md) - - [Sending and Responding to a Message](./my_first_app/chapter_2.md) - - [Messaging with More Complex Data Types](./my_first_app/chapter_3.md) - - [Frontend Time](./my_first_app/chapter_4.md) - - [Sharing with the World](./my_first_app/chapter_5.md) -- [In-Depth Guide: Chess App](./chess_app/chess_app.md) - - [Environment Setup](./chess_app/setup.md) - - [Chess Engine](./chess_app/chess_engine.md) - - [Adding a Frontend](./chess_app/frontend.md) - - [Putting Everything Together](./chess_app/putting_everything_together.md) - - [Extension: Chat](./chess_app/chat.md) -- [Cookbook (Handy Recipes)](./cookbook/cookbook.md) - - [Saving State](./cookbook/save_state.md) - - [Managing Child Processes](./cookbook/manage_child_processes.md) - - [Publishing a Website or Web App](./cookbook/publish_to_web.md) - - [Simple File Transfer Guide](./cookbook/file_transfer.md) - - [Intro to Web UI with File Transfer](./cookbook/file_transfer_ui.md) - - [Writing and Running Scripts](./cookbook/writing_scripts.md) - - [Reading Data from ETH](./cookbook/reading_data_from_eth.md) - - [Writing Data to ETH](./cookbook/writing_data_to_eth.md) - - [Creating and Using Capabilities](./cookbook/creating_and_using_capabilities.md) - - [Managing Contacts](./cookbook/managing_contacts.md) - - [Use ZK proofs with SP1](./cookbook/zk_with_sp1.md) - - [Talking to the Outside World](./cookbook/talking_to_the_outside_world.md) - - [Exporting & Importing Package APIs](./cookbook/package_apis.md) - - [Exporting Workers in Package APIs](./cookbook/package_apis_workers.md) -- [API Reference](./apis/api_reference.md) - - [ETH Provider API](./apis/eth_provider.md) - - [Frontend/UI Development](./apis/frontend_development.md) - - [HTTP API](./apis/http_authentication.md) - - [HTTP Client API](./apis/http_client.md) - - [HTTP Server API](./apis/http_server.md) - - [Kernel API](./apis/kernel.md) - - 
[`kinode.wit`](./apis/kinode_wit.md) - - [KV API](./apis/kv.md) - - [Net API](./apis/net.md) - - [SQLite API](./apis/sqlite.md) - - [Terminal API](./apis/terminal.md) - - [Timer API](./apis/timer.md) - - [VFS API](./apis/vfs.md) - - [WebSocket API](./apis/websocket.md) -- [Hosted Nodes User Guide](./hosted-nodes.md) -- [Audits and Security](./audits-and-security.md) -- [Glossary](./glossary.md) +- Getting Started + - [Quick Start](./getting_started/quick_start.md) - Step-by-step guide for running two fake Kinodes and building a simple chat application between them. + - [Introduction](./getting_started/intro.md) - Overview of Kinode OS, its core primitives for P2P app development, and system architecture. + - Processes + - [Process Semantics](./system/process/processes.md) - Core concepts of Kinode processes, messaging between them, and state management. + - [Capability-Based Security](./system/process/capabilities.md) - Security model using capability tokens for process permissions and access control. + - [Startup, Spindown, and Crashes](./system/process/startup.md) - Process lifecycle management including initialization, state persistence, and exit behaviors. + - [WIT APIs](./system/process/wit_apis.md) - How processes use WebAssembly Interface Types for cross-language API definitions. +- Kit: Development Tool**kit** + - [`boot-fake-node`](./kit/boot-fake-node.md) - Starts a development node on a fake chain for testing, with pre-seeded contracts. + - [`new`](./kit/new.md) - Creates a new Kinode package from templates, supporting different languages and UI options. + - [`build`](./kit/build.md) - Compiles package processes to WebAssembly and prepares deployment artifacts. + - [`start-package`](./kit/start-package.md) - Installs and launches a built package on a target Kinode. + - [`publish`](./kit/publish.md) - Publishes or updates package entries in the Kimap distribution system. 
+ - [`build-start-package`](./kit/build-start-package.md) - Combines the build and start-package steps for quicker deployment. + - [`remove-package`](./kit/remove-package.md) - Uninstalls a package from a running Kinode. + - [`chain`](./kit/chain.md) - Launches a local blockchain with Foundry's Anvil for development. + - [`dev-ui`](./kit/dev-ui.md) - Starts a development server with hot reloading for UI development. + - [`inject-message`](./kit/inject-message.md) - Sends test messages to processes for development and debugging. + - [`run-tests`](./kit/run-tests.md) - Executes test suites defined in TOML configuration files. + - [`connect`](./kit/connect.md) - Creates SSH tunnels to remote nodes for development access. + - [`boot-real-node`](./kit/boot-real-node.md) - Launches a node connected to the live Kinode network. +- My First Kinode Application + - [Environment Setup](./my_first_app/chapter_1.md) - Set up the development environment, create a package from a template, and explore its structure. + - [Sending and Responding to a Message](./my_first_app/chapter_2.md) - Learn about process initialization, message sending, and response handling. + - [Messaging with More Complex Data Types](./my_first_app/chapter_3.md) - Implement complex data types with Serde and handle the process lifecycle. + - [Frontend Time](./my_first_app/chapter_4.md) - Add HTTP handling, serve a static frontend, and create homepage widgets. + - [Sharing with the World](./my_first_app/chapter_5.md) - Package and publish your application to the Kinode network. +- Cookbook (Handy Recipes) + - [Saving State](./cookbook/save_state.md) - Use built-in state persistence functions to maintain process data between restarts. + - [Managing Child Processes](./cookbook/manage_child_processes.md) - Create and manage child processes for task isolation and parallel execution. + - [Publishing a Website or Web App](./cookbook/publish_to_web.md) - Serve static assets and web applications through HTTP server bindings. 
+ - [Simple File Transfer Guide](./cookbook/file_transfer.md) - Implement file transfer functionality using VFS and worker processes. + - [Intro to Web UI with File Transfer](./cookbook/file_transfer_ui.md) - Build a React-based UI for the file transfer system using Vite. + - [Writing and Running Scripts](./cookbook/writing_scripts.md) - Create and use command-line scripts as processes with arguments. + - [Reading Data from ETH](./cookbook/reading_data_from_eth.md) - Query Ethereum blockchain data using the provider system. + - [Writing Data to ETH](./cookbook/writing_data_to_eth.md) - Write to Ethereum using contracts and transaction signing. + - [Creating and Using Capabilities](./cookbook/creating_and_using_capabilities.md) - Implement custom capability checking for process security. + - [Managing Contacts](./cookbook/managing_contacts.md) - Use the contacts system primitive to manage node identities. + - [Talking to the Outside World](./cookbook/talking_to_the_outside_world.md) - Communicate with external systems and resources. + - [Exporting & Importing Package APIs](./cookbook/package_apis.md) - Share and use package APIs using WIT interfaces. + - [Exporting Workers in Package APIs](./cookbook/package_apis_workers.md) - Create reusable worker processes in package APIs. +- [Hosted Nodes User Guide](./hosted-nodes.md) - Managing hosted Kinodes, accessing terminals via SSH, and development workflows. +- [Glossary](./glossary.md) - Definitions and explanations of key Kinode technical terms and concepts. diff --git a/src/apis/api_reference.md b/src/apis/api_reference.md deleted file mode 100644 index 83557906..00000000 --- a/src/apis/api_reference.md +++ /dev/null @@ -1,7 +0,0 @@ -# APIs Overview - -The APIs documented in this section refer to Kinode runtime modules. -Specifically, they are the patterns of Requests and Responses that an app can use to interact with these modules. - -**Note: App developers usually should not use these APIs directly. 
-Most standard use-cases are better served by using functions in the [Process Standard Library](../process_stdlib/overview.md).** diff --git a/src/apis/eth_provider.md b/src/apis/eth_provider.md deleted file mode 100644 index a0ee3ee7..00000000 --- a/src/apis/eth_provider.md +++ /dev/null @@ -1,239 +0,0 @@ -# ETH Provider API - -**Note: Most processes will not use this API directly. Instead, they will use the `eth` portion of the [`process_lib`](../process_stdlib/overview.md) library, which papers over this API and provides a set of types and functions which are much easier to natively use. -This is mostly useful for re-implementing this module in a different client or performing niche actions unsupported by the library.** - -Processes can send two kinds of requests to `eth:distro:sys`: `EthAction` and `EthConfigAction`. -The former only requires the capability to message the process, while the latter requires the root capability issued by `eth:distro:sys`. -Most processes will only need to send `EthAction` requests. - -```rust -/// The Action and Request type that can be made to eth:distro:sys. Any process with messaging -/// capabilities can send this action to the eth provider. -/// -/// Will be serialized and deserialized using [`serde_json::to_vec`] and [`serde_json::from_slice`]. -#[derive(Clone, Debug, Serialize, Deserialize)] -pub enum EthAction { - /// Subscribe to logs with a custom filter. ID is to be used to unsubscribe. - /// Logs come in as JSON value which can be parsed to [`alloy::rpc::types::eth::pubsub::SubscriptionResult`] - SubscribeLogs { - sub_id: u64, - chain_id: u64, - kind: SubscriptionKind, - params: serde_json::Value, - }, - /// Kill a SubscribeLogs subscription of a given ID, to stop getting updates. - UnsubscribeLogs(u64), - /// Raw request. Used by kinode_process_lib. - Request { - chain_id: u64, - method: String, - params: serde_json::Value, - }, -} - -/// Subscription kind. 
Pulled directly from alloy (https://github.com/alloy-rs/alloy). -/// Why? Because alloy is not yet 1.0 and the types in this interface must be stable. -/// If alloy SubscriptionKind changes, we can implement a transition function in runtime -/// for this type. -#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash, Serialize, Deserialize)] -#[serde(deny_unknown_fields)] -#[serde(rename_all = "camelCase")] -pub enum SubscriptionKind { - /// New block headers subscription. - /// - /// Fires a notification each time a new header is appended to the chain, including chain - /// reorganizations. In case of a chain reorganization the subscription will emit all new - /// headers for the new chain. Therefore the subscription can emit multiple headers on the same - /// height. - NewHeads, - /// Logs subscription. - /// - /// Returns logs that are included in new imported blocks and match the given filter criteria. - /// In case of a chain reorganization previous sent logs that are on the old chain will be - /// resent with the removed property set to true. Logs from transactions that ended up in the - /// new chain are emitted. Therefore, a subscription can emit logs for the same transaction - /// multiple times. - Logs, - /// New Pending Transactions subscription. - /// - /// Returns the hash or full tx for all transactions that are added to the pending state and - /// are signed with a key that is available in the node. When a transaction that was - /// previously part of the canonical chain isn't part of the new canonical chain after a - /// reorganization its again emitted. - NewPendingTransactions, - /// Node syncing status subscription. - /// - /// Indicates when the node starts or stops synchronizing. The result can either be a boolean - /// indicating that the synchronization has started (true), finished (false) or an object with - /// various progress indicators. 
- Syncing, -} -``` - -The `Request` containing this action should always expect a response, since every action variant triggers one and relies on it to be useful. -The ETH provider will respond with the following type: - -```rust -/// The Response body type which a process will get from requesting -/// with an [`EthAction`] will be of this type, serialized and deserialized -/// using [`serde_json::to_vec`] and [`serde_json::from_slice`]. -/// -/// In the case of an [`EthAction::SubscribeLogs`] request, the response will indicate if -/// the subscription was successfully created or not. -#[derive(Debug, Serialize, Deserialize, Clone)] -pub enum EthResponse { - Ok, - Response(serde_json::Value), - Err(EthError), -} - -#[derive(Clone, Debug, Serialize, Deserialize)] -pub enum EthError { - /// RPC provider returned an error. - /// Can be parsed to [`alloy::rpc::json_rpc::ErrorPayload`] - RpcError(serde_json::Value), - /// provider module cannot parse message - MalformedRequest, - /// No RPC provider for the chain - NoRpcForChain, - /// Subscription closed - SubscriptionClosed(u64), - /// Invalid method - InvalidMethod(String), - /// Invalid parameters - InvalidParams, - /// Permission denied - PermissionDenied, - /// RPC timed out - RpcTimeout, - /// RPC gave garbage back - RpcMalformedResponse, -} -``` - -The `EthAction::SubscribeLogs` request will receive a response of `EthResponse::Ok` if the subscription was successfully created, or `EthResponse::Err(EthError)` if it was not. -Then, after the subscription is successfully created, the process will receive *Requests* from `eth:distro:sys` containing subscription updates. -That request will look like this: - -```rust -/// Incoming `Request` containing subscription updates or errors that processes will receive. -/// Can deserialize all incoming requests from eth:distro:sys to this type. -/// -/// Will be serialized and deserialized using `serde_json::to_vec` and `serde_json::from_slice`. 
-pub type EthSubResult = Result<EthSub, EthSubError>; - -/// Incoming type for successful subscription updates. -#[derive(Clone, Debug, Serialize, Deserialize)] -pub struct EthSub { - pub id: u64, - /// can be parsed to [`alloy::rpc::types::eth::pubsub::SubscriptionResult`] - pub result: serde_json::Value, -} - -/// If your subscription is closed unexpectedly, you will receive this. -#[derive(Clone, Debug, Serialize, Deserialize)] -pub struct EthSubError { - pub id: u64, - pub error: String, -} -``` - -Again, for most processes, this is the entire API. -The `eth` portion of the `process_lib` library will handle the serialization and deserialization of these types and provide a set of functions and types that are much easier to use. - -### Config API - -If a process has the `root` capability from `eth:distro:sys`, it can send `EthConfigAction` requests. -These actions are used to adjust the underlying providers and relays used by the module, and its settings regarding acting as a relayer for other nodes (public/private/granular etc). - -The configuration of the ETH provider is persisted across two files named `.eth_providers` and `.eth_access_settings` in the node's home directory. `.eth_access_settings` is only created if the configuration is set past the default (private, empty allow/deny lists). - -```rust -/// The action type used for configuring eth:distro:sys. Only processes which have the "root" -/// capability from eth:distro:sys can successfully send this action. -#[derive(Debug, Serialize, Deserialize)] -pub enum EthConfigAction { - /// Add a new provider to the list of providers. - AddProvider(ProviderConfig), - /// Remove a provider from the list of providers. - /// The tuple is (chain_id, node_id/rpc_url). 
- RemoveProvider((u64, String)), - /// make our provider public - SetPublic, - /// make our provider not-public - SetPrivate, - /// add node to whitelist on a provider - AllowNode(String), - /// remove node from whitelist on a provider - UnallowNode(String), - /// add node to blacklist on a provider - DenyNode(String), - /// remove node from blacklist on a provider - UndenyNode(String), - /// Set the list of providers to a new list. - /// Replaces all existing saved provider configs. - SetProviders(SavedConfigs), - /// Get the list of current providers as a [`SavedConfigs`] object. - GetProviders, - /// Get the current access settings. - GetAccessSettings, - /// Get the state of calls and subscriptions. Used for debugging. - GetState, -} - -pub type SavedConfigs = HashSet<ProviderConfig>; - -/// Provider config. Can currently be a node or a ws provider instance. -#[derive(Clone, Debug, Deserialize, Serialize, Hash, Eq, PartialEq)] -pub struct ProviderConfig { - pub chain_id: u64, - pub trusted: bool, - pub provider: NodeOrRpcUrl, -} - -#[derive(Clone, Debug, Deserialize, Serialize, Hash, Eq, PartialEq)] -pub enum NodeOrRpcUrl { - Node { - kns_update: crate::core::KnsUpdate, - use_as_provider: bool, // false for just-routers inside saved config - }, - RpcUrl(String), -} -``` - -`EthConfigAction` requests should always expect a response. The response body will look like this: -```rust -/// Response type from an [`EthConfigAction`] request. -#[derive(Debug, Serialize, Deserialize)] -pub enum EthConfigResponse { - Ok, - /// Response from a GetProviders request. - /// Note the [`crate::core::KnsUpdate`] will only have the correct `name` field. - /// The rest of the Update is not saved in this module. - Providers(SavedConfigs), - /// Response from a GetAccessSettings request. 
- AccessSettings(AccessSettings), - /// Permission denied due to missing capability - PermissionDenied, - /// Response from a GetState request - State { - active_subscriptions: HashMap<ProcessId, HashMap<u64, Option<String>>>, // None if local, Some(node_provider_name) if remote - outstanding_requests: HashSet<u64>, - }, -} - -/// Settings for our ETH provider -#[derive(Clone, Debug, Deserialize, Serialize)] -pub struct AccessSettings { - pub public: bool, // whether or not other nodes can access through us - pub allow: HashSet<String>, // whitelist for access (only used if public == false) - pub deny: HashSet<String>, // blacklist for access (always used) -} -``` - -A successful `GetProviders` request will receive a response of `EthConfigResponse::Providers(SavedConfigs)`, and a successful `GetAccessSettings` request will receive a response of `EthConfigResponse::AccessSettings(AccessSettings)`. -The other requests will receive a response of `EthConfigResponse::Ok` if they were successful, or `EthConfigResponse::PermissionDenied` if they were not. -
-All of these types are serialized to a JSON string via `serde_json` and stored as bytes in the request/response body. -[The source code for this API can be found in the `eth` section of the Kinode runtime library.](https://github.com/kinode-dao/kinode/blob/main/lib/src/eth.rs) diff --git a/src/apis/frontend_development.md b/src/apis/frontend_development.md deleted file mode 100644 index 9110ca94..00000000 --- a/src/apis/frontend_development.md +++ /dev/null @@ -1,110 +0,0 @@ -# Frontend/UI Development - -Kinode can easily serve any webpage or web app developed with normal libraries and frameworks. - -There are some specific endpoints, JS libraries, and `process_lib` functions that are helpful for doing frontend development. - -There are also some important considerations and "gotchas" that can happen when trying to do frontend development. - -Kinode can serve a website or web app just like any HTTP webserver. 
-The preferred method is to upload your static assets on install by placing them in the `pkg` folder. -By convention, `kit` bundles these assets into a directory inside `pkg` called `ui`, but you can call it anything. -You **must** place your `index.html` in the top-level folder. -The structure should look like this: - -``` -my-package -└── pkg - └── ui (can have any name) - ├── assets (can have any name) - └── index.html -``` - -## /our & /our.js - -Every node has both `/our` and `/our.js` endpoints. -`/our` returns the node's ID as a string like `'my-node'`. -`/our.js` returns a JS script that sets `window.our = { node: 'my-node' }`. -By convention, you can then easily set `window.our.process` either in your UI code or from a process-specific endpoint. -The frontend would then have `window.our` set for use in your code. - -## Serving a Website - -The simplest way to serve a UI is using the `serve_ui` function from `process_lib`: - -``` -serve_ui(&our, "ui", true, false, vec!["/"]).unwrap(); -``` - -This will serve the `index.html` in the specified folder (here, `"ui"`) at the home path of your process. -If your process is called `my-process:my-package:template.os` and your Kinode is running locally on port 8080, -then the UI will be served at `http://localhost:8080/my-process:my-package:template.os`. - -`serve_ui` takes five arguments: our `&Address`, the name of the folder that contains your frontend, whether the UI requires authentication, whether the UI is local-only, and the path(s) on which to serve the UI (usually `["/"]`). - -## Development without kit - -The `kit` UI template uses the React framework compiled with Vite. -But you can use any UI framework as long as it generates an `index.html` and associated assets. -To make development easy, your setup should support a base URL and http proxying. - -### Base URL - -All processes on Kinode are namespaced by process name in the standard format of `process:package:publisher`. 
-So if your process is called `my-process:my-package:template.os`, then your process can only bind HTTP paths that start with `/my-process:my-package:template.os`. -Your UI should be developed and compiled with the base URL set to the appropriate process path. - -#### Vite - -In `vite.config.ts` (or `.js`) set `base` to your full process name, i.e. -``` -base: '/my-process:my-package:template.os' -``` - -#### Create React App - -In `package.json` set `homepage` to your full process name, i.e. -``` -homepage: '/my-process:my-package:template.os' -``` - -### Proxying HTTP Requests - -In UI development, it is very useful to proxy HTTP requests from the in-dev UI to your Kinode. -Below are some examples. - -#### Vite - -Follow the `server` entry in the [kit template](https://github.com/kinode-dao/kit/blob/master/src/new/templates/ui/chat/ui/vite.config.ts#L31-L47) in your own `vite.config.ts`. - -#### Create React App - -In `package.json` set `proxy` to your Kinode's URL, i.e. -``` -proxy: 'http://localhost:8080' -``` - -### Making HTTP Requests - -When making HTTP requests in your UI, make sure to prepend your base URL to the request. -For example, if your base URL is `/my-process:my-package:template.os`, then a `fetch` request to `/my-endpoint` would look like this: - -``` -fetch('/my-process:my-package:template.os/my-endpoint') -``` - -## Local Development and "gotchas" - -When developing a frontend locally, particularly with a framework like React, it is helpful to proxy HTTP requests through to your node. -The `vite.config.ts` provided in the `kit` template has code to handle this proxying. - -It is important to remember that the frontend will always have the process name as the first part of the HTTP path, -so all HTTP requests and file sources should start with the process name. -Many frontend JavaScript frameworks will handle this by default if you set the `base` or `baseUrl` properly. 
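The process-name prefix can also be centralized in a small helper instead of hand-writing the base path into every `fetch` call. A minimal sketch (the process ID `my-process:my-package:template.os` is just the example name used above, not a real process):

```typescript
// Hypothetical helper: prepend the process base path so every request
// resolves under /process:package:publisher, as required by http-server.
const BASE_URL = "/my-process:my-package:template.os"; // example process ID

function apiUrl(endpoint: string): string {
  // Strip any leading slashes so the joined path has exactly one separator.
  return `${BASE_URL}/${endpoint.replace(/^\/+/, "")}`;
}

// fetch(apiUrl("/messages")) requests /my-process:my-package:template.os/messages
```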
- -In development, websocket connections can be more annoying to proxy, so it is often easier to simply hardcode the URL during development. -See your framework documentation for how to check if you are in dev or prod. -The `kit` template already handles this for you. - -Developing against a remote node is simple: just change the proxy target in `vite.config.ts` to the URL of your node. -By default, the template will target `http://localhost:8080`. diff --git a/src/apis/http_authentication.md b/src/apis/http_authentication.md deleted file mode 100644 index 1ec48650..00000000 --- a/src/apis/http_authentication.md +++ /dev/null @@ -1,137 +0,0 @@ -# HTTP API - -Incoming HTTP requests are handled by a Rust `warp` server in the core `http-server:distro:sys` process. -This process handles binding (registering) routes, simple JWT-based authentication, and serving a `/login` page if auth is missing. - -## Binding (Registering) HTTP Paths - -Any process that you build can bind (register) any number of HTTP paths with `http-server`. -Every path that you bind will be automatically prepended with the current process' ID. -For example, bind the route `/messages` within a process called `main:my-package:myname.os` like so: - -```rust -use kinode_process_lib::{http::bind_http_path}; - -bind_http_path("/messages", true, false).unwrap(); -``` - -Now, any HTTP requests to your node at `/main:my-package:myname.os/messages` will be routed to your process. - -The other two parameters to `bind_http_path` are `authenticated: bool` and `local_only: bool`. -`authenticated` means that `http-server` will check for an auth cookie (set at login/registration), and `local_only` means that `http-server` will only allow requests that come from `localhost`. - -Incoming HTTP requests will come via `http-server` and have both a `body` and a `lazy_load_blob`. 
-The `lazy_load_blob` is the HTTP request body itself, and the `body` is an `IncomingHttpRequest`: - -```rust -pub struct IncomingHttpRequest { - /// will parse to SocketAddr - pub source_socket_addr: Option<String>, - /// will parse to http::Method - pub method: String, - /// will parse to url::Url - pub url: String, - /// the matching path that was bound - pub bound_path: String, - /// will parse to http::HeaderMap - pub headers: HashMap<String, String>, - pub url_params: HashMap<String, String>, - pub query_params: HashMap<String, String>, -} -``` - -Note that `url` is the host and full path of the original HTTP request that came in. -`bound_path` is the matching path that was originally bound in `http-server`. - -## Handling HTTP Requests - -Usually, you will want to: -1) determine if an incoming request is an HTTP request. -2) figure out what kind of `IncomingHttpRequest` it is. -3) handle the request based on the path and method. - -Here is an example from the `kit` UI-enabled chat app template that handles both `POST` and `GET` requests to the `/messages` path: - -```rust -fn handle_http_server_request( - our: &Address, - message_archive: &mut MessageArchive, - source: &Address, - body: &[u8], - our_channel_id: &mut u32, -) -> anyhow::Result<()> { - let Ok(server_request) = serde_json::from_slice::<HttpServerRequest>(body) else { - // Fail silently if we can't parse the request - return Ok(()); - }; - - match server_request { - - // IMPORTANT BIT: - - HttpServerRequest::Http(IncomingHttpRequest { method, url, .. 
}) => { - // Check the path - if url.ends_with(&format!("{}{}", our.process.to_string(), "/messages")) { - // Match on the HTTP method - match method.as_str() { - // Get all messages - "GET" => { - let mut headers = HashMap::new(); - headers.insert("Content-Type".to_string(), "application/json".to_string()); - - send_response( - StatusCode::OK, - Some(headers), - serde_json::to_vec(&ChatResponse::History { - messages: message_archive.clone(), - }) - .unwrap(), - )?; - } - // Send a message - "POST" => { - print_to_terminal(0, "1"); - let Some(blob) = get_blob() else { - return Ok(()); - }; - print_to_terminal(0, "2"); - handle_chat_request( - our, - message_archive, - our_channel_id, - source, - &blob.bytes, - true, - )?; - - // Send an http response via the http server - send_response(StatusCode::CREATED, None, vec![])?; - } - _ => { - // Method not allowed - send_response(StatusCode::METHOD_NOT_ALLOWED, None, vec![])?; - } - } - } - } - - _ => {} - }; - - Ok(()) -} -``` - -`send_response` is a `process_lib` function that sends an HTTP response. The function signature is as follows: - -```rust -pub fn send_response( - status: StatusCode, - headers: Option<HashMap<String, String>>, - body: Vec<u8>, -) -> anyhow::Result<()> -``` - -## App-Specific Authentication - -COMING SOON diff --git a/src/apis/http_client.md b/src/apis/http_client.md deleted file mode 100644 index 8aa1582f..00000000 --- a/src/apis/http_client.md +++ /dev/null @@ -1,117 +0,0 @@ -# HTTP Client API - -See also: [docs.rs for HTTP Client part of `process_lib`](https://docs.rs/kinode_process_lib/latest/kinode_process_lib/http/index.html). - -**Note: Most processes will not use this API directly. Instead, they will use the [`process_lib`](../process_stdlib/overview.md) library, which papers over this API and provides a set of types and functions which are much easier to natively use. 
This is mostly useful for re-implementing this module in a different client or performing niche actions unsupported by the library.** - -The HTTP client is used for sending and receiving HTTP requests and responses. -It is also used for connecting to a websocket endpoint as a client. -From a process, you may send an `HttpClientAction` to the `http-client:distro:sys` process. -The action must be serialized to JSON and sent in the `body` of a request. -`HttpClientAction` is an `enum` type that includes both HTTP and websocket actions. - -```rust -/// Request type sent to the `http-client:distro:sys` service. -/// -/// Always serialized/deserialized as JSON. -#[derive(Clone, Debug, Serialize, Deserialize)] -pub enum HttpClientAction { - Http(OutgoingHttpRequest), - WebSocketOpen { - url: String, - headers: HashMap<String, String>, - channel_id: u32, - }, - WebSocketPush { - channel_id: u32, - message_type: WsMessageType, - }, - WebSocketClose { - channel_id: u32, - }, -} -``` - -The websocket actions `WebSocketOpen`, `WebSocketPush`, and `WebSocketClose` all require a `channel_id`. -The `channel_id` is used to identify the connection, and must be unique for each connection from a given process. -Two or more connections can have the same `channel_id` if they are from different processes. -`OutgoingHttpRequest` is used to send an HTTP request. - -```rust -/// HTTP Request type that can be shared over Wasm boundary to apps. -/// This is the one you send to the `http-client:distro:sys` service. -/// -/// BODY is stored in the lazy_load_blob, as bytes -/// -/// TIMEOUT is stored in the message expect_response value -#[derive(Clone, Debug, Serialize, Deserialize)] -pub struct OutgoingHttpRequest { - /// must parse to [`http::Method`] - pub method: String, - /// must parse to [`http::Version`] - pub version: Option<String>, - /// must parse to [`url::Url`] - pub url: String, - pub headers: HashMap<String, String>, -} -``` - -All requests to the HTTP client will receive a response of `Result<HttpClientResponse, HttpClientError>` serialized to JSON. 
-The process can await or ignore this response, although the desired information will be in the `HttpClientResponse` if the request was successful. -An HTTP request will have an `HttpResponse` defined in the [`http-server`](./http_server.md) module. -A websocket request (open, push, close) will simply respond with a `HttpClientResponse::WebSocketAck`. - -```rust -/// Response type received from the `http-client:distro:sys` service after -/// sending a successful [`HttpClientAction`] to it. -#[derive(Debug, Serialize, Deserialize)] -pub enum HttpClientResponse { - Http(HttpResponse), - WebSocketAck, -} -``` - -```rust -#[derive(Error, Debug, Serialize, Deserialize)] -pub enum HttpClientError { - // HTTP errors - #[error("http-client: request is not valid HttpClientRequest: {req}.")] - BadRequest { req: String }, - #[error("http-client: http method not supported: {method}.")] - BadMethod { method: String }, - #[error("http-client: url could not be parsed: {url}.")] - BadUrl { url: String }, - #[error("http-client: http version not supported: {version}.")] - BadVersion { version: String }, - #[error("http-client: failed to execute request {error}.")] - RequestFailed { error: String }, - - // WebSocket errors - #[error("http-client: failed to open connection {url}.")] - WsOpenFailed { url: String }, - #[error("http-client: failed to send message {req}.")] - WsPushFailed { req: String }, - #[error("http-client: failed to close connection {channel_id}.")] - WsCloseFailed { channel_id: u32 }, -} -``` - -The HTTP client can also receive external websocket messages over an active client connection. -These incoming websocket messages are processed and sent as `HttpClientRequest` to the process that originally opened the websocket. -The message itself is accessible with `get_blob()`. - -```rust -/// Request that comes from an open WebSocket client connection in the -/// `http-client:distro:sys` service. 
Be prepared to receive these after -/// using a [`HttpClientAction::WebSocketOpen`] to open a connection. -#[derive(Clone, Copy, Debug, Serialize, Deserialize)] -pub enum HttpClientRequest { - WebSocketPush { - channel_id: u32, - message_type: WsMessageType, - }, - WebSocketClose { - channel_id: u32, - }, -} -``` diff --git a/src/apis/http_server.md b/src/apis/http_server.md deleted file mode 100644 index 4b3402fb..00000000 --- a/src/apis/http_server.md +++ /dev/null @@ -1,205 +0,0 @@ -# HTTP Server API - -See also: [docs.rs for HTTP Server part of `process_lib`](https://docs.rs/kinode_process_lib/latest/kinode_process_lib/http/index.html). - -**Note: Most processes will not use this API directly. Instead, they will use the [`process_lib`](../process_stdlib/overview.md) library, which papers over this API and provides a set of types and functions which are much easier to natively use. This is mostly useful for re-implementing this module in a different client or performing niche actions unsupported by the library.** - -The HTTP server is used by sending and receiving requests and responses. -From a process, you may send an `HttpServerAction` to the `http-server:distro:sys` process. - -```rust -/// Request type sent to `http-server:distro:sys` in order to configure it. -/// -/// If a response is expected, all actions will return a Response -/// with the shape `Result<(), HttpServerActionError>` serialized to JSON. -#[derive(Clone, Debug, Serialize, Deserialize)] -pub enum HttpServerAction { - /// Bind expects a lazy_load_blob if and only if `cache` is TRUE. The lazy_load_blob should - /// be the static file to serve at this path. - Bind { - path: String, - /// Set whether the HTTP request needs a valid login cookie, AKA, whether - /// the user needs to be logged in to access this path. - authenticated: bool, - /// Set whether requests can be fielded from anywhere, or only the loopback address. 
- local_only: bool, - /// Set whether to bind the lazy_load_blob statically to this path. That is, take the - /// lazy_load_blob bytes and serve them as the response to any request to this path. - cache: bool, - }, - /// SecureBind expects a lazy_load_blob if and only if `cache` is TRUE. The lazy_load_blob should - /// be the static file to serve at this path. - /// - /// SecureBind is the same as Bind, except that it forces requests to be made from - /// the unique subdomain of the process that bound the path. These requests are - /// *always* authenticated, and *never* local_only. The purpose of SecureBind is to - /// serve elements of an app frontend or API in an exclusive manner, such that other - /// apps installed on this node cannot access them. Since the subdomain is unique, it - /// will require the user to be logged in separately to the general domain authentication. - SecureBind { - path: String, - /// Set whether to bind the lazy_load_blob statically to this path. That is, take the - /// lazy_load_blob bytes and serve them as the response to any request to this path. - cache: bool, - }, - /// Unbind a previously-bound HTTP path - Unbind { path: String }, - /// Bind a path to receive incoming WebSocket connections. - /// Doesn't need a cache since does not serve assets. - WebSocketBind { - path: String, - authenticated: bool, - extension: bool, - }, - /// SecureBind is the same as Bind, except that it forces new connections to be made - /// from the unique subdomain of the process that bound the path. These are *always* - /// authenticated. Since the subdomain is unique, it will require the user to be - /// logged in separately to the general domain authentication. - WebSocketSecureBind { path: String, extension: bool }, - /// Unbind a previously-bound WebSocket path - WebSocketUnbind { path: String }, - /// Processes will RECEIVE this kind of request when a client connects to them. 
- /// If a process does not want this websocket open, it should issue a *request* - /// containing a [`HttpServerAction::WebSocketClose`] message and this channel ID. - WebSocketOpen { path: String, channel_id: u32 }, - /// When sent, expects a lazy_load_blob containing the WebSocket message bytes to send. - WebSocketPush { - channel_id: u32, - message_type: WsMessageType, - }, - /// When sent, expects a `lazy_load_blob` containing the WebSocket message bytes to send. - /// Modifies the `lazy_load_blob` by placing into `WebSocketExtPushData` with id taken from - /// this `KernelMessage` and `kinode_message_type` set to `desired_reply_type`. - WebSocketExtPushOutgoing { - channel_id: u32, - message_type: WsMessageType, - desired_reply_type: MessageType, - }, - /// For communicating with the ext. - /// Kinode's http-server sends this to the ext after receiving `WebSocketExtPushOutgoing`. - /// Upon receiving reply with this type from ext, http-server parses, setting: - /// * id as given, - /// * message type as given (Request or Response), - /// * body as HttpServerRequest::WebSocketPush, - /// * blob as given. - WebSocketExtPushData { - id: u64, - kinode_message_type: MessageType, - blob: Vec<u8>, - }, - /// Sending will close a socket the process controls. - WebSocketClose(u32), -} - -/// The possible message types for [`HttpServerRequest::WebSocketPush`]. -/// Ping and Pong are limited to 125 bytes by the WebSockets protocol. -/// Text will be sent as a Text frame, with the lazy_load_blob bytes -/// being the UTF-8 encoding of the string. Binary will be sent as a -/// Binary frame containing the unmodified lazy_load_blob bytes. -#[derive(Clone, Copy, Debug, PartialEq, Serialize, Deserialize)] -pub enum WsMessageType { - Text, - Binary, - Ping, - Pong, - Close, -} -``` - -This action type must be serialized to JSON and placed in the `body` of a request to `http-server:distro:sys`.
-For actions that take additional data, such as `Bind` and `WebSocketPush`, it is placed in the `lazy_load_blob` of that request. - -After handling such a request, the HTTP server will always give a response of the shape `Result<(), HttpServerError>`, also serialized to JSON. This can be ignored, or awaited and handled. - -```rust -/// Part of the Response type issued by `http-server:distro:sys` -#[derive(Error, Debug, Serialize, Deserialize)] -pub enum HttpServerError { - #[error("request could not be parsed to HttpServerAction: {req}.")] - BadRequest { req: String }, - #[error("action expected blob")] - NoBlob, - #[error("path binding error: {error}")] - PathBindError { error: String }, - #[error("WebSocket error: {error}")] - WebSocketPushError { error: String }, -} -``` - -Certain actions will cause the HTTP server to send requests to the process in the future. -If a process uses `Bind` or `SecureBind`, that process will need to field future requests from the HTTP server. The server will handle incoming HTTP protocol messages to that path by sending an `HttpServerRequest` to the process which performed the binding, and will expect a response that it can then send to the client. - -**Note: Paths bound using the HTTP server are *always* prefixed by the ProcessId of the process that bound them.** - -**Note 2: If a process creates a static binding by setting `cache` to `true`, the HTTP server will serve whatever bytes were in the accompanying `lazy_load_blob` to all GET requests on that path.** - -If a process uses `WebSocketBind` or `WebSocketSecureBind`, future WebSocket connections to that path will be sent to the process, which is expected to issue a response that can then be sent to the client. - -Bindings can be removed using `Unbind` and `WebSocketUnbind` actions. -Note that the HTTP server module will persist bindings until the node itself is restarted (and no later), so unbinding paths is usually not necessary unless cleaning up an old static resource. 
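Because every bound path is served under a prefix equal to the binding process's `ProcessId`, frontend code has to construct URLs with that prefix in mind. A minimal std-only sketch of the resulting URL shape (the helper name and example process ID are illustrative, not part of the HTTP server API):

```rust
// Sketch: compute the externally visible URL path for a path bound by a
// process, per the rule that bound paths are always prefixed by the
// binding process's ProcessId. Names and values here are illustrative.
fn bound_path(process_id: &str, path: &str) -> String {
    // Normalize so exactly one '/' separates the ProcessId from the path.
    format!("/{}/{}", process_id, path.trim_start_matches('/'))
}

fn main() {
    // e.g. a process `chat:chat:template.os` binding "/messages" is served at:
    let url = bound_path("chat:chat:template.os", "/messages");
    assert_eq!(url, "/chat:chat:template.os/messages");
    println!("{url}");
}
```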
- -The incoming request, whether the binding is for HTTP or WebSocket, will look like this: -```rust -/// HTTP Request received from the `http-server:distro:sys` service as a -/// result of either an HTTP or WebSocket binding, created via [`HttpServerAction`]. -#[derive(Clone, Debug, Serialize, Deserialize)] -pub enum HttpServerRequest { - Http(IncomingHttpRequest), - /// Processes will receive this kind of request when a client connects to them. - /// If a process does not want this websocket open, they should issue a *request* - /// containing a [`HttpServerAction::WebSocketClose`] message and this channel ID. - WebSocketOpen { - path: String, - channel_id: u32, - }, - /// Processes can both SEND and RECEIVE this kind of request - /// (send as [`HttpServerAction::WebSocketPush`]). - /// When received, will contain the message bytes as lazy_load_blob. - WebSocketPush { - channel_id: u32, - message_type: WsMessageType, - }, - /// Receiving will indicate that the client closed the socket. Can be sent to close - /// from the server-side, as [`type@HttpServerAction::WebSocketClose`]. - WebSocketClose(u32), -} - -/// An HTTP request routed to a process as a result of a binding. -/// -/// BODY is stored in the lazy_load_blob, as bytes. -#[derive(Clone, Debug, Serialize, Deserialize)] -pub struct IncomingHttpRequest { - /// will parse to SocketAddr - pub source_socket_addr: Option<String>, - /// will parse to http::Method - pub method: String, - /// will parse to url::Url - pub url: String, - /// the matching path that was bound - pub bound_path: String, - /// will parse to http::HeaderMap - pub headers: HashMap<String, String>, - pub url_params: HashMap<String, String>, - pub query_params: HashMap<String, String>, -} -``` - -Processes that use the HTTP server should expect to field this request type, serialized to JSON. -The process must issue a response with this structure in the body, serialized to JSON: - -```rust -/// HTTP Response type that can be shared over Wasm boundary to apps.
-/// Respond to [`IncomingHttpRequest`] with this type. -/// -/// BODY is stored in the lazy_load_blob, as bytes -#[derive(Debug, Serialize, Deserialize)] -pub struct HttpResponse { - pub status: u16, - pub headers: HashMap<String, String>, -} -``` - -This response is only required for HTTP requests. -`WebSocketOpen`, `WebSocketPush`, and `WebSocketClose` requests do not require a response. -If a process is meant to send data over an open WebSocket connection, it must issue a `HttpServerAction::WebSocketPush` request with the appropriate `channel_id`. -Find discussion of the `HttpServerAction::WebSocketExt*` requests in the [extensions document](../system/process/extensions.md). diff --git a/src/apis/kernel.md b/src/apis/kernel.md deleted file mode 100644 index d4855748..00000000 --- a/src/apis/kernel.md +++ /dev/null @@ -1,132 +0,0 @@ -# Kernel API - -Generally, userspace applications will not have the capability to message the kernel. -Those that can, such as the app store, have full control over starting and stopping all userspace processes. - -The kernel runtime task accepts one kind of `Request`: -```rust -/// IPC format for requests sent to kernel runtime module -#[derive(Debug, Serialize, Deserialize)] -pub enum KernelCommand { - /// RUNTIME ONLY: used to notify the kernel that booting is complete and - /// all processes have been loaded in from their persisted or bootstrapped state. - Booted, - /// Tell the kernel to install and prepare a new process for execution. - /// The process will not begin execution until the kernel receives a - /// `RunProcess` command with the same `id`. - /// - /// The process that sends this command will be given messaging capabilities - /// for the new process if `public` is false. - /// - /// All capabilities passed into initial_capabilities must be held by the source - /// of this message, or the kernel will discard them (silently for now).
- InitializeProcess { - id: ProcessId, - wasm_bytes_handle: String, - wit_version: Option<u32>, - on_exit: OnExit, - initial_capabilities: HashSet<Capability>, - public: bool, - }, - /// Create an arbitrary capability and grant it to a process. - GrantCapabilities { - target: ProcessId, - capabilities: Vec<Capability>, - }, - /// Drop capabilities. Does nothing if process doesn't have these caps - DropCapabilities { - target: ProcessId, - capabilities: Vec<Capability>, - }, - /// Tell the kernel to run a process that has already been installed. - /// TODO: in the future, this command could be extended to allow for - /// resource provision. - RunProcess(ProcessId), - /// Kill a running process immediately. This may result in the dropping / mishandling of messages! - KillProcess(ProcessId), - /// RUNTIME ONLY: notify the kernel that the runtime is shutting down and it - /// should gracefully stop and persist the running processes. - Shutdown, - /// Ask kernel to produce debugging information - Debug(KernelPrint), -} -``` - -All `KernelCommand`s are sent in the body field of a `Request`, serialized to JSON. -Only `InitializeProcess`, `RunProcess`, and `KillProcess` will give back a `Response`, also serialized to JSON text bytes using `serde_json`: - -```rust -#[derive(Debug, Serialize, Deserialize)] -pub enum KernelResponse { - InitializedProcess, - InitializeProcessError, - StartedProcess, - RunProcessError, - KilledProcess(ProcessId), - Debug(KernelPrintResponse), -} - -#[derive(Debug, Serialize, Deserialize)] -pub enum KernelPrintResponse { - ProcessMap(UserspaceProcessMap), - Process(Option<UserspacePersistedProcess>), - HasCap(Option<bool>), -} - -pub type UserspaceProcessMap = HashMap<ProcessId, UserspacePersistedProcess>; - -#[derive(Clone, Debug, Serialize, Deserialize)] -pub struct UserspacePersistedProcess { - pub wasm_bytes_handle: String, - pub wit_version: Option<u32>, - pub on_exit: OnExit, - pub capabilities: HashSet<Capability>, - pub public: bool, -} -``` - -## `Booted` - -Purely for internal use within the kernel.
-Sent by the kernel, to the kernel, to indicate that all persisted processes have been initialized and are ready to run. - -## `InitializeProcess` - -The first command used to start a new process. -Generally available to apps via the `spawn()` function in the WIT interface. -The `wasm_bytes_handle` is a pointer generated by the [filesystem](../system/files.md) API — it should be a valid `.wasm` file compiled using the [Kinode tooling](../kit/kit-dev-toolkit.md). -The `on_exit` field is an enum that specifies what to do when the process exits or panics. -The `initial_capabilities` field is a set of capabilities that the process will have access to — note that the capabilities are signed by this kernel. -The `public` field specifies whether the process should be visible to other processes *without* needing to grant a messaging capability. - -`InitializeProcess` must be sent with a `lazy_load_blob`. -The blob must be the same .wasm file, in raw bytes, that the `wasm_bytes_handle` points to. - -This will *not* cause the process to begin running. -To do that, send a `RunProcess` command after a successful `InitializeProcess` command. - -## `GrantCapabilities` -This command directly inserts a list of capabilities into another process' state. -While you generally don't want to do this for security reasons, it helps you clean up the "handshake" process by which capabilities must be handed off between two processes before engaging in the business logic. -For instance, if you want a kernel module like `http-server` to be able to message a process back, you do this by directly inserting that `"messaging"` cap into `http-server`'s store. -Only the `app-store`, `terminal`, and `tester` make use of this. - -## `DropCapabilities` -This command removes a list of capabilities from another process' state. -Currently, no app makes use of this, as it is very powerful. - -## `RunProcess` - -Takes a process ID and tells the kernel to call the `init` function.
-The process must have first been initialized with a successful `InitializeProcess`. - -## `KillProcess` - -Takes a process ID and kills it. -This is a dangerous operation as messages queued for the process will be lost. -The process will be removed from the kernel's process table and will no longer be able to receive messages. - -## `Shutdown` - -Send to the kernel in order to gracefully shut down the system. -The runtime must perform this request before exiting in order to see that all processes are properly cleaned up. diff --git a/src/apis/kinode_wit.md b/src/apis/kinode_wit.md deleted file mode 100644 index 5111648b..00000000 --- a/src/apis/kinode_wit.md +++ /dev/null @@ -1,48 +0,0 @@ -# `kinode.wit` - -Throughout this book, readers will see references to [WIT](https://component-model.bytecodealliance.org/design/wit.html), the [WebAssembly Component Model](https://github.com/WebAssembly/component-model). -WIT, or Wasm Interface Type, is a language for describing the types and functions that are available to a WebAssembly module. -In conjunction with the Component Model itself, WIT allows for the creation of WebAssembly modules that can be used as components in a larger system. -This standard has been under development for many years, and while still under construction, it's the perfect tool for building an operating-system-like environment for Wasm apps. - -Kinode uses WIT to present a standard interface for Kinode processes. -This interface is a set of types and functions that are available to all processes. -It also contains functions (well, just a single function: `init()`) that processes must implement in order to compile and run on Kinode. -If one can generate WIT bindings in a language that compiles to Wasm, that language can be used to write Kinode processes. -So far, we've written Kinode processes in Rust, Javascript, Go, and Python. 
- -To see exactly how to use WIT to write Kinode processes, see the [My First App](../my_first_app/chapter_1.md) chapter or the [Chess Tutorial](../chess_app/chess_engine.md). - -To see `kinode.wit` for itself, see the [file in the GitHub repo](https://github.com/kinode-dao/kinode-wit/blob/master/kinode.wit). -Since this interface applies to all processes, it's one of the places in the OS where breaking changes are most likely to make an impact. -To that end, the version of the WIT file that a process uses must be compatible with the version of Kinode on which it runs. -Kinode intends to achieve perfect backwards compatibility upon first major release (1.0.0) of the OS and the WIT file. -After that point, since processes signal the version of the WIT file they use, subsequent updates can be made without breaking existing processes or needing to change the version they use. - -## Types - -[These 15 types](https://github.com/kinode-dao/kinode-wit/blob/758fac1fb144f89c2a486778c62cbea2fb5840ac/kinode.wit#L8-L106) make up the entirety of the shared type system between processes and the kernel. -Most types presented here are implemented in the [process standard library](../process_stdlib/overview.md) for ease of use. - -## Functions - -[These 16 functions](https://github.com/kinode-dao/kinode-wit/blob/758fac1fb144f89c2a486778c62cbea2fb5840ac/kinode.wit#L108-L190) are available to processes. -They are implemented in the kernel. -Again, the process standard library makes it such that these functions often don't need to be directly called in processes, but they are always available. -The functions are generally separated into 4 categories: system utilities, process management, capabilities management, and message I/O. -Future versions of the WIT file will certainly add more functions, but the categories themselves are highly unlikely to change. 
- -System utilities are functions like `print_to_terminal`, whose role is to provide a way for processes to interact with the runtime in an idiosyncratic way. - -Process management functions are used to adjust a process's state in the kernel. -This includes its state-store and its on-exit behavior. -This category is also responsible for functions that give processes the ability to spawn and manage child processes. - -Capabilities management functions relate to the capabilities-based security system imposed by the kernel on processes. -Processes must acquire and manage capabilities in order to perform tasks external to themselves, such as messaging another process or writing to a file. -See the [capabilities overview](../system/process/capabilities.md) for more details. - -Lastly, message I/O functions are used to send and receive messages between processes. -Message-passing is the primary means by which processes communicate not only with themselves, but also with runtime modules which expose all kinds of I/O abilities. -For example, handling an HTTP request involves sending and receiving messages to and from the `http-server:distro:sys` runtime module. -Interacting with this module and others occurs through message I/O. diff --git a/src/apis/kv.md b/src/apis/kv.md deleted file mode 100644 index 186d60ca..00000000 --- a/src/apis/kv.md +++ /dev/null @@ -1,204 +0,0 @@ -### KV API - -Useful helper functions can be found in the [`kinode_process_lib`](../process_stdlib/overview.md). -More discussion of databases in Kinode can be found [here](../system/databases.md). - -#### Creating/Opening a database - -```rust -use kinode_process_lib::kv; - -let kv = kv::open(our.package_id(), "birthdays")?; - -// You can now pass this KV struct as a reference to other functions -``` - -#### Set - -```rust -let key = b"hello"; -let value = b"world"; - -let return_value = kv.set(&key, &value, None)?; -// The third argument None is for tx_id.
-// You can group sets and deletes and commit them later. -``` - -#### Get - -```rust -let key = b"hello"; - -let return_value = kv.get(&key)?; -``` - -#### Delete - -```rust -let key = b"hello"; - -kv.delete(&key, None)?; -``` - -#### Transactions - -```rust -let tx_id = kv.begin_tx()?; - -let key = b"hello"; -let key2 = b"deleteme"; -let value = b"value"; - -kv.set(&key, &value, Some(tx_id))?; -kv.delete(&key2, Some(tx_id))?; - -kv.commit_tx(tx_id)?; -``` - -### API - -```rust -/// Actions are sent to a specific key value database. `db` is the name, -/// `package_id` is the [`PackageId`] that created the database. Capabilities -/// are checked: you can access another process's database if it has given -/// you the read and/or write capability to do so. -#[derive(Clone, Debug, Serialize, Deserialize)] -pub struct KvRequest { - pub package_id: PackageId, - pub db: String, - pub action: KvAction, -} - -/// IPC Action format representing operations that can be performed on the -/// key-value runtime module. These actions are included in a [`KvRequest`] -/// sent to the `kv:distro:sys` runtime module. -#[derive(Clone, Debug, Serialize, Deserialize)] -pub enum KvAction { - /// Opens an existing key-value database or creates a new one if it doesn't exist. - /// Requires `package_id` in [`KvRequest`] to match the package ID of the sender. - /// The sender will own the database and can remove it with [`KvAction::RemoveDb`]. - /// - /// A successful open will respond with [`KvResponse::Ok`]. Any error will be - /// contained in the [`KvResponse::Err`] variant. - Open, - /// Permanently deletes the entire key-value database. - /// Requires `package_id` in [`KvRequest`] to match the package ID of the sender. - /// Only the owner can remove the database. - /// - /// A successful remove will respond with [`KvResponse::Ok`]. Any error will be - /// contained in the [`KvResponse::Err`] variant. - RemoveDb, - /// Sets a value for the specified key in the database.
- /// - /// # Parameters - /// * `key` - The key as a byte vector - /// * `tx_id` - Optional transaction ID if this operation is part of a transaction - /// * blob: [`Vec<u8>`] - Byte vector to store for the key - /// - /// Using this action requires the sender to have the write capability - /// for the database. - /// - /// A successful set will respond with [`KvResponse::Ok`]. Any error will be - /// contained in the [`KvResponse::Err`] variant. - Set { key: Vec<u8>, tx_id: Option<u64> }, - /// Deletes a key-value pair from the database. - /// - /// # Parameters - /// * `key` - The key to delete as a byte vector - /// * `tx_id` - Optional transaction ID if this operation is part of a transaction - /// - /// Using this action requires the sender to have the write capability - /// for the database. - /// - /// A successful delete will respond with [`KvResponse::Ok`]. Any error will be - /// contained in the [`KvResponse::Err`] variant. - Delete { key: Vec<u8>, tx_id: Option<u64> }, - /// Retrieves the value associated with the specified key. - /// - /// # Parameters - /// * The key to look up as a byte vector - /// - /// Using this action requires the sender to have the read capability - /// for the database. - /// - /// A successful get will respond with [`KvResponse::Get`], where the response blob - /// contains the value associated with the key if any. Any error will be - /// contained in the [`KvResponse::Err`] variant. - Get(Vec<u8>), - /// Begins a new transaction for atomic operations. - /// - /// Sending this will prompt a [`KvResponse::BeginTx`] response with the - /// transaction ID. Any error will be contained in the [`KvResponse::Err`] variant. - BeginTx, - /// Commits all operations in the specified transaction. - /// - /// # Parameters - /// * `tx_id` - The ID of the transaction to commit - /// - /// A successful commit will respond with [`KvResponse::Ok`]. Any error will be - /// contained in the [`KvResponse::Err`] variant.
- Commit { tx_id: u64 }, -} - -#[derive(Clone, Debug, Serialize, Deserialize)] -pub enum KvResponse { - /// Indicates successful completion of an operation. - /// Sent in response to actions Open, RemoveDb, Set, Delete, and Commit. - Ok, - /// Returns the transaction ID for a newly created transaction. - /// - /// # Fields - /// * `tx_id` - The ID of the newly created transaction - BeginTx { tx_id: u64 }, - /// Returns the value for the key that was retrieved from the database. - /// - /// # Parameters - /// * The retrieved key as a byte vector - /// * blob: [`Vec<u8>`] - Byte vector associated with the key - Get(Vec<u8>), - /// Indicates an error occurred during the operation. - Err(KvError), -} - -#[derive(Clone, Debug, Serialize, Deserialize, Error)] -pub enum KvError { - #[error("db [{0}, {1}] does not exist")] - NoDb(PackageId, String), - #[error("key not found")] - KeyNotFound, - #[error("no transaction {0} found")] - NoTx(u64), - #[error("no write capability for requested DB")] - NoWriteCap, - #[error("no read capability for requested DB")] - NoReadCap, - #[error("request to open or remove DB with mismatching package ID")] - MismatchingPackageId, - #[error("failed to generate capability for new DB")] - AddCapFailed, - #[error("kv got a malformed request that either failed to deserialize or was missing a required blob")] - MalformedRequest, - #[error("RocksDB internal error: {0}")] - RocksDBError(String), - #[error("IO error: {0}")] - IOError(String), -} - -/// The JSON parameters contained in all capabilities issued by `kv:distro:sys`.
-/// -/// # Fields -/// * `kind` - The kind of capability, either [`KvCapabilityKind::Read`] or [`KvCapabilityKind::Write`] -/// * `db_key` - The database key, a tuple of the [`PackageId`] that created the database and the database name -#[derive(Clone, Debug, Serialize, Deserialize)] -pub struct KvCapabilityParams { - pub kind: KvCapabilityKind, - pub db_key: (PackageId, String), -} - -#[derive(Clone, Debug, Serialize, Deserialize)] -#[serde(rename_all = "lowercase")] -pub enum KvCapabilityKind { - Read, - Write, -} -``` diff --git a/src/apis/net.md b/src/apis/net.md deleted file mode 100644 index df9cc941..00000000 --- a/src/apis/net.md +++ /dev/null @@ -1,113 +0,0 @@ -# Net API - -Most processes will not use this API directly. -Instead, processes will make use of the networking protocol simply by sending messages to processes running on other nodes. -This API is documented, rather, for those who wish to implement their own networking protocol. - -The networking API is implemented in the `net:distro:sys` process. - -For the specific networking protocol, see the [networking protocol](../system/networking_protocol.md) chapter. -This chapter is rather to describe the message-based API that the `net:distro:sys` process exposes. - -`Net`, like all processes and runtime modules, is architected around a main message-receiving loop. -The received `Request`s are handled in one of three ways: - -- If the `target.node` is "our domain", i.e. the domain name of the local node, and the `source.node` is also our domain, the message is parsed and treated as either a debugging command or one of the `NetActions` enum. - -- If the `target.node` is our domain, but the `source.node` is not, the message is either parsed as the `NetActions` enum, or if it fails to parse, is treated as a "hello" message and printed in the terminal, size permitting. This "hello" protocol simply attempts to display the `message.body` as a UTF-8 string and is mostly used for network debugging. 
- -- If the `source.node` is our domain, but the `target.node` is not, the message is sent to the target using the [networking protocol](../system/networking_protocol.md) implementation. - -Let's look at `NetActions`. Note that this message type can be received from remote or local processes. -Different implementations of the networking protocol may reject actions depending on whether they were instigated locally or remotely, and also discriminate on which remote node sent the action. -This is, for example, where a router would choose whether or not to perform routing for a specific node<>node connection. - -```rust -/// Must be parsed from message pack vector. -/// all Get actions must be sent from local process. used for debugging -#[derive(Clone, Debug, Serialize, Deserialize)] -pub enum NetAction { - /// Received from a router of ours when they have a new pending passthrough for us. - /// We should respond (if we desire) by using them to initialize a routed connection - /// with the NodeId given. - ConnectionRequest(NodeId), - /// can only receive from trusted source: requires net root cap - KnsUpdate(KnsUpdate), - /// can only receive from trusted source: requires net root cap - KnsBatchUpdate(Vec<KnsUpdate>), - /// get a list of peers we are connected to - GetPeers, - /// get the [`Identity`] struct for a single peer - GetPeer(String), - /// get a user-readable diagnostics string containing networking information - GetDiagnostics, - /// sign the attached blob payload, sign with our node's networking key. - /// **only accepted from our own node** - /// **the source [`Address`] will always be prepended to the payload** - Sign, - /// given a message in blob payload, verify the message is signed by - /// the given source. if the signer is not in our representation of - /// the PKI, will not verify.
-    /// **the `from` [`Address`] will always be prepended to the payload**
-    Verify { from: Address, signature: Vec<u8> },
-}
-
-#[derive(Clone, Debug, Serialize, Deserialize, Hash, Eq, PartialEq)]
-pub struct KnsUpdate {
-    pub name: String,
-    pub public_key: String,
-    pub ips: Vec<String>,
-    pub ports: BTreeMap<String, u16>,
-    pub routers: Vec<String>,
-}
-```
-
-This type must be parsed from a request body using MessagePack.
-`ConnectionRequest` is sent by remote nodes as part of the WebSockets networking protocol in order to ask a router to connect them to a node that they can't connect to directly.
-This is responded to with either an `Accepted` or `Rejected` variant of `NetResponses`.
-
-`KnsUpdate` and `KnsBatchUpdate` are the entry points by which the `net` module becomes aware of the Kinode PKI, or KNS.
-In the current distro these are only accepted from the local node, and specifically the `kns-indexer` distro package.
-
-`GetPeers` is used to request a list of peers that the `net` module is connected to. It can only be received from the local node.
-
-`GetPeer` is used to request the `Identity` struct for a single peer. It can only be received from the local node.
-
-`GetName` is used to request the `NodeId` associated with a given namehash. It can only be received from the local node.
-
-`GetDiagnostics` is used to request a user-readable diagnostics string containing networking information. It can only be received from the local node.
-
-`Sign` is used to request that the attached blob payload be signed with our node's networking key. It can only be received from the local node.
-
-`Verify` is used to request that the attached blob payload be verified as being signed by the given source. It can only be received from the local node.
-
-
-Finally, let's look at the type parsed from a `Response`.
-
-```rust
-/// Must be parsed from a MessagePack vector.
-#[derive(Clone, Debug, Serialize, Deserialize)]
-pub enum NetResponse {
-    /// response to [`NetAction::ConnectionRequest`]
-    Accepted(NodeId),
-    /// response to [`NetAction::ConnectionRequest`]
-    Rejected(NodeId),
-    /// response to [`NetAction::GetPeers`]
-    Peers(Vec<Identity>),
-    /// response to [`NetAction::GetPeer`]
-    Peer(Option<Identity>),
-    /// response to [`NetAction::GetDiagnostics`]. a user-readable string.
-    Diagnostics(String),
-    /// response to [`NetAction::Sign`]. contains the signature in blob
-    Signed,
-    /// response to [`NetAction::Verify`]. boolean indicates whether
-    /// the signature was valid or not. note that if the signer node
-    /// cannot be found in our representation of PKI, this will return false,
-    /// because we cannot find the networking public key to verify with.
-    Verified(bool),
-}
-```
-
-This type must also be parsed using MessagePack, this time from responses received from `net`.
-
-In the future, `NetActions` and `NetResponses` may both expand to cover message types required for implementing networking protocols other than the WebSockets one.
diff --git a/src/apis/overview.md b/src/apis/overview.md
deleted file mode 100644
index e69de29b..00000000
diff --git a/src/apis/sqlite.md b/src/apis/sqlite.md
deleted file mode 100644
index f4c4a9a0..00000000
--- a/src/apis/sqlite.md
+++ /dev/null
@@ -1,221 +0,0 @@
-### SQLite API
-
-Useful helper functions can be found in the [`kinode_process_lib`](../process_stdlib/overview.md).
-More discussion of databases in Kinode can be found [here](../system/databases.md).
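Both the write and read helpers shown below take positional `?` parameters alongside a SQL statement. As a quick orientation, here is a dependency-free, purely illustrative sketch of how such placeholders pair with values in order. The `bind_params` helper is hypothetical — the real binding happens inside the SQLite runtime module, never by string substitution:

```rust
// Toy illustration of positional parameter binding: each `?` in the
// statement is paired, in order, with one value. This is NOT how the
// runtime binds parameters (it binds inside SQLite itself); it only
// shows the one-to-one pairing contract the API expects.
fn bind_params(statement: &str, params: &[&str]) -> Result<String, String> {
    let placeholder_count = statement.matches('?').count();
    if placeholder_count != params.len() {
        return Err(format!(
            "statement has {} placeholders but {} params were given",
            placeholder_count,
            params.len()
        ));
    }
    let mut values = params.iter();
    let mut out = String::new();
    for ch in statement.chars() {
        if ch == '?' {
            // Render the next value in SQL-literal form for illustration.
            out.push_str(&format!("'{}'", values.next().unwrap()));
        } else {
            out.push(ch);
        }
    }
    Ok(out)
}

fn main() {
    let bound =
        bind_params("INSERT INTO users (name) VALUES (?), (?);", &["Bob", "Charlie"]).unwrap();
    println!("{bound}");
    // → INSERT INTO users (name) VALUES ('Bob'), ('Charlie');
}
```

A mismatched count is an error, mirroring the `InvalidParameters` error the runtime returns when the parameters blob is misshapen.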
-
-#### Creating/Opening a database
-
-```rust
-use kinode_process_lib::sqlite;
-
-let sqlite = sqlite::open(our.package_id(), "users")?;
-// You can now pass this SQLite struct as a reference to other functions
-```
-
-#### Write
-
-```rust
-let statement = "INSERT INTO users (name) VALUES (?), (?), (?);".to_string();
-let params = vec![
-    serde_json::Value::String("Bob".to_string()),
-    serde_json::Value::String("Charlie".to_string()),
-    serde_json::Value::String("Dave".to_string()),
-];
-
-sqlite.write(statement, params, None)?;
-```
-
-#### Read
-
-```rust
-let query = "SELECT name FROM users;".to_string();
-let rows = sqlite.read(query, vec![])?;
-// rows: Vec<HashMap<String, serde_json::Value>>
-println!("rows: {}", rows.len());
-for row in rows {
-    println!("{:?}", row.get("name"));
-}
-```
-
-#### Transactions

-```rust
-let tx_id = sqlite.begin_tx()?;
-
-let statement = "INSERT INTO users (name) VALUES (?);".to_string();
-let params = vec![serde_json::Value::String("Eve".to_string())];
-let params2 = vec![serde_json::Value::String("Steve".to_string())];
-
-sqlite.write(statement.clone(), params, Some(tx_id))?;
-sqlite.write(statement, params2, Some(tx_id))?;
-
-sqlite.commit_tx(tx_id)?;
-```
-
-### API
-
-```rust
-/// Actions are sent to a specific SQLite database. `db` is the name,
-/// `package_id` is the [`PackageId`] that created the database. Capabilities
-/// are checked: you can access another process's database if it has given
-/// you the read and/or write capability to do so.
-#[derive(Clone, Debug, Serialize, Deserialize)]
-pub struct SqliteRequest {
-    pub package_id: PackageId,
-    pub db: String,
-    pub action: SqliteAction,
-}
-
-/// IPC Action format representing operations that can be performed on the
-/// SQLite runtime module. These actions are included in a [`SqliteRequest`]
-/// sent to the `sqlite:distro:sys` runtime module.
-#[derive(Clone, Debug, Serialize, Deserialize)]
-pub enum SqliteAction {
-    /// Opens an existing SQLite database or creates a new one if it doesn't exist.
-    /// Requires `package_id` in [`SqliteRequest`] to match the package ID of the sender.
-    /// The sender will own the database and can remove it with [`SqliteAction::RemoveDb`].
-    ///
-    /// A successful open will respond with [`SqliteResponse::Ok`]. Any error will be
-    /// contained in the [`SqliteResponse::Err`] variant.
-    Open,
-    /// Permanently deletes the entire SQLite database.
-    /// Requires `package_id` in [`SqliteRequest`] to match the package ID of the sender.
-    /// Only the owner can remove the database.
-    ///
-    /// A successful remove will respond with [`SqliteResponse::Ok`]. Any error will be
-    /// contained in the [`SqliteResponse::Err`] variant.
-    RemoveDb,
-    /// Executes a write statement (INSERT/UPDATE/DELETE)
-    ///
-    /// * `statement` - SQL statement to execute
-    /// * `tx_id` - Optional transaction ID
-    /// * blob: Vec<SqlValue> - Parameters for the SQL statement, where SqlValue can be:
-    ///   - null
-    ///   - boolean
-    ///   - i64
-    ///   - f64
-    ///   - String
-    ///   - Vec<u8> (binary data)
-    ///
-    /// Using this action requires the sender to have the write capability
-    /// for the database.
-    ///
-    /// A successful write will respond with [`SqliteResponse::Ok`]. Any error will be
-    /// contained in the [`SqliteResponse::Err`] variant.
-    Write {
-        statement: String,
-        tx_id: Option<u64>,
-    },
-    /// Executes a read query (SELECT)
-    ///
-    /// * blob: Vec<SqlValue> - Parameters for the SQL query, where SqlValue can be:
-    ///   - null
-    ///   - boolean
-    ///   - i64
-    ///   - f64
-    ///   - String
-    ///   - Vec<u8> (binary data)
-    ///
-    /// Using this action requires the sender to have the read capability
-    /// for the database.
-    ///
-    /// A successful query will respond with [`SqliteResponse::Read`], where the
-    /// response blob contains the results of the query. Any error will be contained
-    /// in the [`SqliteResponse::Err`] variant.
-    Query(String),
-    /// Begins a new transaction for atomic operations.
-    ///
-    /// Sending this will prompt a [`SqliteResponse::BeginTx`] response with the
-    /// transaction ID. Any error will be contained in the [`SqliteResponse::Err`] variant.
-    BeginTx,
-    /// Commits all operations in the specified transaction.
-    ///
-    /// # Parameters
-    /// * `tx_id` - The ID of the transaction to commit
-    ///
-    /// A successful commit will respond with [`SqliteResponse::Ok`]. Any error will be
-    /// contained in the [`SqliteResponse::Err`] variant.
-    Commit { tx_id: u64 },
-}
-
-#[derive(Clone, Debug, Serialize, Deserialize)]
-pub enum SqliteResponse {
-    /// Indicates successful completion of an operation.
-    /// Sent in response to the actions Open, RemoveDb, Write, and Commit.
-    Ok,
-    /// Returns the results of a query.
-    ///
-    /// * blob: Vec<Vec<SqlValue>> - Array of rows, where each row contains SqlValue types:
-    ///   - null
-    ///   - boolean
-    ///   - i64
-    ///   - f64
-    ///   - String
-    ///   - Vec<u8> (binary data)
-    Read,
-    /// Returns the transaction ID for a newly created transaction.
-    ///
-    /// # Fields
-    /// * `tx_id` - The ID of the newly created transaction
-    BeginTx { tx_id: u64 },
-    /// Indicates an error occurred during the operation.
-    Err(SqliteError),
-}
-
-/// Used in blobs to represent array row values in SQLite.
-#[derive(Clone, Debug, Serialize, Deserialize, PartialEq)] -pub enum SqlValue { - Integer(i64), - Real(f64), - Text(String), - Blob(Vec), - Boolean(bool), - Null, -} - -#[derive(Clone, Debug, Serialize, Deserialize, Error)] -pub enum SqliteError { - #[error("db [{0}, {1}] does not exist")] - NoDb(PackageId, String), - #[error("no transaction {0} found")] - NoTx(u64), - #[error("no write capability for requested DB")] - NoWriteCap, - #[error("no read capability for requested DB")] - NoReadCap, - #[error("request to open or remove DB with mismatching package ID")] - MismatchingPackageId, - #[error("failed to generate capability for new DB")] - AddCapFailed, - #[error("write statement started with non-existent write keyword")] - NotAWriteKeyword, - #[error("read query started with non-existent read keyword")] - NotAReadKeyword, - #[error("parameters blob in read/write was misshapen or contained invalid JSON objects")] - InvalidParameters, - #[error("sqlite got a malformed request that failed to deserialize")] - MalformedRequest, - #[error("rusqlite error: {0}")] - RusqliteError(String), - #[error("IO error: {0}")] - IOError(String), -} - -/// The JSON parameters contained in all capabilities issued by `sqlite:distro:sys`. 
-///
-/// # Fields
-/// * `kind` - The kind of capability, either [`SqliteCapabilityKind::Read`] or [`SqliteCapabilityKind::Write`]
-/// * `db_key` - The database key, a tuple of the [`PackageId`] that created the database and the database name
-#[derive(Clone, Debug, Serialize, Deserialize)]
-pub struct SqliteCapabilityParams {
-    pub kind: SqliteCapabilityKind,
-    pub db_key: (PackageId, String),
-}
-
-#[derive(Clone, Debug, Serialize, Deserialize)]
-#[serde(rename_all = "lowercase")]
-pub enum SqliteCapabilityKind {
-    Read,
-    Write,
-}
-```
diff --git a/src/apis/terminal.md b/src/apis/terminal.md
deleted file mode 100644
index efb01986..00000000
--- a/src/apis/terminal.md
+++ /dev/null
@@ -1,41 +0,0 @@
-# Terminal API
-
-It is extremely rare for an app to have direct access to the terminal API.
-Normally, the terminal will be used to call scripts, which will have access to the process in question.
-For documentation on using, writing, publishing, and composing scripts, see the [terminal use documentation](../system/terminal.md), or for a quick start, the [script cookbook](../cookbook/writing_scripts.md).
-
-The Kinode terminal is broken up into two segments: a Wasm app, called `terminal:terminal:sys`, and a runtime module called `terminal:distro:sys`.
-The Wasm app is the central area where terminal logic and authority live.
-It parses `Requests` by attempting to read the `body` field as a UTF-8 string, then parsing that string into various commands to perform.
-The runtime module exists so that this app can be used from the terminal that is launched when starting Kinode.
-It manages the raw input and presents an interface with features such as command history, text manipulation, and shortcuts.
-
-To "use" the terminal as an API, one simply needs the capability to message `terminal:terminal:sys`.
-This is a powerful capability, equivalent to giving an application `root` authority over your node.
-For this reason, users are unlikely to grant direct terminal access to most apps.
-
-If one does have the capability to send `Request`s to the terminal, they can execute commands like so:
-```
-script-name:package-name:publisher-name
-```
-
-For example, the `hi` script, which pings another node's terminal with a message, can be called like so:
-```
-hi:terminal:sys default-router-1.os what's up?
-```
-In this case, the arguments are `default-router-1.os` and the message `what's up?`.
-
-Some commonly used scripts have shorthand aliases because they are invoked so frequently.
-For example, `hi:terminal:sys` can be shortened to just `hi` as in:
-```
-hi default-router-1.os what's up?
-```
-
-The other most commonly used script is `m:terminal:sys`, or just `m` - which stands for `Message`.
-`m` lets you send a request to any node or application like so:
-```
-m some-node.os@proc:pkg:pub '{"foo":"bar"}'
-```
-
-Note that if your process has the ability to message the `terminal` app, then that process can call any script.
-However, they will all have this standard calling convention of `<script-name> <args>`.
diff --git a/src/apis/timer.md b/src/apis/timer.md
deleted file mode 100644
index 676eeb1c..00000000
--- a/src/apis/timer.md
+++ /dev/null
@@ -1,24 +0,0 @@
-# Timer API
-
-The Timer API allows processes to manage time-based operations within Kinode.
-This API provides a simple yet powerful mechanism for scheduling actions to be executed after a specified delay.
-The entire API is just the `TimerAction`:
-
-```rust
-pub enum TimerAction {
-    Debug,
-    SetTimer(u64),
-}
-```
-This defines just two actions: `Debug` and `SetTimer`.
-## `Debug`
-This action will print information about all active timers to the terminal.
-## `SetTimer`
-This lets you set a timer to pop after a set number of milliseconds, so e.g. `{"SetTimer": 1000}` would pop after one second.
-The timer finishes by sending a `Response` once the timer has popped.
-The response will have no information in the `body`.
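Because the pop carries an empty `body`, telling several in-flight timers apart is bookkeeping on the sender's side. A dependency-free sketch of that bookkeeping — the `TimerBook` type is the author's illustration, not part of the API:

```rust
use std::collections::HashMap;

// Sketch: each SetTimer request is tagged with a context value when sent;
// when the empty-body Response arrives with that context echoed back, the
// map tells us which timer popped and what work it was scheduled for.
struct TimerBook {
    next_id: u32,
    pending: HashMap<u32, String>, // context id -> what to do on pop
}

impl TimerBook {
    fn new() -> Self {
        TimerBook { next_id: 0, pending: HashMap::new() }
    }

    // Called when sending a SetTimer request; returns the context to attach.
    fn set(&mut self, label: &str) -> u32 {
        let id = self.next_id;
        self.next_id += 1;
        self.pending.insert(id, label.to_string());
        id
    }

    // Called when a timer Response arrives with our context echoed back.
    fn pop(&mut self, context: u32) -> Option<String> {
        self.pending.remove(&context)
    }
}

fn main() {
    let mut book = TimerBook::new();
    let heartbeat = book.set("send heartbeat");
    let _cleanup = book.set("run cleanup");
    // The heartbeat timer pops first: look up what it was for.
    assert_eq!(book.pop(heartbeat).as_deref(), Some("send heartbeat"));
    // Popping it twice yields nothing.
    assert_eq!(book.pop(heartbeat), None);
}
```

The `context` field on a `Request` plays the role of the map key here: attach it when setting the timer, read it back when the pop arrives.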
-To keep track of different timers, you can use two methods: -- `send_and_await_response` which will block your app while it is waiting - - use [`kinode_process_lib::timer::set_and_await_timer`](https://docs.rs/kinode_process_lib/0.0.0-reserved/kinode_process_lib/timer/fn.set_and_await_timer.html) for this -- use `context` to keep track of multiple timers without blocking - - use [`kinode_process_lib::timer::set_timer`](https://docs.rs/kinode_process_lib/0.0.0-reserved/kinode_process_lib/timer/fn.set_timer.html) to set the timer with optional context diff --git a/src/apis/vfs.md b/src/apis/vfs.md deleted file mode 100644 index dc8dfa99..00000000 --- a/src/apis/vfs.md +++ /dev/null @@ -1,286 +0,0 @@ -# VFS API - -Useful helper functions can be found in the [`kinode_process_lib`](https://github.com/kinode-dao/process_lib) - -The VFS API tries to map over the [`std::fs`](https://doc.rust-lang.org/std/fs/index.html) calls as directly as possible. - -Every request takes a path and a corresponding action. - -## Drives - -A drive is a directory within a package's VFS directory, e.g., `app-store:sys/pkg/` or `your_package:publisher.os/my_drive/`. -Drives are owned by packages. -Packages can share access to drives they own via [capabilities](../system/process/capabilities.md). -Each package is spawned with two drives: [`pkg/`](#pkg-drive) and [`tmp/`](#tmp-drive). -All processes in a package have caps to those drives. -Processes can also create additional drives. - -### `pkg/` drive - -The `pkg/` drive contains metadata about the package that Kinode requires to run that package, `.wasm` binaries, and optionally the API of the package and the UI. -When creating packages, the `pkg/` drive is populated by [`kit build`](../kit/build.md) and loaded into the Kinode using [`kit start-package`](../kit/start-package.md). - -### `tmp/` drive - -The `tmp/` drive can be written to directly by the owning package using standard filesystem functionality (i.e. 
`std::fs` in Rust) via WASI in addition to the Kinode VFS.
-
-### Imports
-
-```rust
-use kinode_process_lib::vfs::{
-    create_drive, open_file, open_dir, create_file, metadata, File, Directory,
-};
-```
-
-### Opening/Creating a Drive
-
-```rust
-let drive_path: String = create_drive(our.package_id(), "drive_name")?;
-// you can now prepend this path to any files/directories you're interacting with
-let file = open_file(&format!("{}/hello.txt", &drive_path), true)?;
-```
-
-### Sharing a Drive Capability
-
-```rust
-let vfs_read_cap = serde_json::json!({
-    "kind": "read",
-    "drive": drive_path,
-}).to_string();
-
-let vfs_address = Address {
-    node: our.node.clone(),
-    process: ProcessId::from_str("vfs:distro:sys").unwrap(),
-};
-
-// get this capability from our store
-let cap = get_capability(&vfs_address, &vfs_read_cap);
-
-// now if we have that Capability, we can attach it to a subsequent message.
-if let Some(cap) = cap {
-    Request::new()
-        .capabilities(vec![cap])
-        .body(b"hello".to_vec())
-        .send()?;
-}
-```
-
-```rust
-// the receiving process can then save the capability to its store, and open the drive.
-save_capabilities(incoming_request.capabilities);
-let dir = open_dir(&drive_path, false)?;
-```
-
-### Files
-
-#### Open a File
-
-```rust
-/// Opens a file at path, if no file at path, creates one if boolean create is true.
-let file_path = format!("{}/hello.txt", &drive_path);
-let file = open_file(&file_path, true)?;
-```
-
-#### Create a File
-
-```rust
-/// Creates a file at path, if file found at path, truncates it to 0.
-let file_path = format!("{}/hello.txt", &drive_path);
-let file = create_file(&file_path)?;
-```
-
-#### Read a File
-
-```rust
-/// Reads the entire file, from start position.
-/// Returns a vector of bytes.
-let contents = file.read()?;
-```
-
-#### Write a File
-
-```rust
-/// Write entire slice as the new file.
-/// Truncates anything that existed at path before.
-let buffer = b"Hello!";
-file.write(buffer)?;
-```
-
-#### Write to File
-
-```rust
-/// Write buffer to file at current position, overwriting any existing data.
-let buffer = b"World!";
-file.write_all(buffer)?;
-```
-
-#### Read at position
-
-```rust
-/// Read into buffer from current cursor position.
-/// Returns the number of bytes read.
-let mut buffer = vec![0; 5];
-file.read_at(&mut buffer)?;
-```
-
-#### Set Length
-
-```rust
-/// Set file length, if given size > underlying file, fills it with 0s.
-file.set_len(42)?;
-```
-
-#### Seek to a position
-
-```rust
-/// Seek file to position.
-/// Returns the new position.
-let position = SeekFrom::End(0);
-file.seek(position)?;
-```
-
-#### Sync
-
-```rust
-/// Syncs path file buffers to disk.
-file.sync_all()?;
-```
-
-#### Metadata
-
-```rust
-/// Metadata of a path, returns file type and length.
-let metadata = file.metadata()?;
-```
-
-### Directories
-
-#### Open a Directory
-
-```rust
-/// Opens or creates a directory at path.
-/// If trying to create an existing directory, will just give you the path.
-let dir_path = format!("{}/my_pics", &drive_path);
-let dir = open_dir(&dir_path, true)?;
-```
-
-#### Read a Directory
-
-```rust
-/// Iterates through children of directory, returning a vector of DirEntries.
-/// DirEntries contain the path and file type of each child.
-let entries = dir.read()?;
-```
-
-#### General path Metadata
-
-```rust
-/// Metadata of a path, returns file type and length.
-let some_path = format!("{}/test", &drive_path);
-let metadata = metadata(&some_path)?;
-```
-
-### API
-
-```rust
-/// IPC Request format for the vfs:distro:sys runtime module.
-#[derive(Debug, Serialize, Deserialize)]
-pub struct VfsRequest {
-    pub path: String,
-    pub action: VfsAction,
-}
-
-#[derive(Debug, Serialize, Deserialize, PartialEq)]
-pub enum VfsAction {
-    CreateDrive,
-    CreateDir,
-    CreateDirAll,
-    CreateFile,
-    OpenFile { create: bool },
-    CloseFile,
-    Write,
-    WriteAll,
-    Append,
-    SyncAll,
-    Read,
-    ReadDir,
-    ReadToEnd,
-    ReadExact { length: u64 },
-    ReadToString,
-    Seek(SeekFrom),
-    RemoveFile,
-    RemoveDir,
-    RemoveDirAll,
-    Rename { new_path: String },
-    Metadata,
-    AddZip,
-    CopyFile { new_path: String },
-    Len,
-    SetLen(u64),
-    Hash,
-}
-
-#[derive(Debug, Serialize, Deserialize, PartialEq)]
-pub enum SeekFrom {
-    Start(u64),
-    End(i64),
-    Current(i64),
-}
-
-#[derive(Debug, Serialize, Deserialize)]
-pub enum FileType {
-    File,
-    Directory,
-    Symlink,
-    Other,
-}
-
-#[derive(Debug, Serialize, Deserialize)]
-pub struct FileMetadata {
-    pub file_type: FileType,
-    pub len: u64,
-}
-
-#[derive(Debug, Serialize, Deserialize)]
-pub struct DirEntry {
-    pub path: String,
-    pub file_type: FileType,
-}
-
-#[derive(Debug, Serialize, Deserialize)]
-pub enum VfsResponse {
-    Ok,
-    Err(VfsError),
-    Read,
-    SeekFrom { new_offset: u64 },
-    ReadDir(Vec<DirEntry>),
-    ReadToString(String),
-    Metadata(FileMetadata),
-    Len(u64),
-    Hash([u8; 32]),
-}
-
-#[derive(Error, Debug, Serialize, Deserialize)]
-pub enum VfsError {
-    #[error("No capability for action {action} at path {path}")]
-    NoCap { action: String, path: String },
-    #[error("Bytes blob required for {action} at path {path}")]
-    BadBytes { action: String, path: String },
-    #[error("bad request error: {error}")]
-    BadRequest { error: String },
-    #[error("error parsing path: {path}: {error}")]
-    ParseError { error: String, path: String },
-    #[error("IO error: {error}, at path {path}")]
-    IOError { error: String, path: String },
-    #[error("kernel capability channel error: {error}")]
-    CapChannelFail { error: String },
-    #[error("Bad JSON blob: {error}")]
-    BadJson { error: String },
-    #[error("File not found at path {path}")]
-    NotFound { path: String },
-    #[error("Creating directory failed at path: {path}: {error}")]
-    CreateDirError { path: String, error: String },
-    #[error("Other error: {error}")]
-    Other { error: String },
-}
-```
diff --git a/src/apis/websocket.md b/src/apis/websocket.md
deleted file mode 100644
index 75028eee..00000000
--- a/src/apis/websocket.md
+++ /dev/null
@@ -1,143 +0,0 @@
-# WebSocket API
-
-WebSocket connections are made with a Rust `warp` server in the core `http-server:distro:sys` process.
-Each connection is assigned a `channel_id` that can be bound to a given process using a `WsRegister` message.
-The process receives the `channel_id` for pushing data into the WebSocket, and any subsequent messages from that client will be forwarded to the bound process.
-
-## Opening a WebSocket Channel from a Client
-
-To open a WebSocket channel, connect to the main route on the node `/` and send a `WsRegister` message as either text or bytes.
-
-The simplest way to connect from a browser is to use the `@kinode/client-api` like so:
-
-```js
-const api = new KinodeEncryptorApi({
-  nodeId: window.our.node, // this is set if the /our.js script is present in index.html
-  processId: "my-package:my-package:template.os",
-  onOpen: (_event, api) => {
-    console.log('Connected to Kinode')
-    // Send a message to the node via WebSocket
-    api.send({ data: 'Hello World' })
-  },
-})
-```
-
-`@kinode/client-api` is available here: [https://www.npmjs.com/package/@kinode/client-api](https://www.npmjs.com/package/@kinode/client-api)
-
-Simple JavaScript/JSON example:
-
-```js
-function getCookie(name) {
-  const cookies = document.cookie.split(';');
-  for (let i = 0; i < cookies.length; i++) {
-    const cookie = cookies[i].trim();
-    if (cookie.startsWith(name)) {
-      return cookie.substring(name.length + 1);
-    }
-  }
-}
-
-const websocket = new WebSocket("ws://localhost:8080/");
-
-const message = JSON.stringify({
-  "auth_token":
getCookie(`kinode-auth_${nodeId}`),
-  "target_process": "my-package:my-package:template.os",
-  "encrypted": false,
-});
-
-websocket.send(message);
-```
-
-## Handling Incoming WebSocket Messages
-
-Incoming WebSocket messages arrive as variants of the `HttpServerRequest` enum: `WebSocketOpen`, `WebSocketPush`, or `WebSocketClose`.
-
-You will want to store the `channel_id` that comes in with `WebSocketOpen` so that you can push data to that WebSocket.
-If you expect to have more than one client connected at a time, then you will most likely want to store the channel IDs in a Set (Rust `HashSet`).
-
-With a `WebSocketPush`, the incoming message will be on the `LazyLoadBlob`, accessible with `get_blob()`.
-
-`WebSocketClose` will have the `channel_id` of the closed channel, so that you can remove it from wherever you are storing it.
-
-A full example:
-
-```rs
-fn handle_http_server_request(
-    our: &Address,
-    message_archive: &mut MessageArchive,
-    source: &Address,
-    body: &[u8],
-    channel_ids: &mut HashSet<u32>,
-) -> anyhow::Result<()> {
-    let Ok(server_request) = serde_json::from_slice::<HttpServerRequest>(body) else {
-        // Fail silently if we can't parse the request
-        return Ok(());
-    };
-
-    match server_request {
-        HttpServerRequest::WebSocketOpen { channel_id, .. } => {
-            // Store the newly opened channel so we can push data to it later
-            channel_ids.insert(channel_id);
-        }
-        HttpServerRequest::WebSocketPush { channel_id, .. } => {
-            let Some(blob) = get_blob() else {
-                return Ok(());
-            };
-
-            handle_chat_request(
-                our,
-                message_archive,
-                channel_id,
-                source,
-                &blob.bytes,
-                false,
-            )?;
-        }
-        HttpServerRequest::WebSocketClose(channel_id) => {
-            channel_ids.remove(&channel_id);
-        }
-        HttpServerRequest::Http(IncomingHttpRequest { method, url, bound_path, .. }) => {
-            // Handle incoming HTTP requests here
-        }
-    };
-
-    Ok(())
-}
-```
-
-## Pushing Data to a Client via WebSocket
-
-Pushing data to a connected WebSocket is very simple.
Call the `send_ws_push` function from `process_lib`:
-
-```rs
-pub fn send_ws_push(
-    node: String,
-    channel_id: u32,
-    message_type: WsMessageType,
-    blob: LazyLoadBlob,
-) -> anyhow::Result<()>
-```
-
-`node` will usually be `our.node` (although you can also send a WS push to another node's `http-server`!), `channel_id` is the client you want to send to, `message_type` will be either `WsMessageType::Text` or `WsMessageType::Binary`, and `blob` will be a standard `LazyLoadBlob` with an optional `mime` field and required `bytes` field.
-
-If you would prefer to send the request without the helper function, this is what `send_ws_push` looks like under the hood:
-
-```rs
-Request::new()
-    .target(Address::new(
-        node,
-        ProcessId::from_str("http-server:distro:sys").unwrap(),
-    ))
-    .body(
-        serde_json::json!(HttpServerRequest::WebSocketPush {
-            channel_id,
-            message_type,
-        })
-        .to_string()
-        .as_bytes()
-        .to_vec(),
-    )
-    .blob(blob)
-    .send()?;
-```
diff --git a/src/audits-and-security.md b/src/audits-and-security.md
deleted file mode 100644
index 70d91862..00000000
--- a/src/audits-and-security.md
+++ /dev/null
@@ -1,14 +0,0 @@
-# Audits and Security
-
-The Kinode operating system runtime has been audited by [Enigma Dark](https://www.enigmadark.com/).
-That report can be found [here](https://github.com/Enigma-Dark/security-review-reports/blob/main/2024-11-18_Architecture_Review_Report_Kinode.pdf).
-
-However, the audit was not comprehensive and focused on the robustness of the networking stack and the kernel.
-Therefore, other parts of the runtime, such as the filesystem modules and the ETH RPC layer, remain unaudited.
-Kinode OS remains a work in progress and will continue to be audited as it matures.
-
-### Smart Contracts
-
-Kinode OS uses a number of smart contracts to manage global state.
-Audits below: -- [Kimap audit](https://cantina.xyz/portfolio/c2cbcbe7-727c-47cf-99f1-4e82ea8e5c77) by [Spearbit](https://spearbit.com/) diff --git a/src/chess_app/chat.md b/src/chess_app/chat.md deleted file mode 100644 index 472d45b5..00000000 --- a/src/chess_app/chat.md +++ /dev/null @@ -1,252 +0,0 @@ -# Extension 1: Chat - -So, at this point you've got a working chess game with a frontend. -There are a number of obvious improvements to the program to be made, as listed at the end of the [last chapter](./putting_everything_together.md). -The best way to understand those improvements is to start exploring other areas of the docs, such as the chapters on [capabilities-based security](../system/process/capabilities.md) and the [networking protocol](../system/networking_protocol.md), for error handling. - -This chapter will instead focus on how to *extend* an existing program with new functionality. -Chat is a basic feature for a chess program, but will touch the existing code in many places. -This will give you a good idea of how to extend your own programs. - -You need to alter at least 4 things about the program: -- The request-response types it can handle (i.e. the protocol itself) -- The incoming request handler for HTTP requests, to receive chats sent by `our` node -- The outgoing websocket update, to send received chats to the frontend -- The frontend, to display the chat - -Handling them in that order, first, look at the types used for request-response now: -```rust -#[derive(Debug, Serialize, Deserialize)] -enum ChessRequest { - NewGame { white: String, black: String }, - Move { game_id: String, move_str: String }, - Resign(String), -} - -#[derive(Debug, Eq, PartialEq, Serialize, Deserialize)] -enum ChessResponse { - NewGameAccepted, - NewGameRejected, - MoveAccepted, - MoveRejected, -} -``` - -These types need to be exhaustive, since incoming messages will be fed into a `match` statement that uses `ChessRequest` and `ChessResponse`. 
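That exhaustiveness is enforced by the compiler. A minimal, self-contained sketch — the enum is abridged from the snippet above, with the serde derives omitted — showing that adding a new variant will not compile until every `match` is updated to cover it:

```rust
// Abridged from the ChessRequest above (serde derives omitted). Because the
// match below has no catch-all `_` arm, adding a variant such as
// Message(String) to the enum is a compile error until this handler (and
// every other match on the type) gains an arm for it.
#[derive(Debug)]
enum ChessRequest {
    NewGame { white: String, black: String },
    Move { game_id: String, move_str: String },
    Resign(String),
}

fn describe(request: &ChessRequest) -> String {
    match request {
        ChessRequest::NewGame { white, black } => format!("new game: {white} vs {black}"),
        ChessRequest::Move { game_id, move_str } => format!("move {move_str} in game with {game_id}"),
        ChessRequest::Resign(game_id) => format!("resignation in game with {game_id}"),
    }
}

fn main() {
    let req = ChessRequest::Move {
        game_id: "their-node.os".to_string(),
        move_str: "e2e4".to_string(),
    };
    assert_eq!(describe(&req), "move e2e4 in game with their-node.os");
}
```

This is why leaving out a wildcard arm is a feature here: the compiler errors act as a checklist of every place the protocol change must be handled.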
-For more complex apps, one could introduce a new type that serves as an umbrella over multiple "kinds" of message, but since a simple chat will only be a few extra entries into the existing types, it's unnecessary for this example. - -In order to add chat, the request type above will need a new variant, something like `Message(String)`. -It doesn't need a `from` field, since that's just the `source` of the message! - -A new response type will make the chat more robust, by acknowledging received messages. -Something like `MessageAck` will do, with no fields — since this will be sent in response to a `Message` request, the sender will know which message it's acknowledging. - -The new types will look like this: -```rust -#[derive(Debug, Serialize, Deserialize)] -enum ChessRequest { - NewGame { white: String, black: String }, - Move { game_id: String, move_str: String }, - Resign(String), - Message(String), -} - -#[derive(Debug, Eq, PartialEq, Serialize, Deserialize)] -enum ChessResponse { - NewGameAccepted, - NewGameRejected, - MoveAccepted, - MoveRejected, - MessageAck, -} -``` - -If you are modifying these types inside the finished chess app from this tutorial, your IDE should indicate that there are a few errors now: these new message types are not handled in their respective `match` statements. -Those errors, in `handle_chess_request` and `handle_local_request`, are where you'll need logic to handle messages other nodes send to this node, and messages this node sends to others, respectively. - -In `handle_chess_request`, the app receives requests from other nodes. -A reasonable way to handle incoming messages is to add them to a vector of messages that's saved for each active game. -The frontend could reflect this by adding a chat box next to each game, and displaying all messages sent over that game's duration. - -To do that, the `Game` struct must be altered to hold such a vector. 
- -```rust -struct Game { - pub id: String, // the node with whom we are playing - pub turns: u64, - pub board: String, - pub white: String, - pub black: String, - pub ended: bool, - /// messages stored in order as (sender, content) - pub messages: Vec<(String, String)>, -} -``` - -Then in the main switch statement in `handle_chess_request`: -```rust -... -ChessRequest::Message(content) => { - // Earlier in this code, we define game_id as the source node. - let Some(game) = state.games.get_mut(game_id) else { - return Err(anyhow::anyhow!("no game with {game_id}")); - }; - game.messages.push((game_id.to_string(), content.to_string())); - Ok(()) -} -... -``` - -In `handle_local_request`, the app sends requests to other nodes. -Note, however, that requests to message `our`self don't really make sense — what should really happen is that the chess frontend performs a PUT request, or sends a message over a websocket, and the chess backend process turns that into a message request to the other player. -So instead of handling `Message` requests in `handle_local_request`, the process should reject or ignore them: - -```rust -ChessRequest::Message(_) => { - Ok(()) -} -``` - -Instead, the chess backend will handle a new kind of PUT request in `handle_http_request`, such that the local frontend can be used to send messages in games being played. - -This is the current (super gross!!) 
code for handling PUT requests in `handle_http_request`:
-```rust
-// on PUT: make a move
-"PUT" => {
-    let Some(blob) = get_blob() else {
-        return http::send_response(http::StatusCode::BAD_REQUEST, None, vec![]);
-    };
-    let blob_json = serde_json::from_slice::<serde_json::Value>(&blob.bytes)?;
-    let Some(game_id) = blob_json["id"].as_str() else {
-        return http::send_response(http::StatusCode::BAD_REQUEST, None, vec![]);
-    };
-    let Some(game) = state.games.get_mut(game_id) else {
-        return http::send_response(http::StatusCode::NOT_FOUND, None, vec![]);
-    };
-    if (game.turns % 2 == 0 && game.white != our.node)
-        || (game.turns % 2 == 1 && game.black != our.node)
-    {
-        return http::send_response(http::StatusCode::FORBIDDEN, None, vec![]);
-    } else if game.ended {
-        return http::send_response(http::StatusCode::CONFLICT, None, vec![]);
-    }
-    let Some(move_str) = blob_json["move"].as_str() else {
-        return http::send_response(http::StatusCode::BAD_REQUEST, None, vec![]);
-    };
-    let mut board = Board::from_fen(&game.board).unwrap();
-    if !board.apply_uci_move(move_str) {
-        // reader note: can surface illegal move to player or something here
-        return http::send_response(http::StatusCode::BAD_REQUEST, None, vec![]);
-    }
-    // send the move to the other player
-    // check if the game is over
-    // if so, update the records
-    let Ok(msg) = Request::new()
-        .target((game_id, our.process.clone()))
-        .body(serde_json::to_vec(&ChessRequest::Move {
-            game_id: game_id.to_string(),
-            move_str: move_str.to_string(),
-        })?)
-        .send_and_await_response(5)?
-    else {
-        return Err(anyhow::anyhow!(
-            "other player did not respond properly to our move"
-        ));
-    };
-    if serde_json::from_slice::<ChessResponse>(msg.body())?
!= ChessResponse::MoveAccepted { - return Err(anyhow::anyhow!("other player rejected our move")); - } - // update the game - game.turns += 1; - if board.checkmate() || board.stalemate() { - game.ended = true; - } - game.board = board.fen(); - // update state and return to FE - let body = serde_json::to_vec(&game)?; - save_chess_state(&state); - // return the game - http::send_response( - http::StatusCode::OK, - Some(HashMap::from([( - String::from("Content-Type"), - String::from("application/json"), - )])), - body, - ) -} -``` - -Let's modify this to handle more than just making moves. -Note that there's an implicit JSON structure enforced by the code above, where PUT requests from your frontend look like this: - -```json -{ - "id": "game_id", - "move": "e2e4" -} -``` - -An easy way to allow messages is to match on whether the key `"move"` is present, and if not, look for the key `"message"`. -This could also easily be codified as a Rust type and deserialized. - -Now, instead of assuming `"move"` exists, let's add a branch that handles the `"message"` case. -This is a modification of the code above: -```rust -// on PUT: make a move OR send a message -"PUT" => { - // ... same as the previous snippet ... - let Some(move_str) = blob_json["move"].as_str() else { - let Some(message) = blob_json["message"].as_str() else { - return http::send_response(http::StatusCode::BAD_REQUEST, None, vec![]); - }; - // handle sending message to another player - let Ok(_ack) = Request::new() - .target((game_id, our.process.clone())) - .body(serde_json::to_vec(&ChessRequest::Message(message.to_string()))?) - .send_and_await_response(5)? - else { - // Reader Note: handle a failed message send! 
- return Err(anyhow::anyhow!( - "other player did not respond properly to our message" - )); - }; - game.messages.push((our.node.clone(), message.to_string())); - let body = serde_json::to_vec(&game)?; - save_chess_state(&state); - // return the game - return http::send_response( - http::StatusCode::OK, - Some(HashMap::from([( - String::from("Content-Type"), - String::from("application/json"), - )])), - body, - ); - }; - // - // ... the rest of the move-handling code, same as previous snippet ... - // -} -``` - -That's it. -A simple demonstration of how to extend the functionality of a given process. -There are a few key things to keep in mind when doing this, if you want to build stable, maintainable, upgradable applications: - -- By adding chat, you changed the format of the "chess protocol" implicitly declared by this program. -If a user is running the old code, their version won't know how to handle the new `Message` request type you added. -**Depending on the serialization/deserialization strategy used, this might even create incompatibilities with the other types of requests.** -This is a good reason to use a serialization strategy that allows for "unknown" fields, such as JSON. -If you're using a binary format, you'll need to be more careful about how you add new fields to existing types. - -- It's *okay* to break backwards compatibility with old versions of an app, but once a protocol is established, it's best to stick to it or start a new project. -Backwards compatibility can always be achieved by adding a version number to the request/response type(s) directly. -That's a simple way to know which version of the protocol is being used and handle it accordingly. - -- By adding a `messages` field to the `Game` struct, you changed the format of the state that gets persisted. -If a user was running the previous version of this process, and upgrades to this version, the old state will fail to properly deserialize. 
-If you are building an upgrade to an existing app, you should always test that the new version can appropriately handle old state. -If you have many versions, you might need to make sure that state types from *any* old version can be handled. -Again, inserting a version number that can be deserialized from persisted state is a useful strategy. -The best way to do this depends on the serialization strategy used. diff --git a/src/chess_app/chess_app.md b/src/chess_app/chess_app.md deleted file mode 100644 index d06a037e..00000000 --- a/src/chess_app/chess_app.md +++ /dev/null @@ -1,4 +0,0 @@ -# In-Depth Guide: Chess App - -This guide will walk you through building a very simple chess app on Kinode. -The final result will look like [this](https://github.com/kinode-dao/kinode/tree/main/kinode/packages/chess): chess is in the basic runtime distribution so you can try it yourself. diff --git a/src/chess_app/chess_engine.md b/src/chess_app/chess_engine.md deleted file mode 100644 index 22cf9e4d..00000000 --- a/src/chess_app/chess_engine.md +++ /dev/null @@ -1,522 +0,0 @@ -# Chess Engine - -Chess is a good example for a Kinode application walk-through because: -1. The basic game logic is already readily available. - There are dozens of high-quality chess libraries across many languages that can be imported into a Wasm app that runs on Kinode. - We'll be using [pleco](https://github.com/pleco-rs/Pleco). -2. It's a multiplayer game, showing Kinode's p2p communications and ability to serve frontends -3. It's fun! - -In `my-chess/Cargo.toml`, which should be in the `my-chess/` process directory inside the `my-chess/` package directory, add `pleco = "0.5"` to your dependencies. 
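For orientation, that line goes in the `[dependencies]` table — a minimal sketch of just the relevant section (the complete `Cargo.toml` for this package appears later in this chapter):

```toml
[dependencies]
# chess board representation and move legality, from the pleco crate
pleco = "0.5"
```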
-In your `my-chess/src/lib.rs`, replace the existing code with:
-
-```rust
-use pleco::Board;
-use kinode_process_lib::{await_message, call_init, println, Address};
-
-wit_bindgen::generate!({
-    path: "target/wit",
-    world: "process-v0",
-});
-
-call_init!(init);
-fn init(our: Address) {
-    println!("started");
-
-    let my_chess_board = Board::start_pos().fen();
-
-    println!("my_chess_board: {my_chess_board}");
-
-    loop {
-        // Call await_message() to receive any incoming messages.
-        // If receiving fails (e.g. a network error), just try again.
-        let Ok(message) = await_message() else { continue };
-        // Ignore responses; only print incoming requests.
-        if !message.is_request() { continue };
-        println!(
-            "{our}: got request from {}: {}",
-            message.source(),
-            String::from_utf8_lossy(message.body())
-        );
-    }
-}
-```
-
-Now, you have access to a chess board and can manipulate it easily.
-
-The [pleco docs](https://github.com/pleco-rs/Pleco#using-pleco-as-a-library) show everything you can do using the pleco library.
-But this isn't very interesting by itself!
-Chess is a multiplayer game.
-To make your app multiplayer, start by creating a persisted state for the chess app and a `body` format for sending messages to other nodes.
-
-The first step to creating a multiplayer or otherwise networked project is adjusting your `manifest.json` to specify what [capabilities](../system/process/capabilities.md) your process will grant.
-
-Go to `my-chess/manifest.json` and make sure your chess process is public and gets network access:
-```json
-[
-    {
-        "process_name": "my-chess",
-        "process_wasm_path": "/my-chess.wasm",
-        "on_exit": "Restart",
-        "request_networking": true,
-        "request_capabilities": [],
-        "grant_capabilities": [],
-        "public": true
-    }
-]
-```
-
-Now, in `my-chess/src/lib.rs` add the following simple Request/Response interface and persistable game state:
-```rust
-use serde::{Deserialize, Serialize};
-use std::collections::{HashMap, HashSet};
-
-#[derive(Debug, Serialize, Deserialize)]
-enum ChessRequest {
-    NewGame { white: String, black: String },
-    Move { game_id: String, move_str: String },
-    Resign(String),
-}
-
-#[derive(Debug, Eq, PartialEq, Serialize, Deserialize)]
-enum ChessResponse {
-    NewGameAccepted,
-    NewGameRejected,
-    MoveAccepted,
-    MoveRejected,
-}
-
-///
-/// Our serializable state format.
-///
-#[derive(Debug, Serialize, Deserialize)]
-struct ChessState {
-    pub games: HashMap<String, Game>,
-}
-
-#[derive(Clone, Debug, Serialize, Deserialize)]
-struct Game {
-    /// the node with whom we are playing
-    pub id: String,
-    pub turns: u64,
-    /// a string representation of the board using FEN
-    pub board: String,
-    /// the white player's node id
-    pub white: String,
-    /// the black player's node id
-    pub black: String,
-    pub ended: bool,
-}
-```
-
-Creating explicit `ChessRequest` and `ChessResponse` types is the easiest way to reliably communicate between two processes.
-It makes message-passing very simple.
-If you get a request, you can deserialize it to `ChessRequest` and ignore or throw an error if that fails.
-If you get a response, you can do the same but with `ChessResponse`.
-And every request and response that you send can be serialized in kind.
-More advanced apps can take on different structures, but a top-level `enum` to serialize/deserialize and match on is usually a good idea.
- -The `ChessState` `struct` shown above can also be persisted using the `set_state` and `get_state` commands exposed by Kinode's runtime. -Note that the `Game` `struct` here has `board` as a `String`. -This is because the `Board` type from pleco doesn't implement `Serialize` or `Deserialize`. -We'll have to convert it to a string using `fen()` before persisting it. -Then, you will convert it back to a `Board` with `Board::from_fen()` when you load it from state. - -The code below will contain a version of the `init()` function that creates an event loop and handles ChessRequests. -First, however, it's important to note that these types already bake in some assumptions about our "chess protocol". -Remember, requests can either expect a response, or be fired and forgotten. -Unless a response is expected, there's no way to know if a request was received or not. -In a game like chess, most actions have a logical response. -Otherwise, there's no way to easily alert the user that their counterparty has gone offline, or started to otherwise ignore our moves. -For the sake of the tutorial, there are three kinds of requests and only two expect a response. -In our code, the `NewGame` and `Move` requests will always await a response, blocking until they receive one (or the request times out). -`Resign`, however, will be fire-and-forget. -While a "real" game may prefer to wait for a response, it is important to let one player resign and thus clear their state *without* that resignation being "accepted" by a non-responsive player, so production-grade resignation logic is non-trivial. - -> An aside: when building consumer-grade peer-to-peer apps, you'll find that there are in fact very few "trivial" interaction patterns. 
-> Something as simple as resigning from a one-on-one game, which would be a single POST request in a client-frontend <> server-backend architecture, requires well-thought-out negotiations to ensure that both players can walk away with a clean state machine, regardless of whether the counterparty is cooperating. -> Adding more "players" to the mix makes this even more complex. -> To keep things clean, leverage the request/response pattern and the `context` field to store information about how to handle a given response, if you're not awaiting it in a blocking fashion. - -Below, you'll find the full code for the CLI version of the app. -You can build it and install it on a node using `kit`. -You can interact with it in the terminal, primitively, like so (assuming your first node is `fake.os` and second is `fake2.os`): -``` -m our@my-chess:my-chess:template.os '{"NewGame": {"white": "fake.os", "black": "fake2.os"}}' -m our@my-chess:my-chess:template.os '{"Move": {"game_id": "fake2.os", "move_str": "e2e4"}}' -``` -(If you want to make a more ergonomic CLI app, consider parsing `body` as a string, or better yet, writing [terminal scripts](../cookbook/writing_scripts.md) for various game actions.) - -As you read through the code, you might notice a problem with this app: there's no way to see your games! -A fun project would be to add a CLI command that shows you, in-terminal, the board for a given `game_id`. -But in the [next chapter](./frontend.md), we'll add a frontend to this app so you can see your games in a browser. 
-
-`my-chess/Cargo.toml`:
-```toml
-[package]
-name = "my-chess"
-version = "0.1.0"
-edition = "2021"
-
-[profile.release]
-panic = "abort"
-opt-level = "s"
-lto = true
-
-[dependencies]
-anyhow = "1.0"
-bincode = "1.3.3"
-kinode_process_lib = "0.9.0"
-pleco = "0.5"
-serde = { version = "1.0", features = ["derive"] }
-serde_json = "1.0"
-wit-bindgen = "0.24.0"
-
-[lib]
-crate-type = ["cdylib"]
-
-[package.metadata.component]
-package = "kinode:process"
-```
-
-`my-chess/src/lib.rs`:
-```rust
-use kinode_process_lib::{
-    await_message, call_init, get_typed_state, println, set_state, Address, Message, NodeId,
-    Request, Response,
-};
-use pleco::Board;
-use serde::{Deserialize, Serialize};
-use std::collections::HashMap;
-
-// Boilerplate: generate the Wasm bindings for a Kinode app
-wit_bindgen::generate!({
-    path: "target/wit",
-    world: "process-v0",
-});
-
-//
-// Our "chess protocol" request/response format. We'll always serialize these
-// to a byte vector and send them over `body`.
-//
-
-#[derive(Debug, Serialize, Deserialize)]
-enum ChessRequest {
-    NewGame { white: String, black: String },
-    Move { game_id: String, move_str: String },
-    Resign(String),
-}
-
-#[derive(Debug, Eq, PartialEq, Serialize, Deserialize)]
-enum ChessResponse {
-    NewGameAccepted,
-    NewGameRejected,
-    MoveAccepted,
-    MoveRejected,
-}
-
-///
-/// Our serializable state format.
-///
-#[derive(Debug, Serialize, Deserialize)]
-struct ChessState {
-    pub games: HashMap<String, Game>,
-}
-
-#[derive(Clone, Debug, Serialize, Deserialize)]
-struct Game {
-    /// the node with whom we are playing
-    pub id: String,
-    pub turns: u64,
-    /// a string representation of the board using FEN
-    pub board: String,
-    /// the white player's node id
-    pub white: String,
-    /// the black player's node id
-    pub black: String,
-    pub ended: bool,
-}
-
-/// Helper function to serialize and save the process state.
-fn save_chess_state(state: &ChessState) {
-    set_state(&bincode::serialize(&state.games).unwrap());
-}
-
-/// Helper function to deserialize the process state. Note that we use a helper function
-/// from process_lib to fetch a typed state, which will return None if the state does
-/// not exist OR fails to deserialize. In either case, we'll make an empty new state.
-fn load_chess_state() -> ChessState {
-    match get_typed_state(|bytes| bincode::deserialize::<HashMap<String, Game>>(bytes)) {
-        Some(games) => ChessState { games },
-        None => ChessState {
-            games: HashMap::new(),
-        },
-    }
-}
-
-call_init!(init);
-fn init(our: Address) {
-    // A little printout to show in terminal that the process has started.
-    println!("started");
-
-    // Grab our state, then enter the main event loop.
-    let mut state: ChessState = load_chess_state();
-    main_loop(&our, &mut state);
-}
-
-fn main_loop(our: &Address, state: &mut ChessState) {
-    loop {
-        // Call await_message() to receive any incoming messages.
-        // If we get a network error, make a print and throw it away.
-        // In a high-quality consumer-grade app, we'd want to explicitly handle
-        // this and surface it to the user.
-        match await_message() {
-            Err(send_error) => {
-                println!("got network error: {send_error:?}");
-                continue;
-            }
-            Ok(message) => {
-                if let Err(e) = handle_request(&our, &message, state) {
-                    println!("error while handling request: {e:?}");
-                }
-            }
-        }
-    }
-}
-
-/// Handle chess protocol messages from ourself *or* other nodes.
-fn handle_request(our: &Address, message: &Message, state: &mut ChessState) -> anyhow::Result<()> {
-    // Throw away responses. We never expect any responses *here*, because for every
-    // chess protocol request, we *await* its response in-place. This is appropriate
-    // for direct node<>node comms, less appropriate for other circumstances...
-    if !message.is_request() {
-        return Err(anyhow::anyhow!("message was response"));
-    }
-    // If the request is from another node, handle it as an incoming request.
-    // Note that we can enforce the ProcessId as well, but it shouldn't be a trusted
-    // piece of information, since another node can easily spoof any ProcessId on a request.
-    // It can still be useful simply as a protocol-level switch to handle different kinds of
-    // requests from the same node, with the knowledge that the remote node can finagle with
-    // which ProcessId a given message can be from. It's their code, after all.
-    if message.source().node != our.node {
-        // Deserialize the request `body` to our format, and throw it away if it
-        // doesn't fit.
-        let Ok(chess_request) = serde_json::from_slice::<ChessRequest>(message.body()) else {
-            return Err(anyhow::anyhow!("invalid chess request"));
-        };
-        handle_chess_request(&message.source().node, state, &chess_request)
-    }
-    // ...and if the request is from ourselves, handle it as our own!
-    // Note that since this is a local request, we *can* trust the ProcessId.
-    else {
-        // Here, we accept messages *from any local process that can message this one*.
-        // Since the manifest.json specifies that this process is *public*, any local process
-        // can "play chess" for us.
-        //
-        // If you wanted to restrict this privilege, you could check for a specific process,
-        // package, and/or publisher here, *or* change the manifest to only grant messaging
-        // capabilities to specific processes.
-        let Ok(chess_request) = serde_json::from_slice::<ChessRequest>(message.body()) else {
-            return Err(anyhow::anyhow!("invalid chess request"));
-        };
-        handle_local_request(our, state, &chess_request)
-    }
-}
-
-/// handle chess protocol messages from other nodes
-fn handle_chess_request(
-    source_node: &NodeId,
-    state: &mut ChessState,
-    action: &ChessRequest,
-) -> anyhow::Result<()> {
-    println!("handling action from {source_node}: {action:?}");
-
-    // For simplicity's sake, we'll just use the node we're playing with as the game id.
-    // This limits us to one active game per partner.
- let game_id = source_node; - - match action { - ChessRequest::NewGame { white, black } => { - // Make a new game with source.node - // This will replace any existing game with source.node! - if state.games.contains_key(game_id) { - println!("resetting game with {game_id} on their request!"); - } - let game = Game { - id: game_id.to_string(), - turns: 0, - board: Board::start_pos().fen(), - white: white.to_string(), - black: black.to_string(), - ended: false, - }; - // Use our helper function to persist state after every action. - // The simplest and most trivial way to keep state. You'll want to - // use a database or something in a real app, and consider performance - // when doing intensive data-based operations. - state.games.insert(game_id.to_string(), game); - save_chess_state(&state); - // Send a response to tell them we've accepted the game. - // Remember, the other player is waiting for this. - Response::new() - .body(serde_json::to_vec(&ChessResponse::NewGameAccepted)?) - .send()?; - Ok(()) - } - ChessRequest::Move { move_str, .. } => { - // note: ignore their game_id, just use their node ID so they can't spoof it - // Get the associated game and respond with an error if - // we don't have it in our state. - let Some(game) = state.games.get_mut(game_id) else { - // If we don't have a game with them, reject the move. - Response::new() - .body(serde_json::to_vec(&ChessResponse::MoveRejected)?) - .send()?; - return Ok(()); - }; - // Convert the saved board to one we can manipulate. - let mut board = Board::from_fen(&game.board).unwrap(); - if !board.apply_uci_move(move_str) { - // Reject invalid moves! - Response::new() - .body(serde_json::to_vec(&ChessResponse::MoveRejected)?) - .send()?; - return Ok(()); - } - game.turns += 1; - if board.checkmate() || board.stalemate() { - game.ended = true; - } - // Persist state. - game.board = board.fen(); - save_chess_state(&state); - // Send a response to tell them we've accepted the move. 
-            Response::new()
-                .body(serde_json::to_vec(&ChessResponse::MoveAccepted)?)
-                .send()?;
-            Ok(())
-        }
-        ChessRequest::Resign(_) => {
-            // They've resigned. The sender isn't waiting for a response to this,
-            // so we don't need to send one.
-            if let Some(game) = state.games.get_mut(game_id) {
-                game.ended = true;
-                save_chess_state(&state);
-            }
-            Ok(())
-        }
-    }
-}
-
-/// Handle actions we are performing. Here's where we'll send_and_await various requests.
-///
-/// Each send_and_await here just uses a 5-second timeout. Note that this isn't waiting
-/// for the other *human* player to respond, but for the other *process* to respond.
-/// Carefully consider your timeout strategy -- sometimes it makes sense to automatically
-/// retry, but other times you'll want to surface the error to the user.
-fn handle_local_request(
-    our: &Address,
-    state: &mut ChessState,
-    action: &ChessRequest,
-) -> anyhow::Result<()> {
-    match action {
-        ChessRequest::NewGame { white, black } => {
-            // Create a new game. We'll enforce that one of the two players is us.
-            if white != &our.node && black != &our.node {
-                return Err(anyhow::anyhow!("cannot start a game without us!"));
-            }
-            let game_id = if white == &our.node { black } else { white };
-            // If we already have a game with this player, throw an error.
-            if let Some(game) = state.games.get(game_id) {
-                if !game.ended {
-                    return Err(anyhow::anyhow!("already have a game with {game_id}"));
-                }
-            }
-            // Send the other player a NewGame request
-            // The request is exactly the same as what we got from terminal.
-            // We'll give them 5 seconds to respond...
-            let Ok(Message::Response { ref body, .. }) =
-                Request::to((game_id, our.process.clone()))
-                    .body(serde_json::to_vec(&action)?)
-                    .send_and_await_response(5)?
-            else {
-                return Err(anyhow::anyhow!(
-                    "other player did not respond properly to new game request"
-                ));
-            };
-            // If they accept, create a new game — otherwise, error out.
-            if serde_json::from_slice::<ChessResponse>(body)?
!= ChessResponse::NewGameAccepted {
-                return Err(anyhow::anyhow!("other player rejected new game request!"));
-            }
-            // New game with default board.
-            let game = Game {
-                id: game_id.to_string(),
-                turns: 0,
-                board: Board::start_pos().fen(),
-                white: white.to_string(),
-                black: black.to_string(),
-                ended: false,
-            };
-            state.games.insert(game_id.to_string(), game);
-            save_chess_state(&state);
-            Ok(())
-        }
-        ChessRequest::Move { game_id, move_str } => {
-            // Make a move. We'll enforce that it's our turn. The game_id is the
-            // person we're playing with.
-            let Some(game) = state.games.get_mut(game_id) else {
-                return Err(anyhow::anyhow!("no game with {game_id}"));
-            };
-            if (game.turns % 2 == 0 && game.white != our.node)
-                || (game.turns % 2 == 1 && game.black != our.node)
-            {
-                return Err(anyhow::anyhow!("not our turn!"));
-            } else if game.ended {
-                return Err(anyhow::anyhow!("that game is over!"));
-            }
-            let mut board = Board::from_fen(&game.board).unwrap();
-            if !board.apply_uci_move(move_str) {
-                return Err(anyhow::anyhow!("illegal move!"));
-            }
-            // Send the move to the other player, then check if the game is over.
-            // The request is exactly the same as what we got from terminal.
-            // We'll give them 5 seconds to respond...
-            let Ok(Message::Response { ref body, .. }) =
-                Request::to((game_id, our.process.clone()))
-                    .body(serde_json::to_vec(&action)?)
-                    .send_and_await_response(5)?
-            else {
-                return Err(anyhow::anyhow!(
-                    "other player did not respond properly to our move"
-                ));
-            };
-            if serde_json::from_slice::<ChessResponse>(body)? != ChessResponse::MoveAccepted {
-                return Err(anyhow::anyhow!("other player rejected our move"));
-            }
-            game.turns += 1;
-            if board.checkmate() || board.stalemate() {
-                game.ended = true;
-            }
-            game.board = board.fen();
-            save_chess_state(&state);
-            Ok(())
-        }
-        ChessRequest::Resign(ref with_who) => {
-            // Resign from a game with a given player.
-            let Some(game) = state.games.get_mut(with_who) else {
-                return Err(anyhow::anyhow!("no game with {with_who}"));
-            };
-            // send the other player an end game request — no response expected
-            Request::to((with_who, our.process.clone()))
-                .body(serde_json::to_vec(&action)?)
-                .send()?;
-            game.ended = true;
-            save_chess_state(&state);
-            Ok(())
-        }
-    }
-}
-```
-
-That's it! You now have a fully peer-to-peer chess game that can be played (awkwardly) through your Kinode terminal.
-
-In the [next chapter](./frontend.md), we'll add a frontend to this app so you can play it more easily.
\ No newline at end of file
diff --git a/src/chess_app/chess_home.png b/src/chess_app/chess_home.png
deleted file mode 100644
index 0305606b..00000000
Binary files a/src/chess_app/chess_home.png and /dev/null differ
diff --git a/src/chess_app/frontend.md b/src/chess_app/frontend.md
deleted file mode 100644
index 90cb1851..00000000
--- a/src/chess_app/frontend.md
+++ /dev/null
@@ -1,314 +0,0 @@
-# Adding a Frontend
-
-Here, you'll add a web frontend to the code from the [previous section](./chess_engine.md).
-
-Creating a web frontend has two parts:
-1. Altering the process code to serve and handle HTTP requests
-2. Writing a webpage to interact with the process.
-Here, you'll use React to make a single-page app that displays your current games and allows you to create new games, resign from games, and make moves on the chess board.
-
-JavaScript and React development aren't in the scope of this tutorial, so you can find that code [here](https://github.com/kinode-dao/chess-ui).
-
-The important part of the frontend for the purpose of this tutorial is how to set up those pre-existing files to be built and installed by `kit`.
-When files are found in the `ui/` directory, and a `package.json` file with a `build:copy` field in `scripts` is present, `kit` will run that script to build the UI (see [here](https://github.com/kinode-dao/chess-ui/blob/82419ea0e53e6d86d6dc6c8ed7f656c3ab51fdc8/package.json#L10)).
-The `build:copy` in that file builds the UI and then places the resulting files into the `pkg/ui/` directory where they will be installed by `kit start-package`.
-This allows your process to fetch them from the virtual filesystem, as all files in `pkg/` are mounted.
-See the [VFS API overview](../apis/vfs.md) to see how to use files mounted in `pkg/`.
-Additional UI dev info can be found [here](../apis/frontend_development.md).
-
-Get the chess UI files and place them in the proper place (next to `pkg/`):
-```bash
-# run in the top-level directory of your my-chess package
-git clone https://github.com/kinode-dao/chess-ui ui
-```
-
-Chess will use the built-in HTTP server runtime module to serve a static frontend and receive HTTP requests from it.
-You'll also use a WebSocket connection to send updates to the frontend when the game state changes.
-
-In `my-chess/src/lib.rs`, inside `init()`:
-```rust
-use kinode_process_lib::{http::server, homepage};
-
-// add ourselves to the homepage
-homepage::add_to_homepage("My Chess App", None, Some("/"), None);
-
-// create an HTTP server struct with which to manipulate `http-server:distro:sys`
-let mut http_server = server::HttpServer::new(5);
-let http_config = server::HttpBindingConfig::default();
-
-// Serve the index.html and other UI files found in pkg/ui at the root path.
-http_server
-    .serve_ui(&our, "ui", vec!["/"], http_config.clone())
-    .expect("failed to serve ui");
-
-// Allow HTTP requests to be made to /games; they will be handled dynamically.
-http_server
-    .bind_http_path("/games", http_config.clone())
-    .expect("failed to bind /games");
-
-// Allow websockets to be opened at / (our process ID will be prepended).
-http_server
-    .bind_ws_path("/", server::WsBindingConfig::default())
-    .expect("failed to bind ws");
-```
-
-The above code should be inserted into the `init()` function such that the frontend is served when the process starts.
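The WebSocket path bound in `init()` is what lets the backend push state changes to connected browsers, but this excerpt never shows the push side. Below is a minimal sketch of such a helper. It assumes the `ws_push_all_channels` method on `server::HttpServer` and the `WsMessageType`/`LazyLoadBlob` types from recent `kinode_process_lib` releases — check the process_lib docs for the exact signature in your version — plus the `Game` struct from the previous chapter:

```rust
use kinode_process_lib::{
    http::server::{HttpServer, WsMessageType},
    LazyLoadBlob,
};

/// Broadcast the latest state of a game to every frontend connected on the
/// WebSocket path bound in init(). Call this after any state change
/// (an incoming move, a resignation, etc.).
/// NOTE: sketch only — verify ws_push_all_channels against your process_lib version.
fn send_ws_update(http_server: &mut HttpServer, game: &Game) {
    http_server.ws_push_all_channels(
        // the same path passed to bind_ws_path
        "/",
        WsMessageType::Text,
        LazyLoadBlob {
            mime: Some("application/json".to_string()),
            bytes: serde_json::to_vec(game).expect("failed to serialize game"),
        },
    );
}
```

The frontend can then keep its board in sync by listening for these JSON payloads instead of polling `/games`.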
-
-The `http` library in [process_lib](../process_stdlib/overview.md) provides a simple interface for serving static files and handling HTTP requests.
-Use `serve_ui` to serve the static files included in the process binary, and `bind_http_path` to handle requests to `/games`.
-`serve_ui` takes four arguments: the process `Address`, the name of the folder inside `pkg` that contains the `index.html` and other associated UI files, the path(s) on which to serve the UI (usually just `["/"]`), and the `HttpBindingConfig` to use.
-See [process_lib docs](https://docs.rs/kinode_process_lib/latest/kinode_process_lib/) for more functions and documentation on their parameters.
-These requests all serve HTTP that can only be accessed by a logged-in node user (the `true` parameter for `authenticated` in `HttpBindingConfig`) and can be accessed remotely (the `false` parameter for `local_only`).
-
-Requests on the `/games` path will arrive as requests to your process, and you'll have to handle them and respond.
-To do this, add a branch to the main request-handling function that takes requests from *our* `http-server:distro:sys`.
-
-In `my-chess/src/lib.rs`, inside the part of `handle_request()` that handles local requests:
-```rust
-...
-    // if the message is from the HTTP server runtime module, we should handle it
-    // as an HTTP request and not a chess request
-    if message.source().process == "http-server:distro:sys" {
-        return handle_http_request(state, http_server, message);
-    }
-...
-```
-
-Now, write the `handle_http_request` function to take incoming HTTP requests and return HTTP responses.
-This will serve the same purpose as the `handle_local_request` function from the previous chapter, meaning that the frontend will produce actions and the backend will execute them.
-
-An aside: As a process dev, you should be aware that HTTP resources served in this way can be accessed by *other processes running on the same node*, regardless of whether the paths are authenticated or not.
-This can be a security risk: if your app is handling sensitive actions from the frontend, a malicious app could make those API requests instead.
-You should never expect users to "only install non-malicious apps" — instead, use a *secure subdomain* to isolate your app's HTTP resources from other processes.
-See the [HTTP Server API](../apis/http_server.md) for more details.
-
-In `my-chess/src/lib.rs`:
-```rust
-/// Handle HTTP requests from our own frontend.
-fn handle_http_request(
-    state: &mut ChessState,
-    http_server: &mut server::HttpServer,
-    message: &Message,
-) -> anyhow::Result<()> {
-    let request = http_server.parse_request(message.body())?;
-
-    // the HTTP server helper struct allows us to pass functions that
-    // handle the various types of requests we get from the frontend
-    http_server.handle_request(
-        request,
-        |incoming| {
-            // client frontend sent an HTTP request, process it and
-            // return an HTTP response
-            // these functions can reuse the logic from handle_local_request
-            // after converting the request into the appropriate format!
-            match incoming.method().unwrap_or_default() {
-                http::Method::GET => handle_get(state),
-                http::Method::POST => handle_post(state),
-                http::Method::PUT => handle_put(state),
-                http::Method::DELETE => handle_delete(state, &incoming),
-                _ => (
-                    server::HttpResponse::new(http::StatusCode::METHOD_NOT_ALLOWED),
-                    None,
-                ),
-            }
-        },
-        |_channel_id, _message_type, _message| {
-            // client frontend sent a websocket message
-            // we don't expect this! we only use websockets to push updates
-        },
-    );
-
-    Ok(())
-}
-```
-
-Of course, we must now implement the `handle_get`, `handle_post`, `handle_put`, and `handle_delete` functions.
-These will parse the incoming requests, convert them to our `ChessRequest` format, use the function defined in the last chapter to apply them to our state machine, and return the appropriate HTTP responses.
-
-```rust
-/// On GET: return all active games
-fn handle_get(state: &mut ChessState) -> (server::HttpResponse, Option<LazyLoadBlob>) {
-    (
-        server::HttpResponse::new(http::StatusCode::OK),
-        Some(LazyLoadBlob {
-            mime: Some("application/json".to_string()),
-            bytes: serde_json::to_vec(&state.games).expect("failed to serialize games!"),
-        }),
-    )
-}
-
-/// On POST: create a new game
-fn handle_post(state: &mut ChessState) -> (server::HttpResponse, Option<LazyLoadBlob>) {
-    let Some(blob) = get_blob() else {
-        return (
-            server::HttpResponse::new(http::StatusCode::BAD_REQUEST),
-            None,
-        );
-    };
-    let Ok(blob_json) = serde_json::from_slice::<serde_json::Value>(&blob.bytes) else {
-        return (
-            server::HttpResponse::new(http::StatusCode::BAD_REQUEST),
-            None,
-        );
-    };
-    let Some(game_id) = blob_json["id"].as_str() else {
-        return (
-            server::HttpResponse::new(http::StatusCode::BAD_REQUEST),
-            None,
-        );
-    };
-
-    let player_white = blob_json["white"]
-        .as_str()
-        .unwrap_or(state.our.node.as_str())
-        .to_string();
-    let player_black = blob_json["black"].as_str().unwrap_or(game_id).to_string();
-
-    match handle_local_request(
-        state,
-        &ChessRequest::NewGame(NewGameRequest {
-            white: player_white,
-            black: player_black,
-        }),
-    ) {
-        Ok(game) => (
-            server::HttpResponse::new(http::StatusCode::OK)
-                .header("Content-Type", "application/json"),
-            Some(LazyLoadBlob {
-                mime: Some("application/json".to_string()),
-                bytes: serde_json::to_vec(&game).expect("failed to serialize game!"),
-            }),
-        ),
-        Err(e) => (
-            server::HttpResponse::new(http::StatusCode::BAD_REQUEST),
-            Some(LazyLoadBlob {
-                mime: Some("application/text".to_string()),
-                bytes: e.to_string().into_bytes(),
-            }),
-        ),
-    }
-}
-
-/// On PUT: make a move
-fn handle_put(state: &mut ChessState) -> (server::HttpResponse, Option<LazyLoadBlob>) {
-    let Some(blob) =
get_blob() else {
-        return (
-            server::HttpResponse::new(http::StatusCode::BAD_REQUEST),
-            None,
-        );
-    };
-    let Ok(blob_json) = serde_json::from_slice::<serde_json::Value>(&blob.bytes) else {
-        return (
-            server::HttpResponse::new(http::StatusCode::BAD_REQUEST),
-            None,
-        );
-    };
-
-    let Some(game_id) = blob_json["id"].as_str() else {
-        return (
-            server::HttpResponse::new(http::StatusCode::BAD_REQUEST),
-            None,
-        );
-    };
-    let Some(move_str) = blob_json["move"].as_str() else {
-        return (
-            server::HttpResponse::new(http::StatusCode::BAD_REQUEST),
-            None,
-        );
-    };
-
-    match handle_local_request(
-        state,
-        &ChessRequest::Move(MoveRequest {
-            game_id: game_id.to_string(),
-            move_str: move_str.to_string(),
-        }),
-    ) {
-        Ok(game) => (
-            server::HttpResponse::new(http::StatusCode::OK)
-                .header("Content-Type", "application/json"),
-            Some(LazyLoadBlob {
-                mime: Some("application/json".to_string()),
-                bytes: serde_json::to_vec(&game).expect("failed to serialize game!"),
-            }),
-        ),
-        Err(e) => (
-            server::HttpResponse::new(http::StatusCode::BAD_REQUEST),
-            Some(LazyLoadBlob {
-                mime: Some("application/text".to_string()),
-                bytes: e.to_string().into_bytes(),
-            }),
-        ),
-    }
-}
-
-/// On DELETE: end the game
-fn handle_delete(
-    state: &mut ChessState,
-    request: &server::IncomingHttpRequest,
-) -> (server::HttpResponse, Option<LazyLoadBlob>) {
-    let Some(game_id) = request.query_params().get("id") else {
-        return (
-            server::HttpResponse::new(http::StatusCode::BAD_REQUEST),
-            None,
-        );
-    };
-    match handle_local_request(state, &ChessRequest::Resign(game_id.to_string())) {
-        Ok(game) => (
-            server::HttpResponse::new(http::StatusCode::OK)
-                .header("Content-Type", "application/json"),
-            Some(LazyLoadBlob {
-                mime: Some("application/json".to_string()),
-                bytes: serde_json::to_vec(&game).expect("failed to serialize game!"),
-            }),
-        ),
-        Err(e) => (
-            server::HttpResponse::new(http::StatusCode::BAD_REQUEST),
-            Some(LazyLoadBlob {
-                mime: Some("application/text".to_string()),
-                bytes:
e.to_string().into_bytes(),
-            }),
-        ),
-    }
-}
-```
-
-Are you ready to play chess?
-Almost there!
-One more missing piece: the backend needs to send WebSocket updates to the frontend after each move in order to update the board without a refresh.
-Since open channels are already tracked in `HttpServer`, you just need to send a push to each open channel when a move occurs.
-
-In `my-chess/src/lib.rs`, add a helper function:
-```rust
-fn send_ws_update(http_server: &mut server::HttpServer, game: &Game) {
-    http_server.ws_push_all_channels(
-        "/",
-        server::WsMessageType::Binary,
-        LazyLoadBlob {
-            mime: Some("application/json".to_string()),
-            bytes: serde_json::json!({
-                "kind": "game_update",
-                "data": game,
-            })
-            .to_string()
-            .into_bytes(),
-        },
-    )
-}
-```
-
-Now, anywhere you receive an action from another node (in `handle_chess_request()`, for example), call `send_ws_update(http_server, &game)` to send an update to all connected clients.
-A good place to do this is right after saving the updated state.
-Local moves from the frontend will update on their own.
-
-Finally, add requests for `http-server` and `vfs` messaging capabilities to the `manifest.json`:
-```json
-...
-"request_capabilities": [
-    "http-server:distro:sys",
-    "vfs:distro:sys"
-],
-...
-```
-
-Continue to [Putting Everything Together](./putting_everything_together.md) to see the full code and screenshots of the app in action.
diff --git a/src/chess_app/putting_everything_together.md b/src/chess_app/putting_everything_together.md
deleted file mode 100644
index 3efb771d..00000000
--- a/src/chess_app/putting_everything_together.md
+++ /dev/null
@@ -1,34 +0,0 @@
-# Putting Everything Together
-
-After adding a frontend in the previous chapter, your chess game is ready to play.
-
-Hopefully, you've been using `kit build` to test the code as the tutorial has progressed.
-If not, do so now in order to get a compiled package you can install onto a node.
- 
-Next, use `kit start-package -p <port>` to install the package.
-You should see the printout you added to `init()` in your terminal: `my-chess:my-chess:template.os: start`.
-
-Remember that you determine the process names via the `manifest.json` file inside `/pkg`, and the package & publisher name from `metadata.json` located at the top level of the project.
-Open your chess frontend by navigating to your node's URL (probably something like `http://localhost:8080`), and use the names you chose as the path.
-For example, if your chess process name is `my-chess`, and your package is named `my-chess`, and your publisher name is `template.os` (the default value), you would navigate to `http://localhost:8080/my-chess:my-chess:template.os`.
-
-You should see something like this:
-![chess frontend](./chess_home.png)
-
-To try it out, boot up another node, execute the `kit start-package` command, and invite your new node to a game.
-Presto!
-
-This concludes the main Chess tutorial.
-If you're interested in learning more about how to write Kinode processes, there are several great options for extending the app:
-
-- Consider how to handle network errors and surface them to the user
-- Add game tracking to the process state, such that players can see their history
-- Consider what another app that uses the chess engine as a library might look like.
-Alter the process to serve this use case, or add another process that can be spawned to do such a thing.
-
-There are also extensions to this tutorial which dive into specific use cases that make the most of Kinode:
-
-- [Chat](./chat.md)
-- [more coming soon](#)
-
-The full code is available [here](https://github.com/kinode-dao/kinode/tree/main/kinode/packages/chess).
\ No newline at end of file diff --git a/src/chess_app/setup.md b/src/chess_app/setup.md deleted file mode 100644 index 76126b82..00000000 --- a/src/chess_app/setup.md +++ /dev/null @@ -1,11 +0,0 @@ -# Environment Setup - -To prepare for this tutorial, follow the environment setup guide [here](../my_first_app/chapter_1.md), i.e. [start a fake node](../my_first_app/chapter_1.md#booting-a-fake-kinode-node) and then, in another terminal, run: -``` -kit new my-chess --template blank -cd my-chess -kit b -kit start-package -``` - -Once you have the template app installed and can see it running on your testing node, continue to the next chapter... diff --git a/src/cookbook/cookbook.md b/src/cookbook/cookbook.md deleted file mode 100644 index 48c66c47..00000000 --- a/src/cookbook/cookbook.md +++ /dev/null @@ -1,5 +0,0 @@ -# Cookbook Overview - -The Cookbook is a collection of how-tos for common programming techniques that may be useful for the Kinode developer. -The entries include a basic explanation as well as some bare bones sample code to illustrate how you might use the technique. -Think of them as individual recipes that can be combined to form the outline for any variety of useful, interesting applications. 
diff --git a/src/cookbook/http_authentication.md b/src/cookbook/http_authentication.md deleted file mode 100644 index 2c17e9b3..00000000 --- a/src/cookbook/http_authentication.md +++ /dev/null @@ -1 +0,0 @@ -# HTTP Authentication diff --git a/src/cookbook/p2p_chat.md b/src/cookbook/p2p_chat.md deleted file mode 100644 index 0d514bfd..00000000 --- a/src/cookbook/p2p_chat.md +++ /dev/null @@ -1 +0,0 @@ -TODO: Ben diff --git a/src/cookbook/payment_rails.md b/src/cookbook/payment_rails.md deleted file mode 100644 index 4dcd9a05..00000000 --- a/src/cookbook/payment_rails.md +++ /dev/null @@ -1 +0,0 @@ -TODO: Ben (post payment rails addition) diff --git a/src/cookbook/stream_frontend_updates.md b/src/cookbook/stream_frontend_updates.md deleted file mode 100644 index c4e47473..00000000 --- a/src/cookbook/stream_frontend_updates.md +++ /dev/null @@ -1 +0,0 @@ -TODO: Will diff --git a/src/cookbook/system_features.md b/src/cookbook/system_features.md deleted file mode 100644 index 9fb6c1bf..00000000 --- a/src/cookbook/system_features.md +++ /dev/null @@ -1,6 +0,0 @@ -Todo: Ben/Marcus - -Include: - how to use context field - how to use metadata field - how to handle timeouts in messages diff --git a/src/cookbook/use_app_apis.md b/src/cookbook/use_app_apis.md deleted file mode 100644 index bd112249..00000000 --- a/src/cookbook/use_app_apis.md +++ /dev/null @@ -1,3 +0,0 @@ -# Using WIT APIs - -TODO diff --git a/src/cookbook/websocket_authentication.md b/src/cookbook/websocket_authentication.md deleted file mode 100644 index 0fb00573..00000000 --- a/src/cookbook/websocket_authentication.md +++ /dev/null @@ -1 +0,0 @@ -# WebSocket Authentication diff --git a/src/cookbook/zk_with_sp1.md b/src/cookbook/zk_with_sp1.md deleted file mode 100644 index 572d148d..00000000 --- a/src/cookbook/zk_with_sp1.md +++ /dev/null @@ -1,237 +0,0 @@ -# ZK proofs with SP1 - -**Warning: This document is known to be out-of-date as of November 14, 2024. 
- Proceed with caution.**
-
-Zero-knowledge proofs are an exciting new tool for decentralized applications.
-Thanks to [SP1](https://github.com/succinctlabs/sp1), you can prove a Rust program with an extremely easy-to-use open-source library.
-There are a number of other ZK proving systems, both in production and under development, which can also be used inside the Kinode environment, but this tutorial will focus on SP1.
-
-### Start
-
-In a terminal window, start a fake node to use for development of this app.
-```bash
-kit boot-fake-node
-```
-
-In another terminal, create a new app using [kit](../kit/kit-dev-toolkit.md).
-Use the fibonacci template, which can then be modified to calculate fibonacci numbers in a *provably correct* way.
-```bash
-kit new my-zk-app -t fibonacci
-cd my-zk-app
-kit bs
-```
-
-Take note of the basic fibonacci program in the template.
-The program presents a request/response pattern where a requester asks for the nth fibonacci number, and the process calculates and returns it.
-This can be seen in action by running the following command in the fake node's terminal:
-```bash
-m our@my-zk-app:my-zk-app:template.os -a 5 '{"Number": 10}'
-```
-(Change the package name to whatever you named your app + the publisher node as assigned in `metadata.json`.)
-
-You should see a print from the process that looks like this, and a returned JSON response that the terminal prints:
-```
-my-zk-app: fibonacci(10) = 55; 375ns
-{"Number":55}
-```
-
-### Cross-network computation
-
-From the template, you have a program that can be used across the Kinode network to perform a certain computation.
-If the template app here has the correct capabilities, other nodes will be able to message it and receive a response.
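The calculation at the heart of the template boils down to a pure function. A minimal sketch of it in plain Rust (no Kinode APIs) looks like:

```rust
// nth Fibonacci number with fib(0) = 0, fib(1) = 1.
// u128 overflows past fib(186) — the same limit the SP1 program notes later.
fn fibonacci(n: u32) -> u128 {
    let (mut a, mut b): (u128, u128) = (0, 1);
    for _ in 0..n {
        let sum = a + b;
        a = b;
        b = sum;
    }
    a
}

fn main() {
    // the same query as the terminal command: {"Number": 10} -> {"Number": 55}
    println!("fibonacci(10) = {}", fibonacci(10)); // prints "fibonacci(10) = 55"
}
```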
-This can be seen in action by booting another fake node (while keeping the first one open) and sending the fibonacci program a message:
-```
-# need to set a custom name and port so as not to overlap with first node
-kit boot-fake-node -p 8081 --fake-node-name fake2.os
-# wait for the node to boot
-m fake.os@my-zk-app:my-zk-app:template.os -a 5 '{"Number": 10}'
-```
-(Replace the target node ID with the first fake node, which by default is `fake.os`)
-
-You should see `{"Number":55}` in the terminal of `fake2.os`!
-This reveals a fascinating possibility: with Kinode, one can build p2p services accessible to any node on the network.
-However, the current implementation of the fibonacci program is not provably correct.
-The node running the program could make up a number -- without doing the work locally, there's no way to verify the result.
-ZK proofs can solve this problem.
-
-### Introducing the proof
-
-To add ZK proofs to this simple fibonacci program, you can use the [SP1](https://github.com/succinctlabs/sp1) library to write a program in Rust, then produce proofs against it.
-
-First, add the SP1 dependency to the `Cargo.toml` file for `my-zk-app`:
-```toml
-[dependencies]
-...
-sp1-core = { git = "https://github.com/succinctlabs/sp1.git" }
-...
-```
-
-Now follow the [SP1 install steps](https://succinctlabs.github.io/sp1/getting-started/install.html) to get the tooling for constructing a provable program.
-After installing, you should be able to run
-```
-cargo prove new fibonacci
-```
-and navigate into the resulting project, which conveniently contains a fibonacci function example.
-Modify it slightly to match what our fibonacci program does.
-You can more or less copy-and-paste the fibonacci function from your Kinode app to the `program/src/main.rs` file in the SP1 project.
-It'll look like this:
-```rust
-#![no_main]
-sp1_zkvm::entrypoint!(main);
-
-pub fn main() {
-    let n = sp1_zkvm::io::read::<u32>();
-    if n == 0 {
-        sp1_zkvm::io::write(&0);
-        return;
-    }
-    let mut a: u128 = 0;
-    let mut b: u128 = 1;
-    let mut sum: u128;
-    for _ in 1..n {
-        sum = a + b;
-        a = b;
-        b = sum;
-    }
-    sp1_zkvm::io::write(&b);
-}
-```
-
-Now, use SP1's `prove` tool to build the ELF that will actually be executed when the process gets a fibonacci request.
-Run this inside the `program` dir of the SP1 project you created:
-```bash
-cargo prove build
-```
-
-Next, take the generated ELF file from `program/elf/riscv32im-succinct-zkvm-elf` and copy it into the `pkg` dir of your *Kinode* app.
-Go back to your Kinode app code and include this file as bytes so the process can execute it in the SP1 zkVM:
-```rust
-const FIB_ELF: &[u8] = include_bytes!("../../pkg/riscv32im-succinct-zkvm-elf");
-```
-
-### Building the app
-
-Now, this app can use this circuit to not only calculate fibonacci numbers, but include a proof that the calculation was performed correctly!
-The subsequent proof can be serialized and shared across the network with the result.
-Take a moment to imagine the possibilities, then take a look at the full code example below.
-
-Some of the code from the original fibonacci program is omitted for clarity, and functionality for verifying proofs our program receives from others has been added.
-
-```rust
-use kinode_process_lib::{println, *};
-use serde::{Deserialize, Serialize};
-use sp1_core::{utils::BabyBearBlake3, SP1ProofWithIO, SP1Prover, SP1Stdin, SP1Verifier};
-
-/// our circuit!
-const FIB_ELF: &[u8] = include_bytes!("../../pkg/riscv32im-succinct-zkvm-elf");
-
-wit_bindgen::generate!({
-    path: "wit",
-    world: "process",
-});
-
-#[derive(Debug, Serialize, Deserialize)]
-enum FibonacciRequest {
-    /// Send this locally to ask a peer for a proof
-    ProveIt { target: NodeId, n: u32 },
-    /// Send this to a peer's fibonacci program
-    Number(u32),
-}
-
-#[derive(Debug, Serialize, Deserialize)]
-enum FibonacciResponse {
-    /// What we return to the local request
-    Proven(u128),
-    /// What we get from a remote peer
-    Proof, // bytes in message blob
-}
-
-/// PROVE the nth Fibonacci number.
-/// Since we are using u128, the maximum number
-/// we can calculate is the 186th Fibonacci number.
-/// Returns the serialized proof.
-fn fibonacci_proof(n: u32) -> Vec<u8> {
-    let mut stdin = SP1Stdin::new();
-    stdin.write(&n);
-    let proof = SP1Prover::prove(FIB_ELF, stdin).expect("proving failed");
-    println!("successfully generated and verified proof for fib({n})!");
-    serde_json::to_vec(&proof).unwrap()
-}
-
-fn handle_message(our: &Address) -> anyhow::Result<()> {
-    let message = await_message()?;
-    // we only handle requests directly -- responses are awaited in place.
-    // you can change this by using send() instead of send_and_await_response()
-    // in order to make this program more fluid and less blocking.
-    match serde_json::from_slice(message.body())? {
-        FibonacciRequest::ProveIt { target, n } => {
-            // we only accept this from our local node
-            if message.source().node() != our.node() {
-                return Err(anyhow::anyhow!("got a request from a non-local node!"));
-            }
-            // ask the target to do it for us
-            let res = Request::to(Address::new(
-                target,
-                (our.process(), our.package(), our.publisher()),
-            ))
-            .body(serde_json::to_vec(&FibonacciRequest::Number(n))?)
-            .send_and_await_response(30)??;
-            let Ok(FibonacciResponse::Proof) = serde_json::from_slice(res.body()) else {
-                return Err(anyhow::anyhow!("got a bad response!"));
-            };
-            let proof = res
-                .blob()
-                .ok_or_else(|| anyhow::anyhow!("no proof in response"))?
-                .bytes;
-            // verify the proof
-            let mut proof: SP1ProofWithIO<BabyBearBlake3> = serde_json::from_slice(&proof)?;
-            SP1Verifier::verify(FIB_ELF, &proof).map_err(|e| anyhow::anyhow!("{e:?}"))?;
-            // read result from proof
-            let output = proof.stdout.read::<u128>();
-            // send response containing number
-            Response::new()
-                .body(serde_json::to_vec(&FibonacciResponse::Proven(output))?)
-                .send()?;
-        }
-        FibonacciRequest::Number(n) => {
-            // handle a remote request to prove a number
-            let proof = fibonacci_proof(n);
-            // send the proof back to the requester
-            Response::new()
-                .body(serde_json::to_vec(&FibonacciResponse::Proof)?)
-                .blob_bytes(proof)
-                .send()?;
-        }
-    }
-    Ok(())
-}
-
-call_init!(init);
-fn init(our: Address) {
-    println!("fibonacci: begin");
-
-    loop {
-        match handle_message(&our) {
-            Ok(()) => {}
-            Err(e) => {
-                println!("fibonacci: error: {:?}", e);
-            }
-        };
-    }
-}
-```
-
-### Test it out
-
-Install this app on two nodes -- they can be the fake `kit` nodes from before, or real ones on the network.
-Next, send a message from one to the other, asking it to generate a fibonacci proof!
-```
-m our@my-zk-app:my-zk-app:template.os -a 30 '{"ProveIt": {"target": "fake.os", "n": 10}}'
-```
-As usual, set the process ID to what you used, and set the `target` JSON value to the other node's name.
-Try a few different numbers -- see if you can generate a timeout (it's set at 30 seconds now, both in the terminal command and inside the app code).
-If so, the power of this proof system is demonstrated: a user with little compute can ask a peer to do some work for them and quickly verify it!
-
-In just over 100 lines of code, you have written a program that can create, share across the network, and verify ZK proofs.
-Use this as a blueprint for similar programs to get started using ZK proofs in a brand new p2p environment! diff --git a/src/getting_started/design_philosophy.md b/src/getting_started/design_philosophy.md deleted file mode 100644 index e8f97774..00000000 --- a/src/getting_started/design_philosophy.md +++ /dev/null @@ -1,53 +0,0 @@ -# Design Philosophy - -The following is a high-level overview of Kinode's design philosophy, along with the rationale for fundamental design choices. - -### Decentralized Software Requires a Shared Computing Environment - -A single shared computing environment enables software to coordinate directly between users, services, and other pieces of software in a common language. -Therefore, the best way to enable decentralized software is to provide an easy-to-use, general purpose node (that can run on anything from laptops to data centers) that runs the same operating system as all other nodes on the network. -This environment must integrate with existing protocols, blockchains, and services to create a new set of protocols that operate peer-to-peer within the node network. - -### Decentralization is Broad - -A wide array of companies and services benefit from some amount of decentralized infrastructure, even those operating in a largely centralized context. -Additionally, central authority and centralized data are often essential to the proper function of a particular service, including those with decentralized properties. -The Kinode environment must be flexible enough to serve the vast majority of the decentralization spectrum. - -### Blockchains are not Databases - -To use blockchains as mere databases would negate their unique value. -Blockchains are consensus tools, and exist in a spectrum alongside other consensus strategies such as Raft, lockstep protocols, CRDTs, and simple gossip. 
-All of these are valid consensus schemes, and peer-to-peer software, such as that built on Kinode, must choose the correct strategy for a particular task, program, or application. - -### Decentralized Software Outcompetes Centralized Software through Permissionlessness and Composability - -Therefore, any serious decentralized network must identify and prioritize the features that guarantee permissionless and composable development. -Those features include: - -- a persistent software environment (software can run forever once deployed) -- client diversity (more actors means fewer monopolies) -- perpetual backwards-compatibility -- a robust node network that ensures individual ownership of software and data - -### Decentralized Software Requires Decentralized Governance - -The above properties are achieved by governance. -Successful protocols launched on Kinode will be ones that decentralize their governance in order to maintain these properties. -Kinode believes that systems that don't proactively specify their point of control will eventually centralize, even if unintentionally. -The governance of Kinode itself must be designed to encourage decentralization, playing a role in the publication and distribution of userspace software protocols. -In practice, this looks like an on-chain permissionless App Store. - -### Good Products Use Existing Tools - -Kinode is a novel combination of existing technologies, protocols, and ideas. -Our goal is not to create a new programming language or consensus algorithm, but to build a new execution environment that integrates the best of existing tools. 
-Our current architecture relies on the following systems: - -- ETH: a trusted execution layer -- Rust: a performant, expressive, and popular programming language -- Wasm: a portable, powerful binary format for executable programs -- Wasmtime: a standalone Wasm runtime - -In addition, Kinode is inspired by the [Bytecode Alliance](https://bytecodealliance.org/) and their vision for secure, efficient, and modular software. -Kinode makes extensive use of their tools and standards. diff --git a/src/getting_started/getting_started.md b/src/getting_started/getting_started.md deleted file mode 100644 index 635e9655..00000000 --- a/src/getting_started/getting_started.md +++ /dev/null @@ -1,21 +0,0 @@ -# The Kinode Book - -Kinode is a decentralized operating system, peer-to-peer app framework, and node network designed to simplify the development and deployment of decentralized applications. -It is also a _sovereign cloud computer_, in that Kinode can be deployed anywhere and act as a server controlled by anyone. -Ultimately, Kinode facilitates the writing and distribution of software that runs on privately-held, personal server nodes or node clusters. - -You are reading the Kinode Book, which is a technical document targeted at developers. - -[Read the Kinode Whitepaper here.](https://kino.casa/whitepaper.pdf) - -If you're a non-technical user: - -- Learn about Kinode at the [Kinode blog](https://kinode.org/blog). -- Spin up a hosted node at [Valet](https://valet.uncentered.systems). -- [Follow us on X](https://x.com/intent/follow?screen_name=Kinode). -- Join the conversation on [our Discord](https://discord.gg/mYDj74NkfP) or [Telegram](https://t.me/KinodeOS). - -If you're a developer: - -- Get your hands dirty with the [Quick Start](../getting_started/quick_start.md), or the more detailed [My First Kinode Application](../my_first_app/build_and_deploy_an_app.md) tutorial. -- Learn how to boot a Kinode locally in the [Installation](../getting_started/install.md) section. 
diff --git a/src/getting_started/install.md b/src/getting_started/install.md deleted file mode 100644 index 7f0e666d..00000000 --- a/src/getting_started/install.md +++ /dev/null @@ -1,159 +0,0 @@ -# Installation - -This section will teach you how to get the Kinode core software, required to run a live node. -After acquiring the software, you can learn how to run it and [Join the Network](./login.md). - -- If you are just interested in starting development as fast as possible, skip to [My First Kinode Application](../my_first_app/build_and_deploy_an_app.md). -- If you want to run a Kinode without managing it yourself, use the [Valet](https://valet.uncentered.systems) hosted service. -- If you want to make edits to the Kinode core software, see [Build From Source](#option-3-build-from-source). - -## Option 1: Download Binary (Recommended) - -Kinode core distributes pre-compiled binaries for MacOS and Linux Debian derivatives, like Ubuntu. - -First, get the software itself by downloading a [precompiled release binary](https://github.com/kinode-dao/kinode/releases/latest). -Choose the correct binary for your particular computer architecture and OS. -There is no need to download the `simulation-mode` binary — it is used behind the scenes by [`kit`](../kit/boot-fake-node.md). -Extract the `.zip` file: the binary is inside. - -Note that some operating systems, particularly Apple, may flag the download as suspicious. - -### Apple - -First, attempt to run the binary, which Apple will block. -Then, go to `System Settings > Privacy and Security` and click to `Open Anyway` for the `kinode` binary: - -![Apple unknown developer](../assets/apple-unknown-developer.png) - -## Option 2: Docker - -Kinode can also be run using Docker. -MacOS and Debian derivatives of Linux, like Ubuntu, are supported. -Windows may work but is not officially supported. - -### Installing Docker - -First, install Docker. 
-Instructions will be different depending on your OS, but it is recommended to follow [the method outlined on the official Docker website.](https://docs.docker.com/get-docker/)
-
-If you are using Linux, make sure to perform any necessary post-install steps afterwards.
-[The official Docker website has optional post-install instructions.](https://docs.docker.com/engine/install/linux-postinstall/)
-
-### Docker Image
-
-The image expects a volume mounted at `/kinode-home`.
-This volume may be empty or may contain another Kinode's data.
-It will be used as the home directory of your Kinode.
-Each volume is unique to each Kinode.
-If you want to run multiple Kinodes, create multiple volumes.
-
-The image includes EXPOSE directives for TCP port `8080` and TCP port `9000`.
-Port `8080` is used for serving the Kinode web dashboard over HTTP, and it may be mapped to a different port on the host.
-Port `9000` is optional and is only required for a direct node.
-
-If you are running a direct node, you **must** map port `9000` to the same port on the host and on your router.
-Otherwise, your Kinode will not be able to connect to the rest of the network.
-
-Run the following command to create a volume:
-
-```bash
-# Replace this variable with your node's intended name
-export NODENAME=helloworld.os
-
-docker volume create kinode-${NODENAME}
-```
-
-Then run the following command to create the container.
-Replace `kinode-${NODENAME}` with the name of your volume if you prefer.
-To map the port to a different port (for example, `80` or `6969`), change `8080:8080` to `PORT:8080`, where `PORT` is the port on the host machine.
-
-```bash
-docker run -p 8080:8080 --rm -it --name kinode-${NODENAME} \
-    --mount type=volume,source=kinode-${NODENAME},destination=/kinode-home \
-    nick1udwig/kinode
-```
-Alternatively, you can run it detached: - -```bash -docker run -p 8080:8080 --rm -dt --name kinode-${NODENAME} \ - --mount type=volume,source=kinode-${NODENAME},destination=/kinode-home \ - nick1udwig/kinode -``` - -Check the status of your Docker processes with `docker ps`. -To start and stop the container, use `docker start kinode-${NODENAME}` or `docker stop kinode-${NODENAME}`. - -As long as the volume is not deleted, your data remains intact upon removal or stop. -If you need further help with Docker, [access the official Docker documentation here](https://docs.docker.com/manuals/). - -## Option 3: Build From Source - -You can compile the binary from source using the following instructions. -This is only recommended if: - -1. The [pre-compiled binaries](#download-binary) don't work on your system and you can't use [Docker](#docker) for some reason, or -2. You need to make changes to the Kinode core source. - -### Acquire Dependencies - -If your system doesn't already have `cmake` and OpenSSL, download them: - -#### Linux - -```bash -sudo apt-get install cmake libssl-dev -``` - -#### Mac - -```bash -brew install cmake openssl -``` - -### Acquire Rust and various tools - -Install Rust and some `cargo` tools, by running the following in your terminal: - -```bash -curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -cargo install wasm-tools -rustup install nightly -rustup target add wasm32-wasip1 --toolchain nightly -cargo install cargo-wasi -``` - -For more information, or debugging, see the [Rust lang install page](https://www.rust-lang.org/tools/install). - -Kinode uses the stable build of Rust, but the Wasm processes use the **nightly** build of Rust.. -You will want to run the command `rustup update` on a regular basis to keep your version of the language current, especially if you run into issues compiling the runtime down the line. 
- 
-You will also need to [install NPM](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) in order to build the Wasm processes that are bundled with the core binary.
-
-### Acquire Kinode core
-
-Clone and set up the repository:
-
-```bash
-git clone https://github.com/kinode-dao/kinode.git
-```
-
-Build the packages that are bundled with the binary:
-```bash
-cargo run -p build-packages
-```
-
-Build the binary:
-
-```bash
-# OPTIONAL: --release flag
-cargo build -p kinode
-```
-
-The resulting binary will be at path `kinode/target/debug/kinode`.
-(Note that this is the binary crate inside the `kinode` workspace.)
-
-You can also build the binary with the `--release` flag.
-Building without `--release` will produce the binary significantly faster, as it does not perform any optimizations during compilation, but the node will run much more slowly after compiling.
-The release binary will be at path `kinode/target/release/kinode`.
diff --git a/src/getting_started/kimap.md b/src/getting_started/kimap.md
deleted file mode 100644
index ff3812dd..00000000
--- a/src/getting_started/kimap.md
+++ /dev/null
@@ -1,104 +0,0 @@
-# Kimap and KNS
-
-Kimap is an onchain namespace for the Kinode operating system.
-It serves as the base-level shared global state that all nodes use to share critical signaling data with the entire network.
-Kimap is organized as a hierarchical path system and has mutable and immutable keys.
-
-Historically, discoverability of both *peers* and *content* has been a major barrier for peer-to-peer developers.
-Discoverability can present both social barriers (finding a new user on a game or chat) and technical obstacles (automatically acquiring networking information for a particular username).
-Many solutions have been designed to address this problem, but so far, the "devex" (developer experience) of deploying centralized services has continued to outcompete the p2p discoverability options available.
-Kimap aims to change this by providing a single, shared, onchain namespace that can be used to resolve to arbitrary elements of the Kinode network.
-
-1. All keys are strings containing exclusively the characters 0-9, a-z (lowercase), and `-` (hyphen), and are at most 63 characters long.
-2. A key may be one of two types: a name-key or a data-key.
-3. Every name-key may create sub-entries directly beneath it.
-4. Every name-key is an ERC-721[^1] NFT (non-fungible token), with a connected token-bound account[^2] at a counterfactual address.
-5. The implementation of the token-bound account may be set when a name-key is created.
-6. If the parent entry of a name-key has a token-bound account implementation set (a "gene"), then the name-key will automatically inherit this implementation.
-7. Every name-key may inscribe data in data-keys directly beneath it.
-8. A data-key may be mutable (a "note", prepended with `~`) or immutable (a "fact", prepended with `!`).
-
-[^1]: https://eips.ethereum.org/EIPS/eip-721
-[^2]: https://ercs.ethereum.org/ERCS/erc-6551
-
-See the Kinode whitepaper for a full specification, which goes into detail regarding token-bound accounts, sub-entry management, the use of data keys, and protocol extensibility.
-
-Kimap is tightly integrated into the operating system.
-At the runtime level, networking identities are verified against the kimap namespace.
-In userspace, programs such as the App Store make use of kimap by storing and reading data from it to define global state, such as apps available for download.
-
-## KNS: Kinode Name System
-
-One of the most important features of a peer-to-peer network is the ability to maintain a unique and persistent identity.
-This identity must be self-sovereign, unforgeable, and easy to discover by peers.
-Kinode uses a PKI (public-key infrastructure) that runs *within* kimap to achieve this.
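Kinode names are themselves kimap name-keys, so they follow the character rules listed earlier on this page. As a quick illustration (Python, not part of Kinode itself), the validity check amounts to a single regular expression:

```python
# Sketch: check a string against the kimap key rules described above
# (only 0-9, lowercase a-z, and hyphen; at most 63 characters).
import re

KIMAP_KEY_RE = re.compile(r"[0-9a-z-]{1,63}")

def is_valid_kimap_key(key: str) -> bool:
    # fullmatch ensures the whole string (not just a prefix) is valid
    return KIMAP_KEY_RE.fullmatch(key) is not None

print(is_valid_kimap_key("my-app"))   # True
print(is_valid_kimap_key("My_App"))   # False: uppercase and underscore
print(is_valid_kimap_key("a" * 64))   # False: too long
```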
-It should be noted that, in our system, the concepts of `domain`, `identity`, and `username` are identical and interchangeable.
-
-It is also important to understand that KNS can absorb and support other onchain identity protocols.
-KNS is not an attempt to replace or compete with existing onchain identity primitives such as ENS and Lens; rather, it can integrate them.
-This has already been done for the ENS protocol.
-
-Kinode names are registered by a wallet and owned in the form of an NFT like any other kimap namespace entry.
-They contain metadata necessary to cover both:
-
-- **Domain provenance** - to demonstrate that the NFT owner has provenance of a given Kinode identity.
-- **Domain resolution** - to be able to route messages to a given identity on the Kinode network.
-
-It's easy enough to check for provenance of a given Kinode identity.
-If you have a Kinode domain, you can prove ownership by signing a message with the wallet that owns the domain.
-However, to effectively use your Kinode identity as a domain name for your personal server, KNS domains have routing information, similar to a DNS record, that points to an IP address.
-
-### Domain Resolution
-
-A KNS identity can either be `direct` or `indirect`.
-When users first boot a node, they may decide between these two types as they create their initial identity.
-Direct nodes share their literal IP address and port in their metadata, allowing other nodes to message them directly.
-Again, this is similar to registering a WWW domain name and pointing it at your web server.
-However, running a direct node is both technically demanding (your machine must remain remotely accessible) and a security risk (you must open ports on the server to the public internet).
-Therefore, indirect nodes are the best choice for the majority of users who choose to run their own node.
-
-Instead of sharing their IP and port, indirect nodes simply post a list of _routers_ onchain.
-These routers are other _direct_ nodes that have agreed to forward messages to indirect nodes. -When a node wants to send a message to an indirect node, it first finds the node onchain, and then sends the message to one of the routers listed in the node's metadata. -The router is responsible for forwarding the message to the indirect node and similarly forwarding messages from that node back to the network at large. - -### Specification Within Kimap - -The definition of a node identity in the KNS protocol is any kimap entry that has: - -1. A `~net-key` note AND -2. Either: - a. A `~routers` note OR - b. An `~ip` note AND at least one of: - - `~tcp-port` note - - `~udp-port` note - - `~ws-port` note - - `~wt-port` note - -Direct nodes are those that publish an `~ip` and one or more of the port notes. -Indirect nodes are those that publish `~routers`. - -The data stored at `~net-key` must be 32 bytes corresponding to an Ed25519 public key. -This is a node's signing key which is used across a variety of domains to verify ownership, including in the end-to-end encrypted networking protocol between nodes. -The owner of a namespace entry/node identity may rotate this key at any time by posting a transaction to kimap mutating the data stored at `~net-key`. - -The bytes at a `~routers` entry must parse to an array of UTF-8 strings. -These strings should be node identities. -Each node in the array is treated by other participants in the networking protocol as a router for the parent entry. -Routers should themselves be direct nodes. -If a string in the array is not a valid node identity, or it is a valid node identity but not a direct one, that router will not be used by the networking protocol. -Further discussion of the networking protocol specification can be found [here](../system/networking_protocol.md). - -The bytes at an `~ip` entry must be either 4 or 16 big-endian bytes. -A 4-byte entry represents a 32-bit unsigned integer and is interpreted as an IPv4 address. 
-A 16-byte entry represents a 128-bit unsigned integer and is interpreted as an IPv6 address.
-
-Lastly, the bytes at any of the following port entries must be 2 big-endian bytes corresponding to a 16-bit unsigned integer:
-
-1. `~tcp-port` sub-entry
-2. `~udp-port` sub-entry
-3. `~ws-port` sub-entry
-4. `~wt-port` sub-entry
-
-These integers are translated to port numbers.
-In practice, port numbers used are between 9000 and 65535.
-Ports 8000-8999 are usually reserved for HTTP server use.
\ No newline at end of file
diff --git a/src/getting_started/login.md b/src/getting_started/login.md
deleted file mode 100644
index 2d60cf77..00000000
--- a/src/getting_started/login.md
+++ /dev/null
@@ -1,193 +0,0 @@
-# Join the Network
-
-This page discusses joining the network with a locally-run Kinode.
-To instead join with a hosted node, see [Valet](https://valet.uncentered.systems).
-
-These directions are particular to the Kinode beta release.
-Kinode is in active development on Optimism.
-
-## Starting the Kinode
-
-Start a Kinode using the binary acquired in the [previous section](./install.md).
-Locate the binary on your system (e.g., if you built from source yourself, the binary will be in the repository at `./kinode/target/debug/kinode` or `./kinode/target/release/kinode`).
-Print out the arguments expected by the binary:
-
-```
-$ ./kinode --help
-A General Purpose Sovereign Cloud Computing Platform
-
-Usage: kinode [OPTIONS] <home>
-
-Arguments:
-  <home>  Path to home directory
-
-Options:
-  -p, --port <PORT>
-      Port to bind [default: first unbound at or above 8080]
-  --ws-port <WS_PORT>
-      Kinode internal WebSockets protocol port [default: first unbound at or above 9000]
-  --tcp-port <TCP_PORT>
-      Kinode internal TCP protocol port [default: first unbound at or above 10000]
-  -v, --verbosity <VERBOSITY>
-      Verbosity level: higher is more verbose [default: 0]
-  -l, --logging-off
-      Run in non-logging mode (toggled at runtime by CTRL+L): do not write all terminal output to file in .terminal_logs directory
-  --reveal-ip <REVEAL_IP>
-      If set to false, as an indirect node, always use routers to connect to other nodes.
-  -d, --detached
-      Run in detached mode (don't accept input)
-  --rpc <RPC>
-      Add a WebSockets RPC URL at boot
-  --password <PASSWORD>
-      Node password (in double quotes)
-  --max-log-size <MAX_LOG_SIZE>
-      Max size of all logs in bytes; setting to 0 -> no size limit (default 16MB)
-  --number-log-files <NUMBER_LOG_FILES>
-      Number of logs to rotate (default 4)
-  --max-peers <MAX_PEERS>
-      Maximum number of peers to hold active connections with (default 32)
-  --max-passthroughs <MAX_PASSTHROUGHS>
-      Maximum number of passthroughs to serve as a router (default 0)
-  --soft-ulimit <SOFT_ULIMIT>
-      Enforce a static maximum number of file descriptors (default fetched from system)
-  --process-verbosity <PROCESS_VERBOSITY>
-      ProcessId: verbosity JSON object [default: ]
-  -h, --help
-      Print help
-  -V, --version
-      Print version
-```
-
-A home directory must be supplied; this is where the node will store its files.
-The `--rpc` flag is an optional `wss://` WebSocket link to an Ethereum RPC, allowing the Kinode to send and receive Ethereum transactions; it is used by the [identity system](../getting_started/kimap.md#kns-kinode-name-system), and a key can be acquired as described [below](#acquiring-an-rpc-api-key).
-If this is not supplied, the node will use a set of default RPC providers served by other nodes on the network.
-If the `--port` flag is supplied, Kinode will attempt to bind that port for serving HTTP and will exit if that port is already taken. -If no `--port` flag is supplied, Kinode will bind to `8080` if it is available, or the first port above `8080` if not. - -
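The "first unbound at or above 8080" behavior described above can be sketched as follows (Python, illustrative only; this is not Kinode's actual implementation):

```python
# Sketch of the default port-selection rule: bind 8080 if it is free,
# otherwise take the first free port above it.
import socket

def first_unbound_port(start: int = 8080) -> int:
    port = start
    while port < 65536:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                # a successful bind means the port is unbound
                s.bind(("127.0.0.1", port))
                return port
            except OSError:
                port += 1
    raise RuntimeError("no unbound port found")
```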
-### Acquiring an RPC API Key
-
-Create a new "app" on [Alchemy](https://dashboard.alchemy.com/apps) for Optimism Mainnet.
-
-![Alchemy Create App](../assets/alchemy-create-app.png)
-
-Copy the WebSocket API key from the API Key button:
-
-![Alchemy API Key](../assets/alchemy-api-key.png)
-
-#### Alternative to Alchemy
-
-As an alternative to using Alchemy's RPC API key, [Infura's](https://app.infura.io) endpoints may be used.
-Upon creating an Infura account, the first key is already created and titled 'My First Key'.
-Click on the title to edit the key.
-
-![Infura My First Key](../assets/my_first_key_infura.png)
-
-Next, check the box next to Optimism "MAINNET".
-Once it is selected, click "SAVE CHANGES".
-Then, at the top, click "Active Endpoints".
-
-![Create Endpoint Infura](../assets/create_endpoint_infura.png)
-
-On the "Active Endpoints" tab, there are tabs for "HTTPS" and "WebSockets".
-Select the WebSockets tab.
-Copy this endpoint and use it in place of the Alchemy endpoint in the following step, "Running the Binary".
-
-![Active Endpoints Infura](../assets/active_endpoints_infura.png)
-
- -### Running the Binary - -In a terminal window, run: - -```bash -./kinode path/to/home -``` - -where `path/to/home` is the directory where you want your new node's files to be placed, or, if booting an existing node, is that node's existing home directory. - -A new browser tab should open, but if not, look in the terminal for this line: - -``` -login or register at http://localhost:8080 -``` - -and open that `localhost` address in a web browser. - -## Registering an Identity - -Next, register an identity. - -![Register start](../assets/register-start.png) - -Click `Register .os Name`. -If you've already got a wallet, proceed to [Connecting the Wallet](#connecting-the-wallet). -Otherwise, you're going to need to [Acquire a Wallet](#aside-acquiring-a-wallet). - -### Aside: Acquiring a Wallet - -To register an identity, Kinode must send an Ethereum transaction, which requires ETH and a cryptocurrency wallet. -While many wallets will work, the examples below use Metamask. -Install Metamask [here](https://metamask.io/download/) if you don't already have it. - -### Connecting the Wallet - -After clicking `Register .os Name`, follow the prompts in the `Connect a Wallet` modal (if you haven't already connected a wallet): - -![Register connect wallet](../assets/register-connect-wallet.png) - -### Aside: Bridging ETH to Optimism - -Bridge ETH to Optimism using the [official bridge](https://app.optimism.io/bridge). -Many exchanges also allow sending ETH directly to Optimism wallets. - -### Setting Up Networking (Direct vs. Routed Nodes) - -When registering on Kinode, you may choose between running a direct or indirect (routed) node. -Most users should use an indirect node. -To do this, simply leave the box below name registration unchecked. - -![Register select name](../assets/register-select-name.png) - -An indirect node connects to the network through a router, which is a direct node that serves as an intermediary, passing packets from sender to receiver. 
-Routers make connecting to the network convenient, and so are the default.
-If you are connecting from a laptop that isn't always on, or that changes WiFi networks, use an indirect node.
-
-A direct node connects directly, without intermediary, to other nodes (though they may, themselves, be using a router).
-Direct nodes may have better performance, since they remove middlemen from connections.
-Direct nodes also reduce the number of third parties that know about the connection between your node and your peer's node (if both you and your peer use direct nodes, there will be no third party involved).
-
-Use an indirect node unless you are familiar with running servers.
-A direct node must be served from a static IP and port, since these are registered on the Ethereum network and are how other nodes will attempt to contact you.
-
-Regardless, all packets, passed directly or via a router, are end-to-end encrypted.
-Only you and the recipient can read messages.
-
-As a direct node, your IP is published on the blockchain.
-As an indirect node, only your router knows your IP.
-
-### Sending the Registration Transaction
-
-After clicking `Register .os name`, click through the wallet prompts to send the transaction:
-
-![Register confirm wallet](../assets/register-confirm-wallet.png)
-
-![Register metamask](../assets/register-metamask.png)
-
-You'll see your node name being pre-committed, and then you'll send another transaction to mint:
-
-![Register precommitting](../assets/register-precommitting.png)
-
-![Register mint](../assets/register-mint.png)
-
-### What Does the Password Do?
-
-Finally, you'll set your password.
-
-The password encrypts the node's networking key.
-The networking key is how your node communicates securely with other nodes, and how other nodes can be certain that you are who you say you are.
-
-## Welcome to the Network
-
-After setting the node password, you will be greeted with the homepage.
- -![Homepage](../assets/register-homepage.png) - -Try downloading, installing, and using some apps on the App Store. -Come ask for recommendations in the [Kinode Discord](https://discord.gg/mYDj74NkfP)! diff --git a/src/kit/boot-fake-node.md b/src/kit/boot-fake-node.md index bb93bb6d..1d6f3d47 100644 --- a/src/kit/boot-fake-node.md +++ b/src/kit/boot-fake-node.md @@ -80,80 +80,3 @@ Options: -h, --help Print help ``` - -### `--runtime-path` - -short: `-r` - -Pass to build a local Kinode core repo and use the resulting binary to boot a fake node, e.g. - -``` -kit boot-fake-node --runtime-path ~/git/kinode -``` - -for a system with the Kinode core repo living at `~/git/kinode`. - -Overrides `--version`. - -### `--version` - -short: `-v` - -Fetch and run a specific version of the binary; defaults to most recent version. -Overridden by `--runtime-path`. - -### `--port` - -short: `-p` - -Run the fake node on this port; defaults to `8080`. - -### `--home` - -short: `-o` - -Path to home directory for fake node; defaults to `/tmp/kinode-fake-node`. - -### `--fake-node-name` - -short: `-f` - -The name of the fake node; defaults to `fake.os`. - -### `--fakechain-port` - -Run the anvil chain on this port; defaults to `8545`. -Additional fake nodes must point to the same port to connect to the chain. - -### `--rpc` - -The Ethereum RPC endpoint to use, if desired. - -### `--persist` - -Persist the node home directory after exit, rather than cleaning it up. - -Example usage: - -``` bash -kit boot-fake-node --persist --home ./my-fake-node -``` - -After shutting down the node, to run it again: - -```bash -kit boot-fake-node --home ./my-fake-node -``` - -### `--password` - -The password of the fake node; defaults to "`secret`". - -### `--release` - -If `--runtime-path` is given, build the runtime for release; default is debug. 
-The tradeoffs between the release and default version are described [here](https://doc.rust-lang.org/book/ch01-03-hello-cargo.html?highlight=release#building-for-release). - -### `--verbosity` - -Set the verbosity of the node; higher is more verbose; default is `0`, max is `3`. diff --git a/src/kit/boot-real-node.md b/src/kit/boot-real-node.md index 36b3b77b..2fe1b4fd 100644 --- a/src/kit/boot-real-node.md +++ b/src/kit/boot-real-node.md @@ -48,50 +48,3 @@ Options: --verbosity Verbosity of node: higher is more verbose [default: 0] -h, --help Print help ``` - -### `--runtime-path` - -short: `-r` - -Pass to build a local Kinode core repo and use the resulting binary to boot a real node, e.g. - -``` -kit boot-real-node --runtime-path ~/git/kinode -``` - -for a system with the Kinode core repo living at `~/git/kinode`. - -Overrides `--version`. - -### `--version` - -short: `-v` - -Fetch and run a specific version of the binary; defaults to most recent version. -Overridden by `--runtime-path`. - -### `--port` - -short: `-p` - -Run the real node on this port; defaults to `8080`. - -### `--home` - -short: `-o` - -Required field. -Path to home directory for real node. - -### `--rpc` - -The Ethereum RPC endpoint to use, if desired. - -### `--release` - -If `--runtime-path` is given, build the runtime for release; default is debug. -The tradeoffs between the release and default version are described [here](https://doc.rust-lang.org/book/ch01-03-hello-cargo.html?highlight=release#building-for-release). - -### `--verbosity` - -Set the verbosity of the node; higher is more verbose; default is `0`, max is `3`. 
diff --git a/src/kit/build-start-package.md b/src/kit/build-start-package.md
index eadce182..82ac6441 100644
--- a/src/kit/build-start-package.md
+++ b/src/kit/build-start-package.md
@@ -61,109 +61,3 @@ Options:
   -h, --help Print help
 ```
-
-### Optional positional arg: `DIR`
-
-The package directory to build, install, and start on the node; defaults to the current working directory.
-
-### `--port`
-
-short: `-p`
-
-The localhost port of the node; defaults to `8080`.
-To interact with a remote node, see [here](../hosted-nodes.md#using-kit-with-your-hosted-node).
-
-### `--download-from`
-
-short: `-d`
-
-The mirror to download dependencies from (default: package `publisher`).
-
-### `--world`
-
-short: `-w`
-
-[WIT `world`](../system/process/wit_apis.md) to use.
-Not required for Rust processes; use for py or js.
-
-### `--local-dependency`
-
-short: `-l`
-
-A path to a package that satisfies a build dependency.
-Can be specified multiple times.
-
-### `--add-to-api`
-
-short: `-a`
-
-A path to a file to include in the API published alongside the package.
-Can be specified multiple times.
-
-### `--no-ui`
-
-Do not build the web UI for the process.
-Does nothing if passed with `--ui-only`.
-
-### `--ui-only`
-
-Build ONLY the UI for a package with a UI.
-Otherwise, for a package with a UI, both the package and the UI will be built.
-
-### `--include`
-
-short: `-i`
-
-Only build these processes/UIs within the package.
-Can be specified multiple times.
-
-If not specified, build all.
-
-### `--exclude`
-
-short: `-e`
-
-Do not build these processes/UIs within the package.
-Can be specified multiple times.
-
-If not specified, build all.
-
-### `--skip-deps-check`
-
-short: `-s`
-
-Don't check for dependencies.
-
-### `--features`
-
-Build the package with the given [cargo features](https://doc.rust-lang.org/cargo/reference/features.html).
-
-Features can be used as shown [here](https://doc.rust-lang.org/cargo/reference/features.html#command-line-feature-options).
-Currently the only feature supported system-wide is `simulation-mode`.
-
-### `--reproducible`
-
-short: `-r`
-
-Make a reproducible build with a deterministic hash.
-
-Rust does not produce reproducible builds unless:
-1. The path of the source is the same.
-2. Compiler versions match (e.g., `rustc`, `gcc`, `ld`).
-3. `build.rs` is deterministic.
-
-`kit` allows reproducible builds by building the package inside a Docker image, which controls 1 and 2.
-
-The Docker image is published for `x86_64` Linux machines specifically, but also works on `x86_64` macOS machines.
-
-### `--force`
-
-short: `-f`
-
-Don't check whether the package needs to be rebuilt: just build it.
-
-### `--verbose`
-
-short: `-v`
-
-Always output stdout and stderr if set.
diff --git a/src/kit/build.md b/src/kit/build.md
index fdd6e138..1ab7dab9 100644
--- a/src/kit/build.md
+++ b/src/kit/build.md
@@ -101,111 +101,3 @@ Options:
   -h, --help Print help
 ```
-
-### Optional positional arg: `DIR`
-
-The package directory to build; defaults to the current working directory.
-
-### `--no-ui`
-
-Do not build the web UI for the process.
-Does nothing if passed with `--ui-only`.
-
-### `--ui-only`
-
-Build ONLY the UI for a package with a UI.
-Otherwise, for a package with a UI, both the package and the UI will be built.
-
-### `--include`
-
-short: `-i`
-
-Only build these processes/UIs within the package.
-Can be specified multiple times.
-
-If not specified, build all.
-
-### `--exclude`
-
-short: `-e`
-
-Do not build these processes/UIs within the package.
-Can be specified multiple times.
-
-If not specified, build all.
-
-### `--skip-deps-check`
-
-short: `-s`
-
-Don't check for dependencies.
-
-### `--features`
-
-Build the package with the given [cargo features](https://doc.rust-lang.org/cargo/reference/features.html).
-
-Features can be used as shown [here](https://doc.rust-lang.org/cargo/reference/features.html#command-line-feature-options).
-Currently the only feature supported system-wide is `simulation-mode`.
-
-### `--port`
-
-short: `-p`
-
-Node to pull dependencies from.
-A package's dependencies can be satisfied by either:
-1. A live node (the one running at the port given here), or
-2. Local dependencies (specified using [`--local-dependency`](#--local-dependency), below).
-
-### `--download-from`
-
-short: `-d`
-
-The mirror to download dependencies from (default: package `publisher`).
-
-### `--world`
-
-short: `-w`
-
-[WIT `world`](../system/process/wit_apis.md) to use.
-Not required for Rust processes; use for py or js.
-
-### `--local-dependency`
-
-short: `-l`
-
-A path to a package that satisfies a build dependency.
-Can be specified multiple times.
-
-### `--add-to-api`
-
-short: `-a`
-
-A path to a file to include in the API published alongside the package.
-Can be specified multiple times.
-
-### `--reproducible`
-
-short: `-r`
-
-Make a reproducible build with a deterministic hash.
-
-Rust does not produce reproducible builds unless:
-1. The path of the source is the same.
-2. Compiler versions match (e.g., `rustc`, `gcc`, `ld`).
-3. `build.rs` is deterministic.
-
-`kit` allows reproducible builds by building the package inside a Docker image, which controls 1 and 2.
-
-The Docker image is published for `x86_64` Linux machines specifically, but also works on `x86_64` macOS machines.
-
-### `--force`
-
-short: `-f`
-
-Don't check whether the package needs to be rebuilt: just build it.
-
-### `--verbose`
-
-short: `-v`
-
-Always output stdout and stderr if set.
diff --git a/src/kit/chain.md b/src/kit/chain.md
index 8910b936..112ddfe8 100644
--- a/src/kit/chain.md
+++ b/src/kit/chain.md
@@ -33,20 +33,3 @@ Options:
   -v, --verbose If set, output stdout and stderr
   -h, --help Print help
 ```
-
-### `--port`
-
-Port to run anvil fakechain on.
-Defaults to `8545`.
-
-### `--version`
-
-Kinode binary version to run chain for.
-Different Kinode versions have different `foundry` compatibility due to breaking changes in chain state formatting. -`kit` will prompt you to install the proper version of `foundry`. - -### `--verbose` - -short: `-v` - -Always output stdout and stderr if set. diff --git a/src/kit/connect.md b/src/kit/connect.md index 7c168151..dadae2e8 100644 --- a/src/kit/connect.md +++ b/src/kit/connect.md @@ -53,31 +53,3 @@ Options: -p, --port Remote (host) port Kinode is running on -h, --help Print help ``` - -### Optional positional arg: `LOCAL_PORT` - -The local port to bind for the SSH tunnel. -This is the port to direct `kit` commands to in order to have them routed to the hosted node. - -Defaults to `9090`. - -### `--disconnect` - -short: `-d` - -If set, disconnect the tunnel with given `LOCAL_PORT`. -Else, connect a new tunnel. - -### `--host` - -short: `-o` - -Connect tunnel to this host. -Required when connecting a new tunnel; not required when disconnecting. - -### `--port` - -short: `-p` - -The remote port to tunnel to. -If not given when creating a new tunnel, `kit` will first determine the remote port by creating a short-lived SSH connection to the remote host, then use that port. diff --git a/src/kit/dev-ui.md b/src/kit/dev-ui.md index 8c093e97..e25a475f 100644 --- a/src/kit/dev-ui.md +++ b/src/kit/dev-ui.md @@ -31,25 +31,3 @@ Options: -s, --skip-deps-check If set, do not check for dependencies -h, --help Print help ``` - -### Optional positional arg: `DIR` - -The UI-enabled package directory to serve; defaults to current working directory. - -### `--port` - -short: `-p` - -For nodes running on localhost, the port of the node; defaults to `8080`. -`--port` is overridden by `--url` if both are supplied. - -### `--release` - -Create a production build. -Defaults to dev build. - -### `--skip-deps-check` - -short: `-s` - -Don't check for dependencies. 
diff --git a/src/kit/inject-message.md b/src/kit/inject-message.md
index 288f4dfa..56d918b8 100644
--- a/src/kit/inject-message.md
+++ b/src/kit/inject-message.md
@@ -40,43 +40,3 @@ Options:
   -l, --non-block If set, don't block on the full node response
   -h, --help Print help
 ```
-
-### First positional arg: `PROCESS`
-
-The process to send the injected message to, in the form of `<process>:<package>:<publisher>`.
-
-### Second positional arg: `BODY_JSON`
-
-The message body.
-
-### `--port`
-
-short: `-p`
-
-For nodes running on localhost, the port of the node; defaults to `8080`.
-`--port` is overridden by `--url` if both are supplied.
-
-### `--node`
-
-short: `-n`
-
-Node to target (i.e. the node portion of the address).
-
-E.g., the following, sent to the port running `fake.os`, will be forwarded from `fake.os`'s HTTP server to `fake2@foo:foo:template.os`:
-
-``` bash
-kit inject-message foo:foo:template.os '{"Send": {"target": "fake.os", "message": "wow, it works!"}}' --node fake2.os
-```
-
-### `--blob`
-
-short: `-b`
-
-Path to file to include as `lazy_load_blob`.
-
-### `--non-block`
-
-short: `-l`
-
-Don't block waiting for a Response from the target process.
-Instead, inject the message and immediately return.
diff --git a/src/kit/install.md b/src/kit/install.md
deleted file mode 100644
index b341e6f1..00000000
--- a/src/kit/install.md
+++ /dev/null
@@ -1,85 +0,0 @@
-# Install `kit`
-
-These documents describe some ways you can use these tools, but do not attempt to be completely exhaustive.
-You are encouraged to make use of the `--help` flag, which can be used for the top-level `kit` command:
-
-```
-$ kit --help
-Development toolkit for Kinode
-
-Usage: kit <COMMAND>
-
-Commands:
-  boot-fake-node       Boot a fake node for development [aliases: f]
-  boot-real-node       Boot a real node [aliases: e]
-  build                Build a Kinode package [aliases: b]
-  build-start-package  Build and start a Kinode package [aliases: bs]
-  chain                Start a local chain for development [aliases: c]
-  connect              Connect (or disconnect) a ssh tunnel to a remote server
-  dev-ui               Start the web UI development server with hot reloading (same as `cd ui && npm i && npm run dev`) [aliases: d]
-  inject-message       Inject a message to a running Kinode [aliases: i]
-  new                  Create a Kinode template package [aliases: n]
-  publish              Publish or update a package [aliases: p]
-  remove-package       Remove a running package from a node [aliases: r]
-  reset-cache          Reset kit cache (Kinode core binaries, logs, etc.)
-  run-tests            Run Kinode tests [aliases: t]
-  setup                Fetch & setup kit dependencies
-  update               Fetch the most recent version of kit
-  view-api             Fetch the list of APIs or a specific API [aliases: v]
-  help                 Print this message or the help of the given subcommand(s)
-
-Options:
-  -v, --version  Print version
-  -h, --help     Print help
-```
-
-or for any of the subcommands, e.g.:
-
-```
-kit new --help
-```
-
-The first chapter of the [My First Kinode App tutorial](../my_first_app/chapter_1.md) shows the `kit` tools in action.
-
-## Getting kit
-
-`kit` requires Rust.
-To get `kit`, run
-
-```bash
-# Install Rust if you don't have it.
-curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
-
-# Install `kit`.
-cargo install --git https://github.com/kinode-dao/kit --locked
-```
-
-To update, run that same command or
-
-```
-kit update
-```
-
-You can find the source for `kit` at [https://github.com/kinode-dao/kit](https://github.com/kinode-dao/kit).
-
-You can find a video guide that walks through setting up `kit` [here](https://www.youtube.com/watch?v=N8B_s_cm61k).
-
-## Logging
-
-Logs are printed to the terminal and stored, by default, at `/tmp/kinode-kit-cache/logs/log.log`.
-The default logging level is `info`.
-Other valid logging levels are: `debug`, `warning`, and `error`.
-
-These defaults can be changed by setting environment variables:
-
-Environment Variable | Description
--------------------- | -----------
-`KIT_LOG_PATH`       | Set log path (default `/tmp/kinode-kit-cache/logs/log.log`).
-`RUST_LOG`           | Set log level (default `info`).
-
-For example, in Bash:
-
-```bash
-export RUST_LOG=info
-```
diff --git a/src/kit/kit-dev-toolkit.md b/src/kit/kit-dev-toolkit.md
deleted file mode 100644
index b21f2612..00000000
--- a/src/kit/kit-dev-toolkit.md
+++ /dev/null
@@ -1,22 +0,0 @@
-# kit
-
-[`kit`](https://github.com/kinode-dao/kit) is a CLI tool**kit** to make development on Kinode ergonomic.
-
-## Table of Contents
-
-- [Installation](../kit/install.md)
-- [`kit boot-fake-node`](../kit/boot-fake-node.md)
-- [`kit new`](../kit/new.md)
-- [`kit build`](../kit/build.md)
-- [`kit start-package`](../kit/start-package.md)
-- [`kit publish`](../kit/publish.md)
-- [`kit build-start-package`](../kit/build-start-package.md)
-- [`kit remove-package`](../kit/remove-package.md)
-- [`kit chain`](../kit/chain.md)
-- [`kit dev-ui`](../kit/dev-ui.md)
-- [`kit inject-message`](../kit/inject-message.md)
-- [`kit run-tests`](../kit/run-tests.md)
-- [`kit connect`](../kit/connect.md)
-- [`kit reset-cache`](../kit/reset-cache.md)
-- [`kit boot-real-node`](../kit/boot-real-node.md)
-- [`kit view-api`](../kit/view-api.md)
diff --git a/src/kit/new.md b/src/kit/new.md
index 7ba86e67..c52e3ea2 100644
--- a/src/kit/new.md
+++ b/src/kit/new.md
@@ -64,42 +64,3 @@ Options:
       --ui If set, use the template with UI
   -h, --help Print help
 ```
-
-### Positional arg: `DIR`
-
-Create the template package in this directory.
-If `--package` is not supplied, the package name defaults to the name specified here.
-
-### `--package`
-
-short: `-a`
-
-Name of the package; defaults to `DIR`.
-Must be Kimap-safe: contain only a-z, 0-9, and `-`.
-
-### `--publisher`
-
-short: `-u`
-
-Name of the publisher; defaults to `template.os`.
-Must be Kimap-safe (plus `.`): contain only a-z, 0-9, `-`, and `.`.
-
-### `--language`
-
-short: `-l`
-
-Template language; defaults to `rust`.
-Currently supports `rust`.
-Ask us in the [Discord](https://discord.gg/mYDj74NkfP) about `python` and `javascript` templates.
-
-### `--template`
-
-short: `-t`
-
-Which template to create; defaults to `chat`.
-Options are outlined in [Exists/Has UI-enabled version](./new.html#existshas-ui-enabled-version).
-
-### `--ui`
-
-Create the template with a UI.
-Currently, only `rust` `chat` has UI support.
diff --git a/src/kit/publish.md b/src/kit/publish.md
index 975838d0..e1ac635c 100644
--- a/src/kit/publish.md
+++ b/src/kit/publish.md
@@ -53,80 +53,3 @@ Options:

   -h, --help Print help
 ```
-
-### Positional arg: `DIR`
-
-Publish the metadata for the package in this directory.
-
-### `--metadata-uri`
-
-short: `-u`
-
-The URI hosting the `metadata.json`.
-You must place the `metadata.json` somewhere public before publishing your package on Kimap.
-A common place to host `metadata.json` is on your package's GitHub repo.
-If you use GitHub, make sure to use the static link to the specific commit, not a branch-specific URL (e.g. `main`) that will change with new commits.
-For example, `https://raw.githubusercontent.com/nick1udwig/chat/master/metadata.json` is not the correct link to use, because it will change when new commits are added.
-Instead, use a link like `https://raw.githubusercontent.com/nick1udwig/chat/191dce595ad00a956de04b9728f479dee04863c7/metadata.json`, which is pinned to a specific commit and will not change when new commits are added.
-
-### `--keystore-path`
-
-short: `-k`
-
-Use private key from keystore given by path.
-The keystore is a [Web3 Secret Storage file](https://ethereum.org/en/developers/docs/data-structures-and-encoding/web3-secret-storage/) that holds an encrypted copy of your private keys. -See the [Sharing with the World](../my_first_app/chapter_5.md) usage example for one way to create a keystore. - -Must supply one and only one of `--keystore-path`, `--ledger`, or `--trezor`. - -### `--ledger` - -short: `-l` - -Use private key from Ledger. - -Must supply one and only one of `--keystore-path`, `--ledger`, or `--trezor`. - -### `--trezor` - -short: `-t` - -Use private key from Trezor. - -Must supply one and only one of `--keystore-path`, `--ledger`, or `--trezor`. - -### `--rpc` - -short: `-r` - -The Ethereum RPC endpoint to use. -For fakenodes this runs by default at `ws://localhost:8545`. - -### `--real` - -short: `-e` - -Manipulate the real (live) Kimap. -Default is to manipulate the fakenode Kimap. - -### `--unpublish` - -Remove a previously-published package. - -### `--gas-limit` - -short: `-g` - -Set the gas limit for the transaction. - -### `--priority-fee` - -short: `-p` - -Set the priority fee for the transaction. - -### `--fee-per-gas` - -short: `-f` - -Set the price of gas for the transaction. diff --git a/src/kit/remove-package.md b/src/kit/remove-package.md index d8000e09..4590b225 100644 --- a/src/kit/remove-package.md +++ b/src/kit/remove-package.md @@ -38,34 +38,3 @@ Options: -p, --port localhost node port; for remote see https://book.kinode.org/hosted-nodes.html#using-kit-with-your-hosted-node [default: 8080] -h, --help Print help ``` - -### Optional positional arg: `DIR` - -The package directory to be removed from the node; defaults to current working directory. - -### `--package` - -short: `-a` - -The package name of the package to be removed; default is derived from `metadata.json` in `DIR`. - -### `--publisher` - -short `-u` - -The publisher of the package to be removed; default is derived from `metadata.json` in `DIR`. 
- -### `--port` - -short: `-p` - -For nodes running on localhost, the port of the node; defaults to `8080`. -`--port` is overridden by `--url` if both are supplied. - -### `--url` - -short: `-u` - -The URL the node is hosted at. -Can be either localhost or remote. -`--url` overrides `--port` if both are supplied. diff --git a/src/kit/reset-cache.md b/src/kit/reset-cache.md deleted file mode 100644 index ed87df5a..00000000 --- a/src/kit/reset-cache.md +++ /dev/null @@ -1,21 +0,0 @@ -# `kit reset-cache` - -The `kit reset-cache` command clears the cache where `kit` stores Kinode core binaries, logs, etc. - -## Discussion - -In general, `kit reset-cache` should not need to be used. -There are occasionally cases where the `kit` cache gets corrupted. -If seeing confusing and difficult to explain behavior from `kit`, a `kit reset-cache` won't hurt. - -## Arguments - -``` -$ kit reset-cache --help -Reset kit cache (Kinode core binaries, logs, etc.) - -Usage: kit reset-cache - -Options: - -h, --help Print help -``` diff --git a/src/kit/run-tests.md b/src/kit/run-tests.md index 356f1e09..6d1d77cc 100644 --- a/src/kit/run-tests.md +++ b/src/kit/run-tests.md @@ -45,124 +45,3 @@ Arguments: Options: -h, --help Print help ``` - -### Optional positional arg: `PATH` - -Path to [`.toml`](https://toml.io/en/) file specifying tests to run; defaults to `tests.toml` in current working directory. - -## `tests.toml` - -The testing protocol is specified by a `.toml` file. 
-[`tests.toml`](https://github.com/kinode-dao/core_tests/blob/master/tests.toml), from [core tests](https://github.com/kinode-dao/core_tests), will be used as an example:
-```toml
-{{#webinclude https://raw.githubusercontent.com/kinode-dao/core_tests/master/tests.toml}}
-```
-
-The top-level of `tests.toml` consists of four fields:
-
-Key                                               | Value Type
-------------------------------------------------- | ----------
-[`runtime`](#runtime)                             | `{ FetchVersion = "<version>" }` or `{ RepoPath = "~/path/to/repo" }`
-[`runtime_build_release`](#runtime_build_release) | Boolean
-[`persist_home`](#persist_home)                   | Boolean
-[`tests`](#tests)                                 | [Array of Tables](https://toml.io/en/v1.0.0#array-of-tables)
-
-### `runtime`
-
-Specify the runtime to use for the tests.
-Two option variants are supported.
-An option variant is specified with the key (e.g. `FetchVersion`) of a `toml` [Table](https://toml.io/en/v1.0.0#table) (e.g. `{FetchVersion = "0.7.2"}`).
-
-The first, and recommended, variant is `FetchVersion`.
-The value of the `FetchVersion` Table is the version number to fetch and use (or `"latest"`).
-That version of the runtime binary will be fetched from remote if not found locally.
-
-The second is `RepoPath`.
-The value of the `RepoPath` Table is the path to a local copy of the runtime repo.
-Given a valid path, that repo will be compiled and used.
-
-For example:
-
-```toml
-{{#webinclude https://raw.githubusercontent.com/kinode-dao/core_tests/master/tests.toml 1}}
-```
-
-### `runtime_build_release`
-
-If given `runtime = RepoPath`, `runtime_build_release` decides whether to build the runtime as `--release` or not.
-
-For example:
-
-```toml
-{{#webinclude https://raw.githubusercontent.com/kinode-dao/core_tests/master/tests.toml 3}}
-```
-
-### `persist_home`
-
-Whether or not to persist the node home directories after tests have been run.
-It is recommended to have this set to `false` except when debugging a test.
-
-### `tests`
-
-An Array of Tables.
-Each Table specifies one test to run. -That test consists of: - -Key | Value Type | Value Description --------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------- -`dependency_package_paths` | Array of Strings (`PathBuf`s) | Paths to packages to load onto dependency node so that setup or test packages can fetch them to fulfil `dependencies` -`setup_packages` | Array of Tables [(`SetupPackage`s)](https://github.com/kinode-dao/kit/blob/10e2bd5d44cf44690c2360e60523ac5b06d1d5f0/src/run_tests/types.rs#L37-L40) | Each Table in the Array contains `path` (to the package) and `run` (whether or not to run the package or merely load it in) -`setup_scripts` | Array of Strings (`bash` line) | Each Table in the Array contains `path` (to the script) and `args` (to be passed to the script); these scripts will run alongside the test nodes -`test_package_paths` | Array of Strings (`PathBuf`s) | Paths to test packages to run -`test_scripts` | Array of Strings (`bash` line) | Each Table in the Array contains `path` (to the script) and `args` (to be passed to the script); these scripts will be run as tests and must return a `0` on success -`timeout_secs` | Integer > 0 | Timeout for this entire series of test packages -`fakechain_router` | Integer >= 0 | Port to be bound by anvil, where fakechain will be hosted -[`nodes`](#nodes) | Array of Tables | Each Table specifies configuration of one node to spin up for test - -Each test package is [a single-process package that accepts and responds with certain messages](#test-package-interface). - - -For example: -```toml -... -{{#webinclude https://raw.githubusercontent.com/kinode-dao/core_tests/master/tests.toml 7:16}} -... -``` - -#### `nodes` - -Each test specifies one or more nodes: fake nodes that the tests will be run on. -The first node is the "master" node that will orchestrate the test. 
-Each node is specified by a Table.
-That Table consists of:
-
-Key                 | Value Type     | Value Description
--------------------- | -------------- | -----------------
-`port`              | Integer > 0    | Port to run node on (must not be already bound)
-`home`              | Path           | Where to place node's home directory
-`fake_node_name`    | String         | Name of fake node
-`password`          | String or Null | Password of fake node (default: `"secret"`)
-`rpc`               | String or Null | [`wss://` URI of Ethereum RPC](../getting_started/login.md#starting-the-kinode-node)
-`runtime_verbosity` | Integer >= 0   | The verbosity level to start the runtime with; higher is more verbose (default: `0`)
-
-For example:
-
-```toml
-{{#webinclude https://raw.githubusercontent.com/kinode-dao/core_tests/master/tests.toml 15:25}}
-```
-
-## Test Package Interface
-
-A test package is a single-process package that accepts and responds with certain messages.
-The interface is defined as:
-
-```wit
-{{#webinclude https://raw.githubusercontent.com/kinode-dao/kinode/main/kinode/packages/tester/api/tester%3Asys-v0.wit}}
-```
-
-A `run` `request` starts the test.
-A `run` `response` marks the end of a test, and is either an `Ok` Result, indicating success, or an `Err` Result with information as to where the error occurred.
-
-In the Rust language, a helper macro for failures can be found in [`tester_lib.rs`](https://github.com/kinode-dao/kinode/blob/main/kinode/packages/tester/tester_lib.rs).
-The macro is `fail!()`: it automatically sends the Response as specified above, filling out the fields, and exits.
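To make the shape of that failure Response concrete, here is a self-contained Rust sketch; the type and field names are illustrative only (the real types are generated from `tester:sys-v0.wit`, and `fail!()` additionally sends the Response and exits):

```rust
// Hypothetical mirror of the tester API's run response, for illustration only.
#[derive(Debug, PartialEq)]
enum RunResponse {
    // Test passed.
    Ok,
    // Test failed: record which test failed and where.
    Err { test: String, file: String, line: u32 },
}

// Roughly what a fail!()-style helper builds: an error response pointing
// at the failure location, before sending it and exiting.
fn fail_response(test: &str, file: &str, line: u32) -> RunResponse {
    RunResponse::Err {
        test: test.to_string(),
        file: file.to_string(),
        line,
    }
}
```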
diff --git a/src/kit/start-package.md b/src/kit/start-package.md index f538fdb7..8f61873d 100644 --- a/src/kit/start-package.md +++ b/src/kit/start-package.md @@ -39,14 +39,3 @@ Options: -p, --port localhost node port; for remote see https://book.kinode.org/hosted-nodes.html#using-kit-with-your-hosted-node [default: 8080] -h, --help Print help ``` - -### Optional positional arg: `DIR` - -The package directory to install and start on the node; defaults to current working directory. - -### `--port` - -short: `-p` - -The localhost port of the node; defaults to `8080`. -To interact with a remote node, see [here](../hosted-nodes.md#using-kit-with-your-hosted-node). diff --git a/src/kit/view-api.md b/src/kit/view-api.md deleted file mode 100644 index e9733879..00000000 --- a/src/kit/view-api.md +++ /dev/null @@ -1,60 +0,0 @@ -# `kit view-api` - -short: `kit v` - -`kit view-api` fetches the list of APIs or a specific API for the given package. -`view-api` relies on a node to do so, e.g. - -``` -kit view-api --port 8080 -``` - -lists all the APIs of packages downloaded by the Kinode running at port 8080. - -## Example Usage - -```bash -# Fetch and display the API for the given package -kit view-api app-store:sys -``` - -## Discussion - -Packages have the option to [expose their API using a WIT file](../system/process/wit_apis.md). -When a package is distributed, its API is posted by the distributor along with the package itself. -Downloading the package also downloads the API. 
- -## Arguments - -``` -$ kit view-api --help -Fetch the list of APIs or a specific API - -Usage: kit view-api [OPTIONS] [PACKAGE_ID] - -Arguments: - [PACKAGE_ID] Get API of this package (default: list all APIs) - -Options: - -p, --port localhost node port; for remote see https://book.kinode.org/hosted-nodes.html#using-kit-with-your-hosted-node [default: 8080] - -d, --download-from Download API from this node if not found - -h, --help Print help -``` - -### Positional arg: `PACKAGE_ID` - -Get the API of this package. -By default, list the names of all APIs. - -### `--port` - -short: `-p` - -For nodes running on localhost, the port of the node; defaults to `8080`. -`--port` is overridden by `--url` if both are supplied. - -### `--download-from` - -short: `-d` - -The mirror to download dependencies from (default: package `publisher`). diff --git a/src/my_first_app/build_and_deploy_an_app.md b/src/my_first_app/build_and_deploy_an_app.md deleted file mode 100644 index d6bf8121..00000000 --- a/src/my_first_app/build_and_deploy_an_app.md +++ /dev/null @@ -1,9 +0,0 @@ -# My First Kinode Application - -In these tutorials, you'll setup your development environment and learn about the `kit` tools. -You'll learn about templates and also walk through writing an application from the ground up, backend and frontend. -And finally, you'll learn how to deploy applications through the Kinode App Store. - -For the purposes of this documentation, terminal commands are provided as-is for ease of copying except when the output of the command is also shown. -In that case, the command is prepended with a `$ ` to distinguish the command from the output. -The `$ ` should not be copied into the terminal. 
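For example, a command shown together with its output looks like:

```
$ echo hello
hello
```

Here, only `echo hello` should be copied into the terminal; `hello` is the output.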
diff --git a/src/process_stdlib/overview.md b/src/process_stdlib/overview.md deleted file mode 100644 index 74b90c03..00000000 --- a/src/process_stdlib/overview.md +++ /dev/null @@ -1,25 +0,0 @@ -# `process_lib` Overview - -This page serves as an introduction to the [process standard library](https://github.com/kinode-dao/process_lib), which makes writing Rust apps on Kinode easy. -The full documentation can be found [here](https://docs.rs/kinode_process_lib), and the crate lives [here](https://crates.io/crates/kinode_process_lib). - -In your `Cargo.toml` file, use a version tag like this: -```toml -kinode_process_lib = "0.10.0" -``` - -**Make sure to use a recent version of the `process_lib` while the system is in beta and active development.** - -The major version of the `process_lib` will always match the major version of Kinode. -Since the current major version of both is 0, breaking changes can occur at any time. -Once the major version reaches 1, breaking changes will only occur between major versions. -As is, **developers may have to update their version of `process_lib` as they update Kinode.** - -Since Kinode apps use the [WebAssembly Component Model](https://component-model.bytecodealliance.org/), they are built on top of a [WIT](https://component-model.bytecodealliance.org/design/wit.html) (Wasm Interface Type) [package](https://github.com/kinode-dao/kinode-wit). -[`wit-bindgen`](https://github.com/bytecodealliance/wit-bindgen) is used to generate Rust code from a WIT file. -The generated code then contains the core types and functions that are available to all Kinode apps. - -However, the types themselves are unwieldy to use directly, and runtime modules present APIs that can be drastically simplified by using helper functions and types in the process standard library. - -Almost all code examples in this book make use of the `process_lib`. 
-For specific examples of its usage, check out the [docs](https://docs.rs/kinode_process_lib) or just follow the tutorials later in this book.
diff --git a/src/system/databases.md b/src/system/databases.md
deleted file mode 100644
index 687f24c1..00000000
--- a/src/system/databases.md
+++ /dev/null
@@ -1,14 +0,0 @@
-# Databases
-
-Kinode provides key-value databases via [RocksDB](https://rocksdb.org/), and relational databases via [SQLite](https://www.sqlite.org/docs.html).
-Processes can create independent databases using wrappers over these libraries, and can read, write, and share these databases with other processes.
-The APIs for doing so are found here: [KV](../apis/kv.md) and [SQLite](../apis/sqlite.md).
-
-[Similarly to drives in the VFS](./files.md#drives), they are accessed by `package_id` and a `db` name (i.e. [`kv::open()`](https://docs.rs/kinode_process_lib/latest/kinode_process_lib/kv/fn.open.html) and [`sqlite::open()`](https://docs.rs/kinode_process_lib/latest/kinode_process_lib/sqlite/fn.open.html)).
-Capabilities to read and write can be shared with other processes.
-
-All examples use the [`kinode_process_lib`](../process_stdlib/overview.md) functions.
-
-## Usage
-
-For usage examples, see the [key-value API](../apis/kv.md) and the [SQLite API](../apis/sqlite.md).
diff --git a/src/system/files.md b/src/system/files.md
deleted file mode 100644
index 9c1fddaf..00000000
--- a/src/system/files.md
+++ /dev/null
@@ -1,66 +0,0 @@
-# Files
-
-## Virtual File System (VFS)
-
-The primary way to access files within your node is through the [VFS API](../apis/vfs.md).
-The VFS API follows [`std::fs`](https://doc.rust-lang.org/std/fs/index.html) closely, while also adding some capabilities checks on paths.
-Use the [`kinode_process_lib`](https://docs.rs/kinode_process_lib/latest/kinode_process_lib/vfs/index.html) to interact with the VFS.
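As a sketch of what that looks like inside a process, assuming the `create_drive` and `open_file` helpers from `kinode_process_lib::vfs` and an `our` address in scope (this only compiles within a Kinode process with the appropriate capabilities; error handling is elided):

```rust
use kinode_process_lib::vfs::{create_drive, open_file};

// Create (or fetch) a drive owned by this package, then write a file in it.
let drive_path = create_drive(our.package_id(), "my-drive", None)?;
let file = open_file(&format!("{}/hello.txt", drive_path), true, None)?;
file.write(b"hello world")?;
```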
- -VFS files exist in the `vfs/` directory within your home node, and files are grouped by [`PackageId`](https://docs.rs/kinode_process_lib/latest/kinode_process_lib/kinode/process/standard/struct.PackageId.html). -For example, part of the VFS might look like: - -```text -node-home/vfs -├── app-store:sys -│   ├── pkg -│   │   ├── api -│   │   │   └── app-store:sys-v0.wit -│   │   ├── app-store.wasm -│   │   ├── manifest.json -│   │ ... -│   └── tmp -├── chess:sys -│   ├── pkg -│   │   ├── api -│   │   │   └── chess:sys-v0.wit -│   │   ├── chess.wasm -│   │   ├── manifest.json -│   │   └── ui -│   │ │ -│   │ ... -│   └── tmp -├── homepage:sys -│   ├── pkg -│   │   ├── api -│   │   │   └── homepage:sys-v0.wit -│   │   ├── homepage.wasm -│   │   ├── manifest.json -│   │   └── ui -│   │ │ -│   │ ... -│   └── tmp -... -``` - -## Drives - -A drive is a directory within a package's VFS directory, e.g., `app-store:sys/pkg/` or `your-package:publisher.os/my-drive/`. -Drives are owned by processes. -Processes can share access to drives they own via [capabilities](process/capabilities.md). -Each package is spawned with two drives: [`pkg/`](#pkg-drive) and [`tmp/`](#tmp-drive). -All processes in a package have caps to these default drives. -Processes can also create additional drives. -These new drives are permissioned at the process-level: other processes will need to be granted capabilities to read or write these drives. - -### `pkg/` drive - -The `pkg/` drive contains metadata about the package that Kinode requires to run that package, `.wasm` binaries, and optionally the API of the package and the UI. -When creating packages, the `pkg/` drive is populated by [`kit build`](../kit/build.md) and loaded into the Kinode using [`kit start-package`](../kit/start-package.md). - -### `tmp/` drive - -The `tmp/` drive can be written to directly by the owning package using standard filesystem functionality (i.e. `std::fs` in Rust) via WASI in addition to the Kinode VFS. 
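Since the `tmp/` drive is reachable through WASI, writes to it are just ordinary `std::fs` calls. The sketch below shows that round trip with a stand-in directory; inside a process, the tmp drive's actual mount point is provided by the runtime:

```rust
use std::fs;
use std::path::Path;

// Ordinary std::fs round trip, as a process could perform against its
// tmp drive through WASI. No VFS messages are involved.
fn write_then_read(dir: &Path, name: &str, contents: &str) -> std::io::Result<String> {
    let path = dir.join(name);
    // Write the file, then read it straight back.
    fs::write(&path, contents)?;
    fs::read_to_string(&path)
}
```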
- -## Usage - -For usage examples, see the [VFS API](../apis/vfs.md). diff --git a/src/system/http_server_and_client.md b/src/system/http_server_and_client.md deleted file mode 100644 index 46a8c5ec..00000000 --- a/src/system/http_server_and_client.md +++ /dev/null @@ -1,38 +0,0 @@ -# HTTP Server & Client - -No server or web services backend would be complete without an HTTP interface. -Kinode can both create and serve HTTP requests. -As a result, Kinode apps can read data from the web (and other Kinodes), and also serve both public and private websites and APIs. -The HTTP server is how most processes in Kinode present their interface to the user, through an authenticated web browser. - -The specification for the [server](../apis/http_server.md) and [client](../apis/http_client.md) APIs are available in the API reference. -These APIs are accessible via messaging the [`http-server:distro:sys`](https://github.com/kinode-dao/kinode/blob/main/kinode/src/http/server.rs) and [`http-client:distro:sys`](https://github.com/kinode-dao/kinode/blob/main/kinode/src/http/client.rs) runtime modules, respectively. -The only [`capability`](../system/process/capabilities.md) required to use either process is the one to message it, granted by the kernel. -It is recommended to interact with the `http-server` and `http-client` using the [`kinode_process_lib`](https://docs.rs/kinode_process_lib/latest/kinode_process_lib/http/index.html) - -WebSocket server/client functionality is presented alongside HTTP. - -At startup, the server either: - -1. Binds to the port given at the commandline, or -2. Searches for an open port (starting at 8080, if not, then 8081, etc.). - -The server then binds this port, listening for HTTP and WebSocket requests. - -You can find usage examples [here](../cookbook/talking_to_the_outside_world.md). 
-See also the `chat` template with GUI from [`kit new`](../kit/new.md), which you can create using:
-```
-kit new my-chat --ui
-```
-
-## Private and Public Serving
-
-All server functionality can be either private (authenticated) or public.
-If a given functionality is public, the Kinode serves HTTP openly to the world; if it is authenticated, you need your node's password so that your node can generate a cookie that grants you access.
-
-## Direct and Indirect Nodes
-
-Since direct nodes are expected to be accessible over IP, their HTTP server is likely to work if the bound port is accessible.
-Note that direct nodes will need to do their own IP/DNS configuration, as Kinode doesn't provide any DNS management.
-
-Indirect nodes may not be accessible over IP, so their HTTP server may or may not function outside the local network.
diff --git a/src/system/networking_protocol.md b/src/system/networking_protocol.md
deleted file mode 100644
index cb347480..00000000
--- a/src/system/networking_protocol.md
+++ /dev/null
@@ -1,215 +0,0 @@
-# Networking Protocol
-
-### 1. Protocol Overview and Motivation
-
-The Kinode networking protocol is designed to be performant, reliable, private, and peer-to-peer, while still enabling access for nodes without a static public IP address.
-
-The networking protocol is NOT designed to be all-encompassing: it is not the only way that two Kinodes will ever communicate.
-Many Kinode runtimes will provide userspace access to HTTP server/client capabilities, TCP sockets, and much more.
-Some applications will choose to use such facilities to communicate.
-This networking protocol is merely a common language that every Kinode is guaranteed to speak.
-For this reason, it is the protocol on which system processes will communicate, and it will be a reasonable default for most applications.
-
-In order for nodes to attest to their identity without any central authority, all networking information is made available onchain.
-Networking information can take two forms: direct or routed.
-The former allows for completely direct peer-to-peer connections, and the latter allows nodes without a physical network configuration that permits direct connections to route messages through a peer.
-
-The networking protocol can and will be implemented in multiple underlying protocols.
-Since the protocol is encrypted, a secure underlying connection with TLS or HTTPS is never necessary.
-WebSockets are prioritized to make purely in-browser Kinodes a possibility.
-The other transport protocols with slots in the onchain identity data structure are: TCP, UDP, and WebTransport.
-
-Currently, only WebSockets and TCP are implemented in the runtime.
-As part of the protocol, nodes identify the supported transport protocols of their counterparty and choose the optimal one to use.
-Even nodes that do not share common transport protocols may communicate via routers.
-Direct nodes must have at least one transport protocol in common.
-It is strongly recommended that all nodes support WebSockets, including future browser-based nodes and mobile nodes.
-
-### 2. Onchain Networking Information
-
-All nodes must publish an Ed25519 EdDSA networking public key onchain using the protocol registry contract.
-A new key transaction may be posted at any time, but because agreement on networking keys is required to establish a connection and send messages between nodes, changes to onchain networking information will temporarily disrupt networking.
-Therefore, all nodes must have robust access to the onchain PKI, meaning: multiple backup options and multiple pathways to read onchain data.
-Because it may take time for a new networking key to proliferate to all nodes (anywhere from seconds to days, depending on chain indexing access), a node that changes its networking key should expect downtime immediately after doing so.
-
-Nodes that wish to make direct connections must post an IP and port onchain.
-This is done by publishing `note` keys in [kimap](../getting_started/kimap.md). -In particular, the networking protocol expects the following pattern of data available: - -1. A `~net-key` note AND -2. Either: - a. A `~routers` note OR - b. An `~ip` note AND at least one of: - - `~tcp-port` note - - `~udp-port` note - - `~ws-port` note - - `~wt-port` note - -Nodes with onchain networking information (an IP address and at least one port) are referred to as **direct** nodes, and ones without are referred to as **indirect** or **routed** nodes. - -If a node is indirect, it must initiate a connection with at least one of its allowed routers in order to begin networking. -Until such a connection is successfully established, the indirect node is offline. -In practice, an indirect node that wants reliable access to the network should (1) have many routers listed onchain and (2) connect to as many of them as possible on startup. -In order to acquire such routers in practice, a node will likely need to provide some payment or service to them. - -### 3. Protocol Selection - -When one node seeks to send a message to another node, it first checks to see if it has an existing route to send it on. -If it does, that route is used. -If not, the node will use the information available about the other node to try and establish a route. - -If the target node is direct, the route may be direct, using one of the available transport methods. -If a direct node presents multiple ports using notes in kimap, the priority is currently: - -1. TCP -2. WS - -As more protocols are supported by various runtimes, this priority list will expand. - -Once a transport method is selected, if the connection fails, the target will be considered offline. -A node does not need to try every route available: if a direct node presents a port, it must service connections on that method to be considered online. - -If the target node is indirect, the route must be established through one of their routers. 
-As many routers as can be attempted within the message's timeout may be tried.
-The selection of which routers to try in what order is implementation-specific.
-When a router is being attempted, the transport method will be determined as in a standard direct connection, described above.
-If a router is offline, the next router is tried.
-If no routers are online, the indirect node will be considered offline.
-
-### 4. WebSockets Protocol
-
-This protocol does not make use of any [WebSocket frames](https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API/Writing_WebSocket_servers#exchanging_data_frames) other than Binary, Ping, and Pong.
-Pings should be responded to with a Pong.
-These are only used to keep the connection alive.
-All content is sent as Binary frames.
-Binary frames in the current protocol version (1) are limited to 10MB.
-This limit includes the full serialized `KernelMessage`.
-
-All data structures are serialized and deserialized using [MessagePack](https://msgpack.org/index.html).
-
-#### 4.1. Establishing a Connection
-
-The WebSockets protocol uses the [Noise Protocol Framework](http://www.noiseprotocol.org/noise.html) to encrypt all messages end-to-end.
-The parameters used are `Noise_XX_25519_ChaChaPoly_BLAKE2s`.
-
-Using the XX pattern means following this interactive pattern:
-```
- -> e
- <- e, ee, s, es
- -> s, se
-```
-
-The initiator is the node that is trying to establish a connection.
-
-**If the target is direct**, the initiator uses the IP and port provided onchain to establish a WebSocket connection.
-If the connection fails, the target is considered offline.
-
-**If the target is indirect**, the initiator uses the IP and port of one of the target's routers to establish a WebSocket connection.
-If a given router is unreachable, or fails to comport to the protocol, others should be tried until they are exhausted or too much time has passed (subject to the specific implementation).
-If this process fails, the target is considered offline.
-
-**If the target is indirect**, before beginning the XX handshake pattern, the initiator sends a `RoutingRequest` to the target.
-
-```rust
-pub struct RoutingRequest {
-    pub protocol_version: u8,
-    pub source: String,
-    pub signature: Vec<u8>,
-    pub target: String,
-}
-```
-The `protocol_version` is the current protocol version, which is 1.
-The `source` is the initiator's node ID, as provided onchain.
-The `signature` must be made with the initiator's networking key, such that it verifies against the networking public key posted onchain.
-The signed content is the routing target's node ID (i.e., the node which the initiator would like to establish an e2e encrypted connection with) concatenated with the router's node ID (i.e., the node which the initiator is sending the `RoutingRequest` to, which will serve as a router for the connection if it accepts).
-The `target` is the routing target's node ID that must be signed above.
-
-Once a connection is established, the initiator sends an `e` message, containing an empty payload.
-
-The target responds with the `e, ee, s, es` pattern, including a `HandshakePayload` serialized with MessagePack.
-
-```rust
-struct HandshakePayload {
-    pub protocol_version: u8,
-    pub name: String,
-    pub signature: Vec<u8>,
-    pub proxy_request: bool,
-}
-```
-The current `protocol_version` is 1.
-The `name` is the name of the node, as provided onchain.
-The `signature` must be made with the node's networking key, such that it verifies against the networking public key visible onchain.
-The signed content is the public key they will use to encrypt messages on this connection.
-How often this key changes is implementation-specific but should be frequent.
-The `proxy_request` is a boolean indicating whether the initiator is asking for routing service to another node.
-
-As the target, or receiver of the new connection, `proxy_request` will always be false.
-This field is only used by the initiator.
-
-Finally, the initiator responds with the `s, se` pattern, including a `HandshakePayload` of their own.
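The signed content of the `RoutingRequest` described above, the routing target's node ID followed immediately by the router's node ID, can be sketched as follows (the Ed25519 signing step itself is omitted, and the node IDs are examples):

```rust
// Assemble the bytes an initiator signs for a RoutingRequest: the routing
// target's node ID concatenated with the router's node ID.
fn routing_request_signing_bytes(target: &str, router: &str) -> Vec<u8> {
    let mut bytes = Vec::with_capacity(target.len() + router.len());
    bytes.extend_from_slice(target.as_bytes());
    bytes.extend_from_slice(router.as_bytes());
    bytes
}
```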
-
-After this pattern is complete, the connection switches to transport mode and can be used to send and receive messages.
-
-#### 4.2. Sending Messages
-
-Every message sent over the connection is a `KernelMessage`, serialized with MessagePack, then encrypted using the keys exchanged in the Noise protocol XX pattern, sent in a single Binary WebSockets message.
-
-```rust
-struct KernelMessage {
-    pub id: u64,
-    pub source: Address,
-    pub target: Address,
-    pub rsvp: Rsvp,
-    pub message: Message,
-    pub lazy_load_blob: Option<LazyLoadBlob>,
-}
-```
-
-See the [`Address`](https://docs.rs/kinode_process_lib/latest/kinode_process_lib/kinode/process/standard/struct.Address.html), [`Rsvp`](https://github.com/kinode-dao/kinode/blob/5504f2a6c1b28eb5102aee9a56d2a278f1e5a2dd/lib/src/core.rs#L891-L894), [`Message`](https://docs.rs/kinode_process_lib/latest/kinode_process_lib/kinode/process/standard/enum.Message.html), and [`LazyLoadBlob`](https://docs.rs/kinode_process_lib/latest/kinode_process_lib/kinode/process/standard/struct.LazyLoadBlob.html) data types.
-
-#### 4.3. Receiving Messages
-
-When listening for messages, the protocol may ignore messages other than Binary, but should also respond to Ping messages with Pongs.
-
-When a Binary message is received, it should first be decrypted using the keys exchanged in the handshake, then deserialized as a `KernelMessage`.
-If this fails, the message should be ignored and the connection must be closed.
-
-Successfully decrypted and deserialized messages should have their `source` field checked for the correct node ID and then passed to the kernel.
-
-#### 4.4. Closing a Connection
-
-A connection can be intentionally closed by any party, at any time.
-Other causes of connection closure are discussed in this section.
-
-All connection errors must result in closing a connection.
-
-Failure to send a message must be treated as a connection error.
-
-Failure to decrypt or deserialize a message must be treated as a connection error.
-
-If a `KernelMessage`'s source is not the node ID which the message recipient is expecting, it must be treated as a connection error.
-
-These behaviors are necessary since they indicate that the networking information of a counterparty may have changed and a new connection must be established using the new data onchain.
-
-Connections may be closed due to inactivity or load-balancing.
-This behavior is implementation-specific.
-
-### 5. TCP Protocol
-
-The TCP protocol is largely the same as the WebSockets protocol but without the use of Binary frames: `KernelMessage`s are instead streamed.
-More documentation to come -- for now, read the source here:
-[https://github.com/kinode-dao/kinode/blob/main/kinode/src/net/tcp/utils.rs](https://gist.github.com/nick1udwig/d3d2d8ef588258162bdad1d1bbcabf43)
-
-### 6. Connection Maintenance and Errors
-
-The system's networking module seeks to abstract away the many complexities of p2p networking from app developers.
-To this end, it reduces all networking issues to either Offline or Timeout.
-
-Messages do not have to expect a response.
-If no response is expected, a networking-level offline or timeout error may still be thrown.
-Local messages will only receive timeout errors if they expect a response.
-
-If a peer is direct, i.e. they have networking information published onchain, determining their offline status is simple: try to create a connection and send a message; if the underlying transport protocol experiences any errors while doing so, throw an 'offline' error.
-If a message is not responded to before the timeout counter expires, it will throw a timeout.
-
-If a peer is indirect, i.e. they have routers, multiple attempts must be made before an offline error is thrown.
-The specific implementation of the protocol may vary in this regard (e.g. it may try to connect to all routers, or limit the number of attempts to a subset of routers).
-As with direct peers, if a message is not responded to before the timeout counter expires, it will throw a timeout.
-
diff --git a/src/system/process/extensions.md b/src/system/process/extensions.md
deleted file mode 100644
index 3f04e33e..00000000
--- a/src/system/process/extensions.md
+++ /dev/null
@@ -1,168 +0,0 @@
-# Extensions
-
-Extensions supplement and complement Kinode processes.
-Kinode processes have many features that make them good computational units, but they also have constraints.
-Extensions remove the constraints (e.g., not all libraries can be built to Wasm) while maintaining the advantages (e.g., the integration with the Kinode Request/Response system).
-The cost of extensions is that they are not as nicely bundled within the Kinode system: they must be run separately.
-
-## What is an Extension?
-
-Extensions are [WebSocket](https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API) clients that connect to a paired Kinode process to extend library, language, or hardware support.
-
-Kinode processes are [Wasm components](https://component-model.bytecodealliance.org/design/why-component-model.html), which leads to advantages and disadvantages.
-The rest of the book (and in particular the [processes chapter](../../system/process/processes.md)) discusses the advantages (e.g., integration with the Kinode Request/Response system and the capabilities security model).
-Two of the main disadvantages are:
-1. Only certain libraries and languages can be used.
-2. Hardware accelerators like GPUs are not easily accessible.
-
-Extensions solve both of these issues, since an extension runs natively.
-Any language with any library supported by the bare-metal host can be used, as long as it can speak WebSockets.
-
-## Downsides of Extensions
-
-Extensions enable use cases that pure processes lack.
-However, they come with a cost.
-Processes are contained and managed by your Kinode, but extensions are not.
-Extensions are independent servers that run alongside your Kinode.
-They do not yet have a Kinode-native distribution channel.
-
-As such, extensions should only be used when absolutely necessary.
-Processes are more stable, maintainable, and easily upgraded.
-Only write an extension if there is no other choice.
-
-## How to Write an Extension?
-
-An extension is composed of two parts: a Kinode package and the extension itself.
-They communicate with each other over a WebSocket connection that is managed by Kinode.
-Look at the [Talking to the Outside World recipe](../../cookbook/talking_to_the_outside_world.md#websockets-server-with-reply-type) for an example.
-The [examples below](#examples) show some more working extensions.
-
-### The WebSocket protocol
-
-The process [binds a WebSocket](#bind-an-extension-websocket), so Kinode acts as the WebSocket server.
-The extension acts as a client, connecting to the WebSocket served by the Kinode process.
-
-The process sends [`HttpServerAction::WebSocketExtPushOutgoing`](https://docs.rs/kinode_process_lib/0.9.6/kinode_process_lib/http/server/enum.HttpServerAction.html#variant.WebSocketExtPushOutgoing) Requests to the `http-server` (see [here](../http_server_and_client.md) and [here](../../apis/http_server.md)) to communicate with the extension (see the `enum` defined at the bottom of this section).
-
-Table 1: `HttpServerAction::WebSocketExtPushOutgoing` Inputs
-
-Field Name           | Description
--------------------- | -----------
-`channel_id`         | Given in a WebSocket message after a client connects.
-`message_type`       | The WebSocket message type — recommended to be [`WsMessageType::Binary`](https://docs.rs/kinode_process_lib/latest/kinode_process_lib/http/server/enum.WsMessageType.html).
-`desired_reply_type` | The Kinode `MessageType` type that the extension should return — `Request` or `Response`.
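From the extension's side, these fields surface in the message it receives, and its reply must mirror the identifying fields while swapping in a new payload. Since extensions can be written in any language, here is a minimal, hypothetical Python sketch of that mirroring step (the `make_reply` helper and the plain-dict message shape are illustrative; real traffic is MessagePack-encoded over the WebSocket):

```python
def make_reply(incoming: dict, blob: bytes) -> dict:
    """Build an extension's reply to a WebSocketExtPushData-shaped message.

    Hypothetical helper: mirrors `id` and `kinode_message_type` from the
    incoming message, as the protocol requires, and attaches the new payload.
    """
    return {
        "id": incoming["id"],                                    # mirrored
        "kinode_message_type": incoming["kinode_message_type"],  # mirrored
        "blob": blob,                                            # new payload
    }

# A message as the extension might see it after decoding:
incoming = {"id": 7, "kinode_message_type": "Response", "blob": b"input"}
reply = make_reply(incoming, b"computed result")
```

In a real extension the reply would then be MessagePack-encoded and sent back over the same WebSocket connection.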
-
-The [`lazy_load_blob`](https://docs.rs/kinode_process_lib/latest/kinode_process_lib/kinode/process/standard/struct.LazyLoadBlob.html) is the payload for the WebSocket message.
-
-The `http-server` converts the Request into a `HttpServerAction::WebSocketExtPushData`, [MessagePack](https://msgpack.org)s it, and sends it to the extension.
-Specifically, it attaches the Message's `id`, copies the `desired_reply_type` to the `kinode_message_type` field, and copies the `lazy_load_blob` to the `blob` field.
-
-The extension replies with a [MessagePack](https://msgpack.org)ed `HttpServerAction::WebSocketExtPushData`.
-It should copy the `id` and `kinode_message_type` of the message it is serving into those same fields of the reply.
-The `blob` is the payload.
-
-```rust
-pub enum HttpServerAction {
-    //...
-    /// When sent, expects a `lazy_load_blob` containing the WebSocket message bytes to send.
-    /// Modifies the `lazy_load_blob` by placing into `WebSocketExtPushData` with id taken from
-    /// this `KernelMessage` and `kinode_message_type` set to `desired_reply_type`.
-    WebSocketExtPushOutgoing {
-        channel_id: u32,
-        message_type: WsMessageType,
-        desired_reply_type: MessageType,
-    },
-    /// For communicating with the ext.
-    /// Kinode's http-server sends this to the ext after receiving `WebSocketExtPushOutgoing`.
-    /// Upon receiving a reply with this type from the ext, http-server parses it, setting:
-    /// * id as given,
-    /// * message type as given (Request or Response),
-    /// * body as HttpServerRequest::WebSocketPush,
-    /// * blob as given.
-    WebSocketExtPushData {
-        id: u64,
-        kinode_message_type: MessageType,
-        blob: Vec<u8>,
-    },
-    //...
-}
-```
-
-### The Package
-
-The package is, minimally, a single process that serves as the interface between Kinode and the extension.
-Each extension must come with a corresponding Kinode package.
-
-Specifically, the interface process must:
-1. Bind an extension WebSocket: this will be used to communicate with the extension.
-2. 
Handle Kinode messages: e.g., Requests to be passed to the extension for processing.
-3. Handle WebSocket messages: these will come from the extension.
-
-'Interface process' will be used interchangeably with 'package' throughout this page.
-
-#### Bind an Extension WebSocket
-
-The [`kinode_process_lib`](../../process_stdlib/overview.md) provides an easy way to bind an extension WebSocket:
-
-```rust
-kinode_process_lib::http::bind_ext_path("/")?;
-```
-
-which, for a process with process ID `process:package:publisher.os`, serves a WebSocket server for the extension to connect to at `ws://localhost:8080/process:package:publisher.os`.
-Passing a different endpoint like `bind_ext_path("/foo")` will append to the WebSocket endpoint, giving `ws://localhost:8080/process:package:publisher.os/foo`.
-
-#### Handle Kinode Messages
-
-Like any Kinode process, the interface process must handle Kinode messages.
-These are how other Kinode processes will make Requests that are served by the extension:
-1. Process A sends a Request.
-2. The interface process receives the Request, optionally does some logic, and sends the Request on to the extension via WebSocket.
-3. The extension does the computation and replies via WebSocket.
-4. The interface process receives the Response, optionally does some logic, and sends the Response on to process A.
-
-The [WebSocket protocol section](#the-websocket-protocol) above discusses how to send messages to the extension over WebSockets.
-Briefly, a `HttpServerAction::WebSocketExtPushOutgoing` Request is sent to the `http-server`, with the payload in the `lazy_load_blob`.
-
-It is recommended to use the following protocol:
-1. Use the [`WsMessageType::Binary`](https://docs.rs/kinode_process_lib/latest/kinode_process_lib/http/server/enum.WsMessageType.html) WebSocket message type and use MessagePack to (de)serialize your messages.
-   [MessagePack](https://msgpack.org) is space-efficient and well supported by a variety of languages.
-   Structs, dictionaries, arrays, etc. can be (de)serialized in this way.
-   The extension must support MessagePack anyway, since the `HttpServerAction::WebSocketExtPushData` is (de)serialized using it.
-2. Set `desired_reply_type` to `MessageType::Response`.
-   Then the extension can indicate its reply is a Response, which will allow your Kinode process to properly route it back to the original requestor.
-3. If possible, the original requestor should serialize the `lazy_load_blob`, and the type of `lazy_load_blob` should be defined accordingly.
-   Then, all the interface process needs to do is `inherit` the `lazy_load_blob` in its `http-server` Request.
-   This increases efficiency since it avoids bringing those bytes across the Wasm boundary between the process and the runtime (see more discussion [here](../process/processes.md#message-structure)).
-
-#### Handle WebSocket Messages
-
-At a minimum, the interface process must handle:
-
-Table 2: [`HttpServerRequest`](https://docs.rs/kinode_process_lib/latest/kinode_process_lib/http/server/enum.HttpServerRequest.html) Variants
-
-`HttpServerRequest` variant | Description
---------------------------- | -----------
-`WebSocketOpen`             | Sent when an extension connects. Provides the `channel_id` of the WebSocket connection, needed to message the extension: store this!
-`WebSocketClose`            | Sent when the WebSocket closes. A good time to clean up the old `channel_id`, since it will no longer be used.
-`WebSocketPush`             | Used for sending payloads between the interface and the extension.
-
-Although the extension sends a `HttpServerAction::WebSocketExtPushData`, the `http-server` converts it into a `HttpServerRequest::WebSocketPush`.
-The `lazy_load_blob` then contains the payload from the extension, which can either be processed in the interface or `inherit`ed and passed back to the original requestor process.
-
-### The Extension
-
-The extension is, minimally, a WebSocket client that connects to the Kinode interface process.
-It can be written in any language, and it runs natively on the host as a "sidecar" — a separate binary.
-
-The extension should first connect to the interface process.
-The recommended pattern is to then iteratively accept and process messages from the WebSocket.
-Messages come in as MessagePack'd `HttpServerAction::WebSocketExtPushData` and must be replied to in the same format.
-The `blob` field is recommended to also be MessagePack'd.
-The `id` and `kinode_message_type` should be mirrored by the extension: what it receives in those fields should be copied into its reply.
-
-## Examples
-
-Find some working examples of runtime extensions below:
-
-* [An untrusted Python code runner](https://github.com/nick1udwig/kinode-python)
-* [A framework for evaluating ML models](https://github.com/nick1udwig/kinode-ml)
diff --git a/src/system/processes_overview.md b/src/system/processes_overview.md
deleted file mode 100644
index 75c5d0d4..00000000
--- a/src/system/processes_overview.md
+++ /dev/null
@@ -1,8 +0,0 @@
-# Processes
-
-Processes are independent pieces of Wasm code running on Kinode.
-They can either be persistent, in which case they have in-memory state, or temporary, completing some specific task and returning.
-They have access to long-term storage, like the filesystem or databases.
-They can communicate locally and over the Kinode network.
-They can access the internet via HTTP or WebSockets.
-All of these abilities can be controlled using a capabilities security model.
diff --git a/src/system/read_and_write_to_chain.md b/src/system/read_and_write_to_chain.md
deleted file mode 100644
index 2d441360..00000000
--- a/src/system/read_and_write_to_chain.md
+++ /dev/null
@@ -1,61 +0,0 @@
-# Read+Write to Chain
-
-Kinode comes with a built-in provider module for Ethereum and other EVM chains/rollups.
-This runtime module lives in [`eth:distro:sys`](https://github.com/kinode-dao/kinode/tree/main/kinode/src/eth) and is usable by any package that acquires the messaging capability for it.
-In addition to allowing read/write connections directly to WebSocket RPC endpoints, the provider module can also connect via the Kinode networking protocol to other Kinodes and use their provider modules as a relay to an RPC endpoint (or to another Kinode, forming a relay chain).
-The node must be configured to allow relay connections, which can be done with a public/private flag or an explicit allow/deny list.
-
-As with other runtime modules, processes should generally use the [`kinode_process_lib`](https://docs.rs/kinode_process_lib/latest/kinode_process_lib/eth/index.html) to interact with the RPC provider.
-See [Reading Data from ETH](../cookbook/reading_data_from_eth.md) for an example of doing this in a process.
-For more advanced or direct usage, such as configuring the provider module, see the [API Reference](../apis/eth_provider.md).
-
-### Supported Chains
-
-The provider module is capable of using any RPC endpoint that follows the [JSON-RPC API](https://ethereum.org/developers/docs/apis/json-rpc) used by Ethereum and most other EVM chains and rollups.
-The runtime uses the [Alloy](https://github.com/alloy-rs) family of libraries to connect to WS RPC endpoints.
-It does not currently support HTTP endpoints, as subscriptions are vastly preferable for many of the features that Kinode uses.
-
-### Configuration
-
-The [API Reference](../apis/eth_provider.md) demonstrates how to format requests to `eth:distro:sys` that adjust its config at runtime.
-This includes adding and removing providers (whether other Kinodes or chain RPCs) and adjusting the permissions for other nodes to use this node as a relay.
-However, most configuration is done in an optional file named `.eth-providers` inside the home folder of a node.
-If this file is not present, a node will boot using the default providers hardcoded for testnet or mainnet, depending on where the node lives. -If it is present, the node will load in those providers and use them. -The file is a JSON object: a list of providers, with the following shape (example data): - -```json -[ - { - "chain_id": 1, - "trusted": false, - "provider": { - "RpcUrl": "wss://ethereum.publicnode.com" - } - }, - { - "chain_id": 11155111, - "trusted": false, - "provider": { - "Node": { - "use_as_provider": true, - "kns_update": { - "name": "default-router-1.os", - "owner": "", - "node": "0xb35eb347deb896bc3fb6132a07fca1601f83462385ed11e835c24c33ba4ef73d", - "public_key": "0xb1b1cf23c89f651aac3e5fd4decb04aa177ab0ec8ce5f1d3877b90bb6f5779db", - "ip": "123.456.789.101", - "port": 9000, - "routers": [] - } - } - } - } -] -``` - -One can see that the provider list includes both node-providers (other Kinodes that are permissioned for use as a relay) and url-providers (traditional RPC endpoints). -Nodes that wish to maximize their connectivity should supply themselves with url-providers, ideally trusted ones — they can even be running locally, with a light client for Ethereum such as [Helios](https://github.com/a16z/helios). -In fact, a future update to the provider module will likely integrate Helios, which will allow nodes to convert untrusted endpoints to trusted ones. This is the reason for the `trusted` flag in the provider object. - -Lastly, note that the `kns_update` object must fully match the onchain PKI data for the given node, otherwise the two nodes will likely not be able to establish a connection. 
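To make the shape of this file concrete, here is a small illustrative Python sketch that splits such a provider list into url-providers and node-providers (the `classify_providers` helper is hypothetical tooling, not the runtime's actual loader, and the example data is abbreviated):

```python
import json

def classify_providers(raw: str):
    """Split a .eth-providers-style JSON list into url- and node-providers.

    Illustrative only: the real loader lives in the Kinode runtime.
    """
    urls, nodes = [], []
    for entry in json.loads(raw):
        provider = entry["provider"]
        if "RpcUrl" in provider:
            # url-provider: a traditional WebSocket RPC endpoint
            urls.append((entry["chain_id"], provider["RpcUrl"]))
        elif "Node" in provider:
            # node-provider: another Kinode used as a relay
            nodes.append((entry["chain_id"],
                          provider["Node"]["kns_update"]["name"]))
    return urls, nodes

example = """[
  {"chain_id": 1, "trusted": false,
   "provider": {"RpcUrl": "wss://ethereum.publicnode.com"}},
  {"chain_id": 11155111, "trusted": false,
   "provider": {"Node": {"use_as_provider": true,
                         "kns_update": {"name": "default-router-1.os"}}}}
]"""

urls, nodes = classify_providers(example)
```

This mirrors the two provider kinds described above: entries with an `RpcUrl` key are direct RPC endpoints, while entries with a `Node` key point at other Kinodes.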
diff --git a/src/system/system_components.md b/src/system/system_components.md
deleted file mode 100644
index 909c7e63..00000000
--- a/src/system/system_components.md
+++ /dev/null
@@ -1,3 +0,0 @@
-# System Components
-
-This section describes the various components of the system, including the processes, networking protocol, public key infrastructure, HTTP server and client, files, databases, and terminal.
\ No newline at end of file
diff --git a/src/system/terminal.md b/src/system/terminal.md
deleted file mode 100644
index eb1db945..00000000
--- a/src/system/terminal.md
+++ /dev/null
@@ -1,132 +0,0 @@
-# Terminal
-
-The [terminal syntax](https://github.com/kinode-dao/kinode?tab=readme-ov-file#terminal-syntax) is specified in the main Kinode repository.
-
-## Commands
-
-All commands in the [terminal](https://github.com/kinode-dao/kinode/tree/main/kinode/packages/terminal) call scripts — a special kind of process.
-Kinode comes pre-loaded with a number of scripts useful for debugging and everyday use.
-These scripts are fully named `