
\documentclass{book}
\usepackage{amsfonts,amssymb,amsmath,amsthm}
\usepackage[scale=0.75]{geometry}
\usepackage{hyperref}
\begin{document}
\tableofcontents
\chapter{System Overview}
% I should replace "consumer" with "user".
The development of the internet was undoubtedly one of the greatest achievements of the 20th
century, and the internet's killer app, the web, has reshaped our lives and the way we do
business. But for all the benefits we have received from these technologies, there have been
corresponding costs. It's now possible to cheaply surveil entire populations and discern their
preferences and responses so that propaganda can be effectively created and distributed. This
surveillance is not surreptitious; it's quite overt. Consumers hand over this data willingly to
tech companies because of the benefits they receive in return. But why should people be forced to
choose between privacy and convenience?

Computing power, storage, and bandwidth are all very cheap. A single-board computer
can provide more than enough computing power for the average web-browsing consumer. A classic
rotating magnetic hard drive can hold terabytes of data, more than enough to hold an individual's
social media output. Unmetered broadband internet access measured in hundreds of megabits per
second is common, and becoming more so all the time. So with all these resources available,
why is it that consumers do not have more control over the computing infrastructure upon which
they rely?

The issue is that consumers don't want to manage servers, networking, routing, and
backup strategies. Each of these can be a full-time job by itself. So in order for a consumer to
be in control of their own infrastructure, the infrastructure needs to be able to take care of
itself.

Perhaps as important as consumer agency over their computing infrastructure is the security of
that infrastructure. Time and time again, consumer trust has been violated by data breaches and
service disruptions. These incidents show the need for a more secure and standardized system for
storing application data and building web services. Such a system must provide both
confidentiality of consumer data and the ability to authenticate consumers without the
need for insecure techniques, such as passwords.

This document proposes a potential solution. It describes a system for organizing information into
trees of blocks, the distribution of those blocks over a network of nodes, and a programming interface
to access this information. Because no one piece of hardware
is infallible, the system also includes mechanisms for nodes to contract with one another to store
data. This allows data to be backed up and later restored in case a node is lost. In order to
ensure the free exchange of data amongst nodes, a digital currency is used to account for the
available storage capacity of the network.

The remainder of this chapter gives an overview of the system; the rest of the document goes into
specific details of each of the system's components.
\section{Blocks}
User data stored in this system is organized into structures called \emph{block trees}. Every block tree
is identified by a public key. The private key that corresponds to a block tree's public key is
required to control that tree. Any person who has the private key for a block tree is called that
tree's owner.

Computers participating in this system are called \emph{nodes}. Nodes are also identified by public keys, but
these keys are not directly tied to block trees. Nodes that have access to a block tree's data are said
to be \emph{attached} to that block tree. Nodes can be attached to multiple block trees at once, or to none
at all.

Block trees are, of course, trees of \emph{blocks}. Every block is identified by a string called a \emph{path},
which describes its location in the tree. The root of this path is a hash (hex encoded)
of the tree's public key, allowing blocks from any tree to be referred to. A block consists of three segments:
a header, a payload, and a signature.

The payload is encrypted by a symmetric cipher using a randomly generated key. This randomly
generated key is called the \emph{block's key}. To allow access to the payload, the block's key is encapsulated
using other keys and the resulting ciphertexts are stored in the block's header. These encapsulated keys
are referred to as read capabilities, or \emph{read caps} for short.

The root block of every block tree contains a read cap for the block tree's public key.
Every non-root block contains a read cap
for the block's parent, which is to say the block's key is encapsulated using its parent's block key.
So anyone who has a read cap for a block can read the data in all blocks descended from that
block. Because the owner of a block tree has a read cap for the root block, they can read all data
stored in the tree. Other people (or nodes) can be given access to a subtree by granting them a read
cap for the subtree's root. A block which contains public data is stored as cleartext with no read caps.

While read caps provide for confidentiality, write caps provide for integrity. A \emph{write cap}
for a block is a certificate chain which terminates at a certificate signed by the block tree's
owner. Thus a self-signed certificate made using the tree's private key is a valid write cap
for any block in the tree. By allowing a chain of certificates to be used, it's possible for
the owner to give other people or nodes the ability to write data into their tree. The scope of
this access is controlled by specifying, in the certificate, the path under which writing is allowed.
A write cap for a block is only valid if the path of the block is contained in the path
specified in every certificate in the chain.

Both the header and the payload of a block are protected using a private key signature. The writer
of the block computes this signature using the private key which corresponds to the write cap
for the block they're trying to write. In order to validate a block, this signature is validated, then
the write cap is validated, and finally the hash of the public key of
the last signer in the write cap chain is compared to the root of the block's path. If these match,
then the block is valid, as this means that an owner has given permission for the writer to write
into their tree at this path.
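The order of these checks can be summarized in the following sketch. The types \texttt{Block} and
\texttt{Cert} and the helpers \texttt{verify\_sig}, \texttt{hash\_hex}, and \texttt{contains} are
hypothetical names used only for illustration; they are not part of any existing library or of this
specification.
\begin{verbatim}
// Illustrative only; Block, Cert, verify_sig, hash_hex and contains are
// hypothetical names, not an existing API.
fn validate_block(block: &Block) -> bool {
    let chain: &[Cert] = &block.header.write_cap; // first = writer, last = owner
    let writer = chain.first().unwrap();

    // 1. The header and payload must be signed by the writer's key.
    if !verify_sig(&writer.public_key, &block.signed_bytes(), &block.signature) {
        return false;
    }

    // 2. The write cap must be valid: each certificate is signed by the next
    //    one in the chain, and every certificate's path contains the block's path.
    for (cert, issuer) in chain.iter().zip(chain.iter().skip(1)) {
        if !verify_sig(&issuer.public_key, &cert.signed_bytes(), &cert.signature) {
            return false;
        }
    }
    if !chain.iter().all(|cert| contains(&cert.path, &block.path)) {
        return false;
    }

    // 3. The hash of the last signer's public key must match the root of the
    //    block's path, proving the tree's owner authorized this chain.
    let owner = chain.last().unwrap();
    verify_sig(&owner.public_key, &owner.signed_bytes(), &owner.signature)
        && hash_hex(&owner.public_key) == block.path.root()
}
\end{verbatim}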
Accessing the data in a block requires several cryptographic operations, both for validation and
for decryption. Because of this, it's important that blocks are relatively large, on the order of
4 MB, to amortize the cost of these operations.
\section{Fragments}
By itself this block structure would be useful for building a secure filesystem, but in order to
be a durable storage system we need an efficient way of distributing data for redundancy and
availability. This is the purpose of fragments.

Blocks are distributed amongst the nodes in the network using a fountain code. The output symbols
of this code are referred to as \emph{fragments}. A code with a high-performance implementation and good
coding efficiency was an important design consideration for the system. For these reasons the
RaptorQ code was chosen.
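As a concrete illustration of how a block might be split into fragments and later reconstructed,
the following Rust sketch uses the open-source \texttt{raptorq} crate. The symbol size and
repair-packet count shown are illustrative choices, not values mandated by this design.
\begin{verbatim}
// Sketch using the `raptorq` crate; the parameters are illustrative only.
use raptorq::{Decoder, Encoder};

fn main() {
    // Stand-in for the serialized, encrypted contents of a 4 MB block.
    let block_bytes = vec![0u8; 4 * 1024 * 1024];

    // Encode into packets of at most 1400 bytes, plus 20 repair packets.
    let encoder = Encoder::with_defaults(&block_bytes, 1400);
    let fragments = encoder.get_encoded_packets(20);

    // Any sufficiently large subset of the fragments rebuilds the block.
    let mut decoder = Decoder::new(encoder.get_config());
    let mut recovered = None;
    for fragment in fragments {
        recovered = decoder.decode(fragment);
        if recovered.is_some() {
            break;
        }
    }
    assert_eq!(recovered.unwrap(), block_bytes);
}
\end{verbatim}
In the actual system, each node would receive and store only a handful of these encoded packets
rather than the whole set.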
In order to preserve the data in a newly created block, a node will need to distribute
fragments to other nodes. It does this by advertising its desire to trade [currency]
in its block tree for the storage of these fragments. \emph{[currency]} is a fungible
token for the exchange of computing resources between nodes. Every block tree has
some non-negative value for the amount of [currency] it controls. Nodes that are attached
to a tree spend the tree's [currency] when paying other nodes for the storage of fragments.

If another node is interested in making the exchange, it contacts the advertising node
and both sign a contract. A \emph{contract} is a data structure signed by both nodes which
states the hash of the fragment being stored and the amount of [currency] being exchanged
for its storage. The contract is then stored in the public block tree (to be discussed below),
so that [currency] can be transferred between nodes and to create an accountability mechanism
to prevent the storing node from acting in bad faith and deleting the fragment.
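A minimal sketch of the information such a contract might carry is given below; the field names and
types are assumptions made for illustration, not a normative wire format.
\begin{verbatim}
// Hypothetical layout; field names and types are illustrative only.
struct Contract {
    fragment_hash: [u8; 32],    // hash of the fragment being stored
    amount: u64,                // amount of [currency] exchanged for storage
    paying_tree: [u8; 32],      // hash of the paying block tree's public key
    storing_node: [u8; 32],     // node ID of the node storing the fragment
    payer_signature: Vec<u8>,   // signature of the node spending the tree's [currency]
    storer_signature: Vec<u8>,  // signature of the storing node
}
\end{verbatim}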
When a node needs to retrieve a block that was previously distributed in fragments, it connects to a
subset of the nodes containing the fragments and downloads enough of them to reconstruct
the block. These downloads can be performed concurrently for greater speed. This same mechanism
can be used to distribute public blocks to unaffiliated nodes. It facilitates load balancing
and performance, as concurrent downloads
spread the load over multiple nodes and are not limited by the bandwidth between any pair of nodes.

The list of nodes containing the fragments of a block is called the block's \emph{node list}.
A block's node list is stored in its parent. This allows any non-root block to be retrieved.
To allow the root block to be retrieved, its node list is stored in the public block tree.
\section{The Public Blocktree}
\emph{The Public Block Tree} is a block tree which is known to all nodes. This is accomplished by
providing all nodes with a hardcoded list of nodes that are attached to the public block tree.
This is similar to the list of root DNS servers distributed with any networked operating system.
Because the public block tree is only used for storing information that should be known to all
nodes in the network, the payload of every block in it is cleartext. The public block tree serves
only to facilitate the communication and exchange of data between nodes.

One way that it does this is by containing a database of nodes and their IP addresses. A node
which has a write cap to this database will only store an entry for a node if that node can provide
a valid signed request. This signed request is stored in the database verbatim, so that other nodes can
independently verify its validity. Thus the nodes in the network can use this database to securely resolve
the IDs of other nodes to their IP addresses.

The other function of the public block tree is to contain a list of transactions and disputes.
This list is referred to as the \emph{public log}.
When a node is created, an event is logged detailing the amount of [currency] the node is worth.
When a node is first attached to a block tree, this [currency] is then removed from the node
and added to the block tree. When a node signs a contract with another node, the contract is stored in the
log and [currency] is removed from the sending node's block tree and added to the receiving node's.

In order to discourage nodes from receiving payment for the storage of a fragment and then deleting
the fragment to reclaim disk space, a reporting mechanism exists. If a node is unable to retrieve
a fragment that it previously stored with another node, then it sends an event to the log
indicating this. The other node can then respond by sending an event which contains the actual
fragment which was requested. This allows all the nodes in the network to view the log and
see whether a node that they are considering signing a contract with is trustworthy. If that node is
not the defendant in any disputes, then it should be safe. If it is in one, but responded
quickly with the fragment, then it could have been a transient network issue. If it never
responded, then it is risky and should perhaps receive a lower payment for the storage
of the fragment.

Finally, the public block tree stores node lists for the root blocks of every block tree.
This ensures that even if every node that participates in a block tree fails, the block
tree can still be recovered from its fragments, provided its private key is known.
\section{Nodes and the Network}
Each node in the network has a public-private keypair. The string formed by hex encoding the
hash of a node's public key is referred to as the \emph{node ID} of the node. When nodes
are manufactured they are issued a certificate trusted by the
public block tree. New nodes are claimed by issuing them a certificate and then writing
that certificate into the public log. When a new node is claimed, currency is credited to
the block tree which claimed it. This currency accounts for the storage capacity
that the new node brings to that block tree. This mechanism is the reason why the node must have a
certificate trusted by the public block tree; otherwise there would be no way to control the
creation of currency.

Nodes are identified by their node ID in the public block tree and in node lists. Nodes
are responsible for updating their IP address in the public block tree whenever it changes.

When a node is attached to a block tree it is issued a certificate containing a path
under which its data will be stored. We say that the node is attached to the block tree at that path.
The node which issues this certificate also creates a read cap for the new node and stores it in the
block where the node is attached.

The data created by a node may optionally be replicated in its parent node. This would be suitable
for a lightweight or mobile device which needs to ensure its data is replicated immediately and
doesn't have time to negotiate contracts for the storage of fragments. For larger
block trees, having non-replicating nodes is essential for scalability.

More than one node can be attached at the same path, and when this happens a \emph{cluster} is formed.
Each node in the cluster stores
copies of the same data and they coordinate with each other to ensure the consistency of this
data. This is accomplished by electing a leader. All writes to blocks under the attachment point are
sent to the leader. The leader then serializes these writes and sends them to the rest of the
nodes. By default writes to blocks use optimistic concurrency, with the last write known to the
leader being the winner. But if a node requires exclusive access to a block it can
make a request to the leader to lock it. Writes from nodes other than the locking node are rejected until
the lock is released. The leader will release the lock on its own if no messages are received from
the locking node after a timeout.

If the attachment point is configured to be replicated to its parent, then the leader will maintain
a connection to the leader in the parent cluster. Note that the parent cluster need not be
housed in the parent block, just at some ancestor block. Writes will then be propagated through
this connection to the parent cluster, where this process may continue if that cluster is also
configured for replication. Distributed locking is similarly communicated to the parent cluster,
where the lock is only acquired with the parent's approval.
\section{Programmatic Access to Data}
No designer can hope to envision all the potential applications that a person would want to have
access to their data. That's why an important component of the system is the ability to run
programs that can access data and provide services to other internet hosts, whether they are
blocktree nodes or not. This is accomplished by providing a WebAssembly-based execution
environment with a system interface based on WASI. Information in the blocktree is available
to programs using standard filesystem system calls that specify paths in a special directory.

While some programs may wish to access blocks directly in this manner, others may wish to use
an API at a higher level of abstraction. Thus there is also an API for creating arbitrarily sized
files that will get mapped to fixed-size blocks, freeing the programmer from having to implement
this themselves.
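For example, a program compiled to WebAssembly could read blocktree data with ordinary file
operations, along the lines of the sketch below. The mount point \texttt{/blocktree} and the path
components shown are hypothetical; the design only requires that blocks appear under a special
directory.
\begin{verbatim}
use std::fs;
use std::io::Read;

fn main() -> std::io::Result<()> {
    // Hypothetical path: a special directory followed by a block path whose
    // root component is the fingerprint of a block tree's public key.
    let path = "/blocktree/1!ExampleFingerprint/apps/notes/data";

    // Ordinary WASI filesystem calls; the runtime resolves them to block reads.
    let mut contents = String::new();
    fs::File::open(path)?.read_to_string(&mut contents)?;
    println!("read {} bytes", contents.len());
    Ok(())
}
\end{verbatim}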
Data provided by these filesystem APIs will be the most up-to-date versions known to the node.
There's the possibility that conflicts with other nodes may cause writes made by programs on the
node to be rolled back or overwritten. Of course, locks can be taken on files and blocks if a
program must ensure exclusive access to data. Finally, an inotify-like API is provided for programs
to be notified when changes to blocks occur.

An important consideration for the design of this system was to facilitate the creation of web
servers and other types of internet hosts which can serve data stored in a blocktree. For this
reason there is a high-level callback-based API for declaring HTTP handlers, as well as handlers
for blocktree-specific messages.
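The precise shape of this API is not defined in this overview; the sketch below only illustrates
the intended callback style, and every name in it (\texttt{Runtime}, \texttt{Context},
\texttt{on\_http\_get}, and so on) is a placeholder.
\begin{verbatim}
// Purely illustrative; Runtime, Context, Request, Response and Message are
// placeholder types, not an existing API.
fn register_handlers(rt: &mut Runtime) {
    // Serve a file stored in the blocktree in response to an HTTP GET.
    rt.on_http_get("/notes/{id}", |ctx: &Context, req: &Request| {
        let body = ctx.read_block_file(&format!("apps/notes/{}", req.param("id")))?;
        Ok(Response::ok(body))
    });

    // Handle a blocktree-specific message sent by another node.
    rt.on_blocktree_message("notes.sync", |ctx: &Context, msg: &Message| ctx.apply_sync(msg));
}
\end{verbatim}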
In order to provide the consumer with control over how their data is used, a permissions system
exists to control which blocks and APIs programs have access to. For instance, a consumer would
have to grant special permission for a program to be able to access the Berkeley sockets API.

Programs are installed by specifying a blocktree path. This is a secure and convenient method of
distribution, as these programs can be downloaded from the nodes associated with the root of their
path and the downloaded blocks can be cryptographically verified to be trusted by the root
key. Authors wishing to distribute their programs in this manner will of course need to make the
blocks containing them public (unencrypted), or else provide some mechanism for selective access.
\chapter{Concepts}
\section{Blocks}
A block is a sequence of bytes and a sequence of events. The sequence of events defines how the
sequence of bytes came to be in its current state. At any time the sequence of bytes can
be recreated by replaying the events starting with an empty sequence of bytes. Thus the sequence of
events is considered the canonical form of the block, with the sequence of bytes simply enabling
efficient reads.

Blocks are hierarchical, with every block having at most one parent and zero or more children.
If a block has children then it is called a directory, and its children are called its directory
entries. This hierarchical structure allows us to identify blocks using their position in the
hierarchy.
\section{Paths}
A path is a globally unique identifier assigned to a block. The path of the block defines its
position in the hierarchy of blocks. The syntax of a path is as follows:
\begin{verbatim}
COMP ::= '[\w-_\.]'+
RelPath ::= COMP ('/' COMP)*
AbsPath ::= '/' RelPath*
\end{verbatim}
In other words, a path is a sequence of components, represented textually as `/'-separated
fields. The empty sequence of components is called the root path.
The root path is a directory, and its entries are the blocks of the global blocktree and links
to every private blocktree.

A path to a private blocktree has only
one component, consisting of a hash of the private blocktree's root public signing key. Any path
with only one component, where that component is a fingerprint
of the blocktree's root public key, is a valid path to the blocktree. These paths are
called the root paths of the private blocktree, and any path that begins with one of them
is simply called a private path. Conversely, paths that do not begin with them are called public
paths.
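For example, the following are well-formed paths, written using the fingerprint syntax defined in
the Principals section below. The fingerprint is abbreviated and all other component names are
purely illustrative.
\begin{verbatim}
/                                   the root path
/1!vJ3x...                          a root path of a private blocktree
/1!vJ3x.../apps/notes/data          a private path within that blocktree
/some/public/entry                  a public path in the global blocktree
\end{verbatim}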
If one path is a prefix of a second, then we say the first path contains the second and that the
second is nested in the first.
Thus every path is contained in the root path, and every private path is contained in the root path
of a private blocktree.
Note that if path one is equal to path two, then path one also contains path two, and vice-versa.
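Containment is just a component-wise prefix check. A minimal Rust sketch, assuming paths have
already been split into components, might look like this:
\begin{verbatim}
/// Returns true if `container` contains `path`, i.e. if `container` is a
/// component-wise prefix of `path`. Equal paths contain each other.
fn contains(container: &[&str], path: &[&str]) -> bool {
    container.len() <= path.len()
        && container.iter().zip(path.iter()).all(|(a, b)| a == b)
}

// contains(&["a", "b"], &["a", "b", "c"]) == true
// contains(&["a", "b"], &["a", "c"])      == false
\end{verbatim}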
In addition to identifying blocks, paths are used to scope capabilities and to address messages.
\section{Principals}
A principal is any entity which can be authenticated.
All authentication in the blocktree system is performed using digital signatures, and as such
principals are identified by a cryptographic hash of their public signing key. Support
for the following hash algorithms is required:
\begin{enumerate}
\item SHA2 256
\item SHA2 512
\end{enumerate}
These are referred to as the Standard Hash Algorithms, and a digest computed using one of them is
referred to as a Standard Hash.

When a principal is identified in a textual representation, such as in a path,
the following syntax is used:
\begin{verbatim}
PrincipTxt ::= <hash algo index> '!' <Base64Url(hash)>
\end{verbatim}
where ``hash algo index'' is the base 10 string representation of the index of the hash algorithm in
the above list, and ``Base64Url(hash)'' is the Base64Url encoding of the hash data identifying the
principal. Such a textual representation is referred to as a fingerprint of the public key from
which the hash was computed. In a slight abuse of language, we sometimes refer to the fingerprint or
hash data as a principal, even though these data merely identify a principal.
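As an illustration, computing a fingerprint with SHA2 256 (assumed here to have index 1 in the list
above) might look like the following Rust sketch, which uses the \texttt{sha2} and \texttt{base64}
crates. Whether the Base64Url encoding is padded is not specified here; the unpadded variant is an
assumption of this example.
\begin{verbatim}
// Sketch using the sha2 and base64 crates; the unpadded Base64Url variant
// and the index value 1 are assumptions of this example.
use base64::{engine::general_purpose::URL_SAFE_NO_PAD, Engine as _};
use sha2::{Digest, Sha256};

fn fingerprint(public_signing_key: &[u8]) -> String {
    let hash = Sha256::digest(public_signing_key);
    // "1" is assumed to be the index of SHA2 256 in the Standard Hash Algorithms.
    format!("1!{}", URL_SAFE_NO_PAD.encode(hash))
}
\end{verbatim}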
Principals can be issued capabilities which allow them to read and write to blocks.
These capabilities are scoped by paths, which limit the set of blocks the capability can be used
on. A capability can only be used on a block whose path is contained in the path of the
capability. This access control mechanism is enforced cryptographically.

A principal can grant a capability to another principal so long as the granted capability has a
path which is contained in the path of the capability possessed by the granting principal. The
specific mechanism for how this is done differs depending on whether the capability is for reading
or writing to a block.

Every private blocktree is associated with a root principal. The path containing only one
component, which consists of the fingerprint of the principal's public key, is a root path to the
private block tree. The root principal has read and write capabilities for the private blocktree's
root, and so can grant subordinate capabilities scoped to any block in the private blocktree.
\section{Readcaps}
In order to protect the confidentiality of the data stored in a block, a symmetric cipher is used.
The key for this cipher is called the block key, and by controlling access to it we can control which
principals can read the block.

The metadata of every block contains a dictionary of zero or more entries called read capabilities,
or readcaps for short. A key in this dictionary is a hash of the public signing key of the principal
for which the readcap was issued, and the value is the encryption of the block key using the
principal's public encryption key.

The block key is also encrypted using the block key of the parent block, and the resulting cipher
text is also stored in the block's metadata. This is referred to as the inherited readcap.
Hence, a principal which has a read capability for a
given path can read all paths contained in it as well. Further, a principal can use its
private encryption key to decrypt the block key, and then re-encrypt it using the public encryption
key of another principal, thus allowing the principal to grant a subordinate readcap.
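The grant operation just described, decrypting the block key with one's own readcap and
re-encrypting it for the grantee, can be sketched as follows. The dictionary layout and the closures
standing in for the public-key encryption scheme are placeholders.
\begin{verbatim}
use std::collections::HashMap;

// Placeholder types; the actual encryption scheme is not specified here.
type PrincipalHash = [u8; 32];
type CipherText = Vec<u8>;

/// readcaps maps a principal's key hash to the block key encrypted for it.
fn grant_readcap(
    readcaps: &mut HashMap<PrincipalHash, CipherText>,
    my_principal: &PrincipalHash,
    my_decrypt: impl Fn(&CipherText) -> Vec<u8>,   // uses my private encryption key
    grantee: PrincipalHash,
    grantee_encrypt: impl Fn(&[u8]) -> CipherText, // uses the grantee's public encryption key
) {
    // Recover the block key from our own readcap, re-encrypt it for the
    // grantee, and record the new entry in the dictionary.
    let block_key = my_decrypt(&readcaps[my_principal]);
    readcaps.insert(grantee, grantee_encrypt(&block_key));
}
\end{verbatim}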
A block that does not require confidentiality protection need not be encrypted. In this case, the
table of readcaps is empty and the inherited readcap is set to a flag value to indicate the block
is stored as plaintext. Blocks in the global blocktree are never encrypted.
\section{Writecaps}
A write capability, or writecap for short, is a certificate chain which extends from a node's
credentials back to the root signing key of a private blocktree.
Each certificate in this chain contains
the public key of the principal who issued it and the path that it grants write capabilities to.
The path of a certificate must be contained in the path of the previous certificate in the chain,
and the chain must terminate in a certificate which is self-signed by the root signing credentials.
Each certificate also contains an expiration time, represented by an Epoch value, and a writecap
shall be considered invalid by a node if any certificate has an expiration time which it judges
to be in the past.
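A minimal sketch of the data carried by each certificate is given below; the field names and types,
and the Epoch representation, are placeholders.
\begin{verbatim}
// Illustrative layout only; names and types are placeholders.
struct Certificate {
    issuer_public_key: Vec<u8>, // public key of the principal who issued it
    path: Vec<String>,          // path it grants write capabilities to
    expires_epoch: u64,         // expiration time as an Epoch value
    signature: Vec<u8>,
}

/// Covers only the expiration rule; signature verification and the
/// path-nesting rule between consecutive certificates are omitted.
fn writecap_unexpired(chain: &[Certificate], now_epoch: u64) -> bool {
    chain.iter().all(|cert| cert.expires_epoch > now_epoch)
}
\end{verbatim}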
A block's metadata contains a writecap, which allows the metadata to be verified by checking its
digital signature using the first certificate in the writecap, and then checking that the
writecap itself is valid. This ensures that the chain of trust from the root signing credentials extends
to the block's metadata. To extend this chain to the block's data, a Merkle Tree is used, as
described below. A separate mechanism is used to ensure the integrity of blocks in the global
block tree.
\section{Sectors}
A block's data is logically broken up into fixed-size sectors, and these are the units of
confidentiality protection, integrity protection, and consensus. When a process performs writes on
an open block, those writes are buffered until a sector is filled (or until the process calls
flush). Once this happens, the contents of the buffer are encrypted using the block key and then
hashed. This hash, along with the offset at which the write occurred, is then used to update the
Merkle Tree for the block. Once the new root of the Merkle Tree is computed, it is copied into the
block's metadata. After ensuring its writecap is copied into the metadata, the node then signs
the metadata using its private signing key.

When the block is later opened, its metadata is verified using the process described in the previous
section. Then the value of the root Merkle Tree node in the metadata is compared to the value in the
Merkle Tree for the block. If they match, then reads at any offset into the block can be verified
using the Merkle Tree. Otherwise, the block is rejected as invalid.
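The per-sector write path can be sketched as follows; the types and helper functions are
placeholders, and only the order of operations (encrypt, hash, update the Merkle Tree, copy the root
and writecap into the metadata, sign) is taken from the description above.
\begin{verbatim}
// Placeholder types and helpers; only the order of operations reflects the
// design described above.
fn flush_sector(block: &mut Block, sector: u64, plaintext: &[u8], key: &SigningKey) {
    // Encrypt the buffered sector with the block key, then hash the ciphertext.
    let ciphertext = encrypt(&block.block_key, plaintext);
    let sector_hash = hash(&ciphertext);

    // Update the block's Merkle Tree at this sector and record the new root
    // (and the node's writecap) in the block's metadata.
    block.merkle_tree.update(sector, sector_hash);
    block.metadata.merkle_root = block.merkle_tree.root();
    block.metadata.writecap = block.writecap.clone();

    // Finally, sign the metadata with the node's private signing key.
    block.metadata.signature = sign(key, &block.metadata.to_bytes());

    block.storage.write_sector(sector, &ciphertext);
}
\end{verbatim}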
The sector size can be configured for each block individually at creation time,
but cannot be changed afterwards.
However, the same effect can be achieved by creating a new block with the desired sector
size, copying the data from the old block into it, deleting the old block, and renaming the new one.

The choice of sector size for a given block represents a tradeoff between the amount of space the
Merkle Tree occupies and the latency experienced by the initial read of
a sector. With a larger sector size, the Merkle Tree size is reduced, but more data has to be
hashed and decrypted when each sector is read. Depending on the size of the block and its intended
usage, the optimal choice will vary.

It's important to note that the entire Merkle Tree for a block is held in memory for as long as the
block is open. So, assuming the 32 byte SHA2 256 hash and the default 4 KiB sector size are used,
if a 3 GiB block is opened then its Merkle Tree will occupy approximately 64 MiB of memory, but will
enable fast random access. Conversely, if 64 KiB sectors are used, the Merkle Tree for the same
3 GiB block will occupy approximately 4 MiB, but its random access performance may suffer from
increased latency.
\section{Nodes}
The independent computing systems participating in the Blocktree system are called nodes. A node
need not run on hardware distinct from other nodes, but it is logically distinct from all other
nodes. In all contemporary operating systems a node can be implemented as a process.

A node possesses its own unique credentials, which consist of two public-private key pairs,
one used in an encryption scheme and another in a signing scheme. A node is a Principal,
and a hash of its public signing key is used to identify it. In a slight abuse of language, this
hash is referred to as the node's principal.
Nodes are also identified by paths.

A node is responsible for storing the blocks contained in the directory where it is
attached, unless there is a second node whose parent directory is contained in the parent directory
of the first and which contains the block's path.
In other words, the node whose parent directory is closest to the block is responsible for storing
the block.
This allows data storage to scale as more nodes are added to a blocktree.

When a directory contains more than one node, a cluster is formed.
The nodes in the cluster run the Raft consensus protocol in order to agree on the sequence
of events which constitutes each of the blocks contained in the directory where they're attached.
This allows for redundancy, load balancing, and increased performance, as different sectors
of a block can be read from multiple nodes in the cluster concurrently.
\section{Processes}
Nodes run code, and that running code is called a process. Processes are spawned by the node on
which they're running, based on configuration stored in the node's parent directory.
The code which they're spawned
from is also stored in a block, though that block need not be contained in the directory of the
node running it. So, for example, a software developer can publish code in their
blocktree, and a user can run it by updating the configuration of a node with the path it was
published to.

Code which is stored in its own directory with a manifest describing it is called an app.
There are two types of apps: portable and native. A portable app is a collection of WebAssembly
(Wasm) modules which are executed by the node in a special Wasm runtime which exposes a
messaging API that can be used to perform block IO and communicate with other processes.
Native apps, on the other hand, are container images containing binaries compiled to a specific
target architecture. These containers can access blocktree messaging services via a native library
using the C ABI, and they have access to blocks via a POSIX-compatible
filesystem API. The software which runs the node itself is distributed using this mechanism.
Regardless of their type, all apps require permissions to communicate with the outside world, with
the default only allowing them to communicate with their parent.
\chapter{Nodes}
\chapter{The Network}
\chapter{Application Programming Interface}
\chapter{Example Applications}
\end{document}
  379. \end{document}