\documentclass{article}
\usepackage[scale=0.8]{geometry}
\usepackage{hyperref}
\usepackage{graphicx}
\title{Blocktree: A Distributed Computing Environment}
\author{Matthew Carr}
\begin{document}
\maketitle
\begin{abstract}
This document is a proposal for a distributed computing environment called Blocktree.
The system is designed around the actor model,
and it uses actors to encapsulate resources and provide services.
The platform is responsible for orchestrating these actors on a set of native operating system processes.
The persistent state for the system is stored in a global distributed filesystem implemented using
this actor runtime.
High availability is achieved using the Raft consensus protocol to synchronize the state of files between processes.
All data stored in the filesystem is secured with strong integrity and optional confidentiality protections.
Well-known cryptographic constructions are used to provide this protection;
the system does not attempt to innovate in terms of cryptography.
A network block device interface allows for fast low-level read and write access to file sectors,
with full support for client-side encryption.
The system's trust model allows for mutual TLS authentication between all processes,
without the need to trust a third-party certificate authority.
By integrating these ideas into a single platform,
the system aims to advance the status quo in the security and reliability of software systems.
\end{abstract}
\section{Introduction}
% The "Big" Picture.
Blocktree is an attempt to extend the Unix philosophy that everything is a file
to the entire distributed system that comprises modern IT infrastructure.
The system is organized around a global distributed filesystem which defines security
principals, resources, and their authorization attributes.
This filesystem provides a language for access control that can be used to securely grant
access to resources, even those owned by different organizations.
The system provides an actor runtime for orchestrating services.
Resources are represented as actors,
and actors are executed by runtimes in different operating system processes.
Each process has its own credentials which authenticate it as a unique security principal,
and which specify the filesystem path where it is located.
A process has authorization attributes which determine the set of processes that it may communicate
with.
TLS authentication is used to secure connections between processes.
Messages addressed to actors in a different process are forwarded over these connections,
while messages delivered to actors in the same process are delivered with zero copying.
% Self-certifying paths and the chain of trust.
The single global Blocktree filesystem is partitioned into disjoint domains of authority.
Each domain is controlled by a root principal.
As is the case for all principals,
a root principal is authenticated by a public-private signing key pair
and is identified by the base64url encoded hash of its public signing key.
The domain of authority for a given absolute path is determined by its first component,
which is the identifier of the root principal that controls the domain.
Because the directory \texttt{/} has no meaning,
a path consisting of only a single component equal to a root principal's identifier is
referred to as the root directory of the domain.
The root principal delegates its authority to write files to subordinate principals by issuing
them certificates which specify the path that the authority of the subordinate is limited to.
File data is signed for authenticity and a certificate chain is contained in its metadata.
This certificate chain must lead back to the root principal
and consist of certificates with correctly scoped authority in order for the file to be valid.
Given the path of a file and the file's contents,
this allows the file to be validated by anyone without the need to trust a third party.
Blocktree paths are called self-certifying for this reason.
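To make the scoping rule concrete, the following is a minimal sketch (not Blocktree's actual types; \texttt{Cert}, its field names, and the helper functions are hypothetical, and signature verification is elided entirely) of checking that a certificate chain's path scopes narrow correctly from the root principal down to a file's path:

```rust
// Hypothetical sketch: each certificate's scope must lie within its issuer's
// scope, the chain must start at the root principal's directory, and the
// final scope must cover the file being validated.
struct Cert {
    scope: String, // absolute path this certificate's authority is limited to
}

/// True if `child` equals `parent` or lies beneath it in the tree.
fn within(parent: &str, child: &str) -> bool {
    let p: Vec<&str> = parent.split('/').filter(|c| !c.is_empty()).collect();
    let c: Vec<&str> = child.split('/').filter(|c| !c.is_empty()).collect();
    c.len() >= p.len() && p.iter().zip(&c).all(|(a, b)| a == b)
}

/// Validate the scoping of a chain issued by `root_id` for `file_path`.
fn scopes_valid(root_id: &str, chain: &[Cert], file_path: &str) -> bool {
    let mut current = format!("/{root_id}");
    for cert in chain {
        if !within(&current, &cert.scope) {
            return false; // authority widened: chain is invalid
        }
        current = cert.scope.clone();
    }
    within(&current, file_path)
}

fn main() {
    let chain = vec![
        Cert { scope: "/abc123/org".into() },
        Cert { scope: "/abc123/org/team".into() },
    ];
    assert!(scopes_valid("abc123", &chain, "/abc123/org/team/file.txt"));
    assert!(!scopes_valid("abc123", &chain, "/abc123/other/file.txt"));
    println!("ok");
}
```

Because the check depends only on the path, the chain, and the root identifier embedded in the path's first component, anyone can run it without consulting a third party.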
% Persistent state provided by the filesystem.
One of the major challenges in distributed systems is managing persistent state.
Blocktree solves this issue with its distributed filesystem.
Files are broken into segments called sectors.
The sector size of a file can be configured when it's created,
but can't be changed later.
Reads and writes of individual sectors are guaranteed to be atomic.
The sectors which comprise a file and its metadata are replicated by a set of processes running
the sector service.
These service providers are responsible for storing the sectors of files contained in the
directory that contains the runtime in which each provider is running.
The actors providing the sector service in a given directory coordinate with one another using
the Raft protocol to synchronize the state of the sectors they store.
By partitioning the data in the filesystem based on directory,
the system can scale beyond the capabilities of a single consensus cluster.
Sectors can be integrity protected and verified without reading the entire file,
because each file has a Merkle tree of sector hashes associated with it.
Encryption can be optionally applied to sectors,
and when it is, the key is managed by the system.
The cryptographic mechanisms used to implement these protections are described in section 3.
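Since the sector size is fixed at creation, locating the data for a byte offset is simple arithmetic. The sketch below (function name is illustrative, not Blocktree's API) shows the mapping a client performs before issuing an atomic sector read:

```rust
// Illustrative sketch: map a byte offset within a file to the sector that
// must be read, given the file's fixed sector size.
fn sector_of(offset: u64, sector_size: u64) -> (u64, u64) {
    // Returns (sector index, offset within that sector).
    (offset / sector_size, offset % sector_size)
}

fn main() {
    let sector_size = 4096;
    assert_eq!(sector_of(0, sector_size), (0, 0));
    assert_eq!(sector_of(4096, sector_size), (1, 0));
    assert_eq!(sector_of(10_000, sector_size), (2, 1808));
    println!("ok");
}
```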
% Protocol contracts.
One of the design goals of Blocktree is to facilitate the creation of composable distributed
systems.
A major challenge to building such systems is the difficulty in pinning down bugs when they
inevitably occur.
Research into session types (a.k.a.\ behavioral types) promises to bring the safety benefits
of type checking to actor communication.
Blocktree integrates a session typing system that allows protocol contracts to be defined that
specify the communication patterns of a set of actors.
This model allows the state space of the set of actors participating in a computation to be defined,
and the state transitions which occur to be specified based on the types of received messages.
These contracts are used to verify protocol adherence statically and dynamically.
This system is implemented using compile-time code generation,
making it a zero-cost abstraction.
This frees the developer from dealing with the numerous failure modes that can occur in a
communication protocol.
% Implementation language and project links.
Blocktree is implemented in the Rust programming language.
It is currently only tested on Linux.
Running it on other Unix-like operating systems should be straightforward,
though FUSE support is required to mount the filesystem.
Its source code is licensed under the GNU Affero General Public License, Version 3.
It can be downloaded at the project homepage at \url{https://blocktree.systems}.
Anyone interested in contributing to development is welcome to submit a pull request
to \url{https://gogs.delease.com/Delease/Blocktree}.
If you have larger changes or architectural suggestions,
please submit an issue for discussion prior to spending time implementing your idea.
% Outline of the rest of the paper.
The remainder of this paper is structured as follows:
\begin{itemize}
\item Section 2 describes the actor runtime, service and task orchestration, and service
discovery.
\item Section 3 discusses the filesystem, its concurrency semantics, and its implementation.
\item Section 4 details the cryptographic mechanisms used to secure communication between
actor runtimes and to protect sector data.
\item Section 5 is a set of examples describing ways that Blocktree can be used to build systems.
\item Section 6 provides some concluding remarks.
\end{itemize}
\section{Actor Runtime}
% Motivation for using the actor model.
Building scalable fault-tolerant systems requires us to distribute computation over
multiple computers.
Rather than switching to a different programming model when an application scales beyond the
capacity of a single computer,
it is beneficial in terms of programmer time and program simplicity to begin with a model that
enables multi-computer scalability.
Fundamentally, all communication over an IP network involves the exchange of messages,
namely IP packets.
So if we wish to build scalable fault-tolerant systems,
it makes sense to choose a programming model built on message passing,
as this will ensure low impedance with the underlying networking technology.
% Overview of message passing interface.
That is why Blocktree is built on the actor model
and why its actor runtime is at the core of its architecture.
The runtime can be used to spawn actors, register services, dispatch messages immediately,
and schedule messages to be delivered in the future.
Messages can be dispatched in two different ways: with \texttt{send} and \texttt{call}.
A message is dispatched with the \texttt{send} method when no reply is required,
and with \texttt{call} when exactly one is.
The \texttt{Future} returned by \texttt{call} can be awaited to obtain the reply.
If a timeout occurs while waiting for the reply,
the \texttt{Future} completes with an error.
The name \texttt{call} was chosen to bring to mind a remote procedure call,
which is the primary use case this method was intended for.
Awaiting replies to messages serves as a simple way to synchronize a distributed computation.
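The send/call distinction can be sketched with standard-library channels. This is an illustrative model only: the real runtime is asynchronous (tokio) and \texttt{call} returns a \texttt{Future}, whereas this sketch blocks; the \texttt{Request} type and helper names are hypothetical.

```rust
// Sketch: `send` carries no reply channel; `call` attaches one and waits for
// exactly one reply, failing with an error if a timeout elapses first.
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

struct Request {
    body: String,
    reply_to: Option<mpsc::Sender<String>>, // None for `send`, Some for `call`
}

fn call(actor: &mpsc::Sender<Request>, body: &str) -> Result<String, mpsc::RecvTimeoutError> {
    let (tx, rx) = mpsc::channel();
    actor.send(Request { body: body.into(), reply_to: Some(tx) }).unwrap();
    // Like the Future returned by `call`, this completes with an error on timeout.
    rx.recv_timeout(Duration::from_secs(1))
}

fn main() {
    let (tx, rx) = mpsc::channel::<Request>();
    // A toy actor that echoes requests back to callers.
    thread::spawn(move || {
        for msg in rx {
            if let Some(reply) = msg.reply_to {
                let _ = reply.send(format!("echo: {}", msg.body));
            }
        }
    });
    assert_eq!(call(&tx, "ping").unwrap(), "echo: ping");
    println!("ok");
}
```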
% Scheduling messages for future delivery.
Executing actions at some point in the future or at regular intervals is a common task in computer
systems.
Blocktree facilitates this by allowing messages to be scheduled for future delivery.
The schedule may specify a one-time delivery at a specific instant in time,
or a repeating delivery with a given period.
These scheduling modes can be combined so that you can specify an anchoring instant
and a period whose multiples will be added to this instant to calculate each delivery time.
For example, a message could be scheduled for delivery every morning at 3 AM.
Messages scheduled in a runtime are persisted in the runtime's file.
This ensures scheduled messages will be delivered even if the runtime is restarted.
If a message has been delivered
and the schedule is such that it will never be delivered again,
it is removed from the runtime's file.
If a message is scheduled for delivery at a single instant in time,
and that delivery is missed,
the message will be delivered as soon as possible.
But if a message is periodic,
any deliveries which were missed due to a runtime not being active will never be sent.
This is because the runtime only persists the message's schedule,
not every delivery.
This mechanism is intended for periodic tasks or delaying work to a later time.
It is not for building hard realtime systems.
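The anchored periodic mode reduces to simple arithmetic: every delivery time is the anchor plus an integer multiple of the period. The sketch below uses plain seconds and a hypothetical function name; the actual schedule representation is not specified here.

```rust
// Sketch: compute the next delivery time for an anchored periodic schedule.
// Delivery times are anchor, anchor + period, anchor + 2*period, ...
fn next_delivery(anchor: u64, period: u64, now: u64) -> u64 {
    if now <= anchor {
        anchor
    } else {
        // Whole periods elapsed since the anchor, rounded up.
        let k = (now - anchor + period - 1) / period;
        anchor + k * period
    }
}

fn main() {
    // Anchor at t=100 with a period of 60: deliveries at 100, 160, 220, ...
    assert_eq!(next_delivery(100, 60, 0), 100);
    assert_eq!(next_delivery(100, 60, 100), 100);
    assert_eq!(next_delivery(100, 60, 101), 160);
    assert_eq!(next_delivery(100, 60, 160), 160);
    println!("ok");
}
```

Because only the schedule (anchor and period) is persisted, a restarted runtime recomputes the next delivery from the current time; intermediate missed deliveries are simply skipped, matching the behavior described above.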
% Description of virtual actor system.
One of the challenges when building actor systems is supervising and managing actors' lifecycles.
This is handled in Erlang through the use of supervision trees,
but Blocktree takes a different approach inspired by Microsoft's Orleans framework.
Orleans introduced the concept of virtual actors,
which are purely logical entities that exist perpetually.
In Orleans, one does not need to spawn actors nor worry about respawning them should they crash;
the framework takes care of spawning an actor when a message is dispatched to it.
This model also gives the framework the flexibility to deactivate actors when they are idle
and to load balance actors across different computers.
In Blocktree a similar system is used when messages are dispatched to services.
The Blocktree runtime takes care of routing these messages to the appropriate actors,
spawning them if needed.
A service must be registered in a runtime before messages can be routed to it.
The actors which are spawned based on this registration are called \emph{service providers} of the
service.
Services which directly use operating system resources,
such as those that listen on network sockets,
are often started immediately after registration so that they are available to external clients.
% Message addressing modes.
Messages can be addressed to services or specific actors.
When addressing a specific actor,
the message contains an \emph{actor name},
which is a pair consisting of the path of the runtime hosting the actor and the \texttt{Uuid}
identifying the specific actor in that runtime.
When addressing a service,
the message is dispatched using a \emph{service name},
which contains the following fields:
\begin{enumerate}
\item \texttt{service}: The path identifying the receiving service.
\item \texttt{scope}: A filesystem path used to specify the intended recipient.
\item \texttt{rootward}: A boolean describing whether message delivery is attempted towards or
away from the root of the filesystem tree. A value of
\texttt{false} indicates that the message is intended for a runtime directly contained in the
scope. A value of \texttt{true} indicates that the message is intended for a runtime contained
in a parent directory of the scope and should be delivered to a runtime which has the requested
service registered and is closest to the scope.
\item \texttt{id}: An identifier for a specific service provider.
\end{enumerate}
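Rendered as a Rust type, the fields above might look as follows. This is a hypothetical sketch, not Blocktree's actual definition; \texttt{Id} mirrors the "\texttt{Uuid} or \texttt{String}" rule with a placeholder for the real \texttt{Uuid} type.

```rust
// Hypothetical rendering of a service name; field names follow the list above.
#[allow(dead_code)]
enum Id {
    Uuid(u128), // stand-in for a real Uuid type
    Str(String),
}

#[allow(dead_code)]
struct ServiceName {
    service: String, // path identifying the receiving service
    scope: String,   // path specifying the intended recipient
    rootward: bool,  // search toward the root (true) or within scope (false)
    id: Id,          // opaque provider identifier
}

fn main() {
    let name = ServiceName {
        service: "/abc123/services/fs".into(),
        scope: "/abc123/org".into(),
        rootward: true,
        id: Id::Str("primary".into()),
    };
    assert!(name.rootward);
    assert_eq!(name.scope, "/abc123/org");
    println!("ok");
}
```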
The ID can be a \texttt{Uuid} or a \texttt{String}.
It is treated as an opaque identifier by the runtime,
but a service is free to associate additional meaning with it.
Every message has a header containing the names of the sender and receiver.
The receiver name can be an actor or service name,
but the sender name is always an actor name.
For example, to open a file in the filesystem,
a message is dispatched with \texttt{call} using the service name of the filesystem service.
The reply contains the name of the file actor spawned by the filesystem service which owns the opened
file.
Messages are then dispatched to the file actor using its actor name to read and write to the file.
% The runtime is implemented using tokio.
The actor runtime is currently implemented using the Rust asynchronous runtime tokio.
Actors are spawned as tasks in the tokio runtime,
and multi-producer, single-consumer channels are used for message delivery.
Because actors are just tasks,
they can do anything a task can do,
including awaiting other \texttt{Future}s.
Because of this, there is no need for the actor runtime to support short-lived worker tasks,
as any such use case can be accomplished by awaiting a set of \texttt{Future}s.
This allows the runtime to focus on providing support for services.
Using tokio also means that we have access to a high-performance multi-threaded runtime with
evented IO.
This asynchronous programming model ensures that resources are efficiently utilized,
and is ideal for a system focused on orchestrating services which may be used by many clients.
% Delivering messages over the network.
Messages can be forwarded between actor runtimes using a secure transport layer called
\texttt{bttp}.
The transport is implemented using the QUIC protocol, which integrates TLS for security.
A \texttt{bttp} client may connect anonymously or with credentials.
If an anonymous connection is attempted,
the client has no authorization attributes associated with it.
Only runtimes which grant others the execute permission allow connections from such clients.
If these permissions are not granted in the runtime's file,
anonymous connections are rejected.
When a client connects with credentials,
mutual TLS authentication is performed as part of the connection handshake,
which cryptographically verifies the credentials of each runtime.
These credentials contain the filesystem paths where each runtime is located.
This information is used to securely route messages between runtimes.
The \texttt{bttp} server is always authenticated during the handshake,
even when the client is connecting anonymously.
Because QUIC supports the concurrent use of many different streams,
it serves as an ideal transport for a message-oriented system.
\texttt{bttp} uses different streams for independent messages,
ensuring that head-of-line blocking does not occur.
Note that although data from separate streams can arrive in any order,
the protocol does provide reliable in-order delivery of data in any given stream.
The same stream is used for sending the reply to a message dispatched with \texttt{call}.
Once a connection is established,
messages may flow in both directions (provided both runtimes have execute permissions for the other),
regardless of which runtime is acting as the client or the server.
% Delivering messages locally.
When a message is sent between actors in the same runtime,
it is delivered into the queue of the recipient without any copying,
while ensuring immutability (i.e.\ move semantics).
This is possible thanks to the Rust ownership system:
the message sender gives ownership to the runtime when it dispatches the message,
and the runtime gives ownership to the recipient when it delivers the message.
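This hand-off of ownership is exactly what Rust channels express. The sketch below (using the standard library's channel as a stand-in for the runtime's delivery queue) shows that dispatch moves the message into the channel and delivery moves it back out, with no copy of the payload and no further access by the sender.

```rust
// Sketch of zero-copy local delivery via move semantics: the sender loses
// access to the message at dispatch, and the recipient gains sole ownership.
use std::sync::mpsc;

struct Message {
    payload: Vec<u8>,
}

fn main() {
    let (tx, rx) = mpsc::channel::<Message>();
    let msg = Message { payload: vec![1, 2, 3] };
    tx.send(msg).unwrap(); // `msg` is moved; using it after this would not compile
    let received = rx.recv().unwrap(); // recipient now owns the message
    assert_eq!(received.payload, vec![1, 2, 3]);
    println!("ok");
}
```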
% Security model based on filesystem permissions.
A runtime is represented in the filesystem as a file.
This file contains the authorization attributes which are associated with the runtime's security
principal.
The credentials used by the runtime specify the file, so other runtimes are able to locate it.
The metadata of the file contains authorization attributes just like any other file
(e.g.\ UID, GID, and mode bits).
In order for a principal to be able to send a message to an actor in the runtime,
it must have execute permissions for this file.
Thus communication between runtimes can be controlled using simple filesystem permissions.
Permissions checking is done during the \texttt{bttp} handshake.
Note that it is possible for messages to be sent in one direction in a \texttt{bttp} connection
but not in the other.
In this situation replies are permitted but unsolicited messages are not.
An important trade-off which was made when designing this model was that messages which are
sent between actors in the same runtime are not subject to any authorization checks.
This was done for two reasons: performance and security.
By eliminating authorization checks, messages can be more efficiently delivered between actors in the
same process,
which helps to reduce the performance penalty of the actor runtime over directly using threads.
Security is enhanced by this decision because it forces the user to separate actors with different
security requirements into different operating system processes,
which ensures all of the process isolation machinery in the operating system will be used to
isolate them.
% Representing resources as actors.
As in other actor systems, it is convenient to represent resources in Blocktree using actors.
This allows the same security model used to control communication between actors to be used for
controlling access to resources,
and for resources to be shared by many actors.
For instance, a Point-to-Point Protocol connection could be owned by an actor.
This actor could forward traffic delivered to it in messages over this connection.
The set of actors which are able to access the connection is controlled by setting the filesystem
permissions on the file for the runtime executing the actor owning the connection.
% Actor ownership.
The concept of ownership in programming languages is very useful for ensuring that resources are
properly freed when the value that owns them is dropped.
Because actors are used for encapsulating resources in Blocktree,
a similar system of ownership is employed for this reason.
An actor is initially owned by the actor that spawned it.
An actor can only have a single owner,
but the owner can grant ownership to another actor.
An actor is not allowed to own itself,
though it may be owned by the runtime.
When the owner of an actor returns,
the actor is sent a message instructing it to return.
If it does not return after a timeout,
it is interrupted.
This is the opposite of how supervision trees work in Erlang:
instead of the parent receiving a message when the child returns,
the child receives a message when the parent returns.
Service providers spawned by the runtime are owned by it.
They continue running until the runtime chooses to reclaim their resources,
which can happen because they are idle or the runtime is overloaded.
Note that ownership is not limited to a single runtime,
so distributed resources can be managed by owning actors in many different runtimes.
% Message routing to services.
A service is identified by a Blocktree path.
Only one service implementation can be registered in a particular runtime,
though this implementation may be used to spawn many actors as providers for the service,
each associated with a different ID.
The runtime spawns a new actor when it finds no service provider associated with the ID in the
message it is delivering.
Some services may only have one service provider in a given runtime,
as is the case for the sector and filesystem services.
The \texttt{scope} and \texttt{rootward} fields in a service name specify the set of runtimes to
which a message may be delivered.
They allow the sender to express their intended recipient,
while still affording enough flexibility to the runtime to route messages as needed.
If \texttt{rootward} is \texttt{false},
the message is delivered to a service provider in a runtime that is directly contained in
\texttt{scope}.
If \texttt{rootward} is \texttt{true},
the parent directories of \texttt{scope} are searched,
working towards the root of the filesystem tree,
and the message is delivered to the first provider of \texttt{service} which is found.
When there are multiple service providers to which a given message could be delivered,
the one to which it is actually delivered is unspecified,
which allows the runtime to balance load.
Delivery will occur to at most one recipient,
even in the case that there are multiple potential recipients.
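The rootward search order can be sketched as pure path manipulation. In this hypothetical sketch (the function name is illustrative, and following the description above the scope itself is excluded), the candidate directories are the parents of \texttt{scope}, closest first, never rising above the domain's root directory:

```rust
// Sketch: enumerate the directories searched for a rootward delivery,
// from the parent of `scope` toward the root directory of the domain.
fn rootward_candidates(scope: &str) -> Vec<String> {
    let comps: Vec<&str> = scope.split('/').filter(|c| !c.is_empty()).collect();
    // Stop at the root principal's component; there is no directory above it.
    (1..comps.len())
        .rev()
        .map(|n| format!("/{}", comps[..n].join("/")))
        .collect()
}

fn main() {
    assert_eq!(
        rootward_candidates("/abc123/org/team"),
        vec!["/abc123/org", "/abc123"]
    );
    // The root directory itself has no parents to search.
    assert!(rootward_candidates("/abc123").is_empty());
    println!("ok");
}
```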
In order to contact other runtimes and deliver messages to them,
their network endpoint (IP address and UDP port) needs to be known.
This is achieved by maintaining a file with a runtime's endpoint address in the same directory as
the runtime.
The runtime is granted write permissions on the file,
and it is updated by \texttt{bttp} when it begins listening on a new endpoint.
The port a \texttt{bttp} server uses to listen for unicast connections is uniformly
randomly selected from the set of ports in the dynamic range (49152--65535) which are unused on the
server's host.
Use of a random port allows many different \texttt{bttp} servers to share a single IP address
and makes Blocktree more resistant to censorship.
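A minimal sketch of the port selection, under stated assumptions: a real implementation would draw from a proper random number generator, whereas this illustration seeds a starting offset from the clock and probes the dynamic range for a free port; the function name and the in-use set are hypothetical.

```rust
// Sketch: pick a listening port from the dynamic range (49152-65535),
// starting at a pseudo-random offset and skipping ports already in use.
use std::collections::HashSet;
use std::time::{SystemTime, UNIX_EPOCH};

const DYNAMIC_START: u16 = 49152;

fn pick_port(in_use: &HashSet<u16>) -> Option<u16> {
    let range = u32::from(u16::MAX - DYNAMIC_START) + 1; // 16384 ports
    // Clock-seeded offset; a real implementation would use a CSPRNG.
    let seed = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().subsec_nanos();
    (0..range)
        .map(|i| DYNAMIC_START + (seed.wrapping_add(i) % range) as u16)
        .find(|p| !in_use.contains(p))
}

fn main() {
    let mut in_use = HashSet::new();
    in_use.insert(50000);
    let port = pick_port(&in_use).unwrap();
    assert!(port >= DYNAMIC_START);
    assert_ne!(port, 50000);
    println!("ok");
}
```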
The services which are allowed to be registered in a given runtime are specified in the runtime's
file.
The runtime reads this list and uses it to deny service registrations for unauthorized services.
The list is also read by other runtimes when they're searching for service providers.
% The sector and filesystem service.
The filesystem is itself implemented as a service.
A filesystem service provider can be passed messages to delete files, list directory contents,
open files, or perform several other standard filesystem operations.
When a file is opened,
a new actor is spawned which owns the newly created file handle, and its name is returned to the
caller in a reply.
Subsequent read and write messages are sent to this actor.
The filesystem service does not persist any data itself;
its job is to function as an integration layer,
conglomerating sector data from many different sources into a single unified interface.
The sector service is what is ultimately responsible for storing data,
and thus maintaining the persistent state of the system.
It stores sector data in the local filesystem of each computer on which it is registered.
The details of how this is accomplished are deferred to the next section.
% Runtime queries.
While it is possible to resolve runtime paths to network endpoints when the filesystem is available,
another mechanism is needed to allow the filesystem service providers to be discovered.
This is accomplished by allowing runtimes to query one another to learn of other runtimes.
Because queries are intended to facilitate message delivery,
the query fields and their meanings mirror those used for addressing messages:
\begin{enumerate}
\item \texttt{service}: The path of the service whose providers are sought.
Only runtimes with this service registered will be returned.
\item \texttt{scope}: The filesystem path relative to which the query will be processed.
\item \texttt{rootward}: Indicates if the query should search for runtimes from \texttt{scope}
toward the root.
\end{enumerate}
The semantics of \texttt{scope} and \texttt{rootward} in a query are identical to their use in an
actor name.
As long as at least one other runtime is known,
a query can be issued to learn of more runtimes.
A runtime which receives a query may not be able to answer it directly.
If it cannot,
it returns the endpoint of the next runtime to which the query should be sent.
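To make the shape of a query concrete, here is a minimal sketch in Rust; the type and field names are illustrative only, mirroring the fields described above rather than the actual wire format:

```rust
// Illustrative sketch only: these names mirror the query fields described
// above, not the actual wire format.
type Endpoint = String;

pub struct Query {
    /// Path of the service whose providers are sought.
    pub service: String,
    /// Filesystem path relative to which the query is processed.
    pub scope: String,
    /// Whether to search from `scope` toward the root.
    pub rootward: bool,
}

// A queried runtime either answers directly or forwards the querier to the
// next runtime.
pub enum QueryReply {
    Providers(Vec<Endpoint>),
    Forward(Endpoint),
}

fn main() {
    let q = Query {
        service: "/svc/fs".into(),
        scope: "/org/dept".into(),
        rootward: true,
    };
    // A runtime that cannot answer directly forwards the querier instead.
    let reply = QueryReply::Forward("10.0.0.7:49512".into());
    assert!(q.rootward);
    assert!(matches!(reply, QueryReply::Forward(_)));
}
```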
% Bootstrap discovery methods.
In order to bootstrap the discovery process,
another mechanism is needed to find the first peer to query.
Several possibilities were explored for doing this.
One way is to use a blockchain to store the endpoints of the runtimes hosting the filesystem service
in the root directory.
As long as these runtimes can be located,
then all others can be found using the filesystem.
This idea may be worth revisiting in the future,
but the author wanted to avoid the complexity of implementing a new proof of work blockchain.
Instead, two independent mechanisms are used,
one that can discover runtimes over the internet as long as their path is known,
and another that can discover runtimes on the local network even when the discoverer does not know
their paths.
% Searching DNS for root principals.
When the path to a runtime is known,
DNS is used to resolve SRV records using a fully qualified domain name
(FQDN) derived from the path's root principal identifier.
The SRV records are resolved using the name \texttt{\_bttp.\_udp.<FQDN>},
where \texttt{<FQDN>} is the FQDN derived from the root principal's identifier.
One SRV record may be created for each of the filesystem service providers in the root
directory.
Each record contains the UDP port and hostname where a runtime is listening.
Every runtime is configured with a search domain that is used as a suffix in the FQDN.
The leading labels in the FQDN are computed by base32 encoding the binary representation of the
root principal's identifier.
If the encoded string is longer than 63 bytes (the limit for each label in a hostname),
it is separated into the fewest number of labels possible,
working from left to right along the string.
A dot followed by the search domain is concatenated onto the end of this string to form the FQDN.
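The derivation just described can be sketched as follows; the lowercase unpadded base32 alphabet and the helper names are assumptions, not the reference implementation:

```rust
// Unpadded, lowercase RFC 4648 base32. The exact encoding variant is an
// assumption; the text only specifies "base32".
const ALPHABET: &[u8] = b"abcdefghijklmnopqrstuvwxyz234567";

fn base32_encode(data: &[u8]) -> String {
    let (mut out, mut buf, mut bits) = (String::new(), 0u32, 0u32);
    for &b in data {
        buf = (buf << 8) | b as u32;
        bits += 8;
        while bits >= 5 {
            bits -= 5;
            out.push(ALPHABET[((buf >> bits) & 0x1f) as usize] as char);
        }
    }
    if bits > 0 {
        out.push(ALPHABET[((buf << (5 - bits)) & 0x1f) as usize] as char);
    }
    out
}

// Split the encoded identifier into the fewest labels of at most 63 bytes,
// working left to right, then append the search domain.
fn fqdn(principal_id: &[u8], search_domain: &str) -> String {
    let encoded = base32_encode(principal_id);
    let labels: Vec<&str> = encoded
        .as_bytes()
        .chunks(63)
        .map(|c| std::str::from_utf8(c).unwrap())
        .collect();
    format!("{}.{}", labels.join("."), search_domain)
}

fn main() {
    // A 64-byte identifier encodes to 103 base32 characters: two labels.
    let name = fqdn(&[0u8; 64], "bt.example.com");
    println!("_bttp._udp.{name}");
}
```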
This method has the advantages of being simple to implement
and allowing runtimes to discover each other over the internet.
Implementing this system would be facilitated by hosting DNS servers in actors in the same
runtimes as the root sector service providers.
Then, records could be dynamically created which point to these runtimes.
These runtimes would also need to be configured with static IP addresses,
and the NS records for the search domain would need to point to them.
Of course it is also possible to build such a system without hosting DNS inside of Blocktree.
The downside of using DNS is that it couples Blocktree with a centralized,
albeit distributed, system.
% Using link-local multicast datagrams to find runtimes.
Because the previous mechanism requires knowledge of the root principal of a domain to perform
discovery,
it will not work if a runtime is first starting up with no credentials and so does not know its
own root principal.
This runtime needs a way to discover other runtimes so it can connect to the filesystem and sector
services.
This issue is solved by using link-local multicast addressing to discover the runtimes on the same
network as the discoverer.
When a \texttt{bttp} server starts listening for unicast traffic,
it also listens for UDP datagrams on port 50142 at addresses 224.0.0.142 and FF02::142,
if the IPv4 or IPv6 networking stack is available, respectively.
If the host is attached to a dual-stack network,
the server listens on both addresses.
When a runtime is attempting to discover other runtimes,
it sends out datagrams to these endpoints.
Each \texttt{bttp} server replies with its unicast address and filesystem path
(as specified in its credentials).
If the server is available at both IPv4 and IPv6 unicast addresses,
it is at the server's discretion which address to respond with;
it may even respond with an IPv4 address to an IPv6 datagram,
or an IPv6 address to an IPv4 datagram.
Once a client has discovered the \texttt{bttp} servers on its network,
it can route messages to them,
such as the provisioning requests which are used to obtain new credentials.
Provisioning is described in the Cryptography section.
Note that port 50142 is in the dynamic range,
so it does not need to be registered with the Internet Assigned Numbers Authority (IANA).
Both addresses 224.0.0.142 and FF02::142 are currently unassigned,
but they will need to be registered with IANA if Blocktree is widely adopted.
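The IPv4 side of this listener can be sketched using only the standard library; the constant names are assumptions, the reply logic is elided, and the IPv6 group is written here as FF02::142 (the link-local multicast scope):

```rust
use std::net::{Ipv4Addr, Ipv6Addr, UdpSocket};

const DISCOVERY_PORT: u16 = 50142;
const DISCOVERY_V4: Ipv4Addr = Ipv4Addr::new(224, 0, 0, 142);
const DISCOVERY_V6: Ipv6Addr = Ipv6Addr::new(0xff02, 0, 0, 0, 0, 0, 0, 0x142);

// Join the IPv4 discovery group on the default interface. A real server
// would then loop over recv_from and reply with its unicast address and
// filesystem path.
fn listen_v4() -> std::io::Result<UdpSocket> {
    let sock = UdpSocket::bind((Ipv4Addr::UNSPECIFIED, DISCOVERY_PORT))?;
    sock.join_multicast_v4(&DISCOVERY_V4, &Ipv4Addr::UNSPECIFIED)?;
    Ok(sock)
}

fn main() {
    // Both discovery groups lie in the multicast ranges (224.0.0.0/4, ff00::/8).
    assert!(DISCOVERY_V4.is_multicast() && DISCOVERY_V6.is_multicast());
    match listen_v4() {
        Ok(_) => println!("listening on {DISCOVERY_V4}:{DISCOVERY_PORT}"),
        Err(e) => println!("multicast unavailable: {e}"),
    }
}
```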
% Security model for queries.
To allow runtimes which are not permitted to execute the root directory to query for other runtimes,
authorization logic which is specific to queries is needed.
If a process is connected with credentials,
and the path in the credentials contains the scope of the query,
the query is permitted.
If a process is connected anonymously,
its query will only be answered if the query scope,
and all of its parent directories,
grant others the execute permission.
Queries from authenticated processes can be authorized using only the information in the query,
but anonymous queries require knowledge of filesystem permissions,
some of which may not be known to the answering runtime.
When authorizing an anonymous query,
an answering runtime should check that the execute permission is granted on all directories
that it is responsible for storing.
If all these checks pass, it should forward the querier to the next runtime as usual.
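These rules can be sketched as follows; paths are plain strings here, and the permission lookup closure is a stand-in for real directory metadata:

```rust
// True when `child` equals `parent` or lies beneath it.
fn path_contains(parent: &str, child: &str) -> bool {
    let p = parent.trim_end_matches('/');
    child == p || child.starts_with(&format!("{p}/"))
}

// Authenticated query: permitted when the credential path contains the scope.
fn authorize_authenticated(cred_path: &str, scope: &str) -> bool {
    path_contains(cred_path, scope)
}

// Anonymous query: the scope and every parent directory must grant others
// the execute permission. Only locally stored directories can actually be
// checked; the rest are the next runtime's responsibility.
fn authorize_anonymous(scope: &str, others_exec: &dyn Fn(&str) -> bool) -> bool {
    let mut dir = String::from("/");
    if !others_exec(&dir) {
        return false;
    }
    for part in scope.split('/').filter(|p| !p.is_empty()) {
        if dir.len() > 1 {
            dir.push('/');
        }
        dir.push_str(part);
        if !others_exec(&dir) {
            return false;
        }
    }
    true
}

fn main() {
    assert!(authorize_authenticated("/org", "/org/dept"));
    assert!(!authorize_authenticated("/org/dept", "/org"));
    // Execute granted to others everywhere except /org/secret.
    let perm = |d: &str| d != "/org/secret";
    assert!(authorize_anonymous("/org/dept", &perm));
    assert!(!authorize_anonymous("/org/secret/docs", &perm));
}
```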
% Overview of protocol contracts and runtime checking of protocol adherence.
To facilitate the creation of composable systems,
a protocol contract checking system based on session types has been designed.
This system models a communication protocol as a directed graph representing state transitions
based on the types of received messages.
The protocol author defines the states that the actors participating in the protocol can be in using
Rust traits.
These traits define handler methods for each message type the actor is expected to handle in that
state.
A top-level trait representing the entire protocol is defined,
which contains the initial state types of every actor in the protocol.
A macro is used to generate the message handling loop for each of the parties to the protocol,
as well as enums to represent all possible states that the parties can be in and the messages that
they exchange.
The generated code is responsible for ensuring that errors are generated when a message of an
unexpected type is received,
eliminating the need for ad-hoc error handling code to be written by application developers.
% Example of a protocol contract.
Let's explore how this system can be used to build a simple pub-sub communications protocol.
In this protocol,
there will be a server which handles \texttt{Sub} messages by remembering the names of the actors
who sent them.
It will handle \texttt{Pub} messages by forwarding them to all of the subscribed actors.
The state-transition graph for the system is shown in figure \ref{fig:pubsub}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.6]{PubSubStateGraph.pdf}
\end{center}
\caption{The state-transition graph for a simple pub-sub protocol.}
\label{fig:pubsub}
\end{figure}
The solid edges in the graph indicate state transitions and are labeled with the message type
which triggers the transition.
The dashed edges indicate message delivery and are labeled with the type of the message delivered.
Although \texttt{Runtime} is not the state of any actor in the system,
it is included in the graph as the sender of the \texttt{Activate} and \texttt{Pub} messages.
\texttt{Activate} is delivered by the runtime to pass a reference to the runtime and provide the
actor's \texttt{Uuid}.
\texttt{Pub} messages are dispatched by actors outside the graph and are routed to actors in the
\texttt{Listening} state by the runtime.
Note that the runtime itself doesn't have any notion of the state of any actor;
it just delivers messages using the rules described previously.
Only an actor can tell whether a message is expected or not given its current state.
Each of the actor states is modeled by a Rust trait.
\begin{verbatim}
pub trait ClientInit {
    type AfterActivate: Subed;
    type Fut: Future<Output = Result<Self::AfterActivate>>;
    fn handle_activate(self, msg: Activate) -> Self::Fut;
}

pub trait Subed {
    type AfterPub: Subed;
    type Fut: Future<Output = Result<Self::AfterPub>>;
    fn handle_pub(self, msg: Envelope<Pub>) -> Self::Fut;
}

pub trait ServerInit {
    type AfterActivate: Listening;
    type Fut: Future<Output = Result<Self::AfterActivate>>;
    fn handle_activate(self, msg: Activate) -> Self::Fut;
}

pub trait Listening {
    type AfterSub: Listening;
    type SubFut: Future<Output = Result<Self::AfterSub>>;
    fn handle_sub(self, msg: Envelope<Sub>) -> Self::SubFut;

    type AfterPub: Listening;
    type PubFut: Future<Output = Result<Self::AfterPub>>;
    fn handle_pub(self, msg: Envelope<Pub>) -> Self::PubFut;
}
\end{verbatim}
The definition of \texttt{Activate} is as follows:
\begin{verbatim}
pub struct Activate {
    rt: &'static Runtime,
    act_id: Uuid,
}
\end{verbatim}
The \texttt{Envelope} type is a wrapper around a message which contains information about who sent
it and a method that can be used to send a reply.
In general a new actor state, represented by a new type, can be returned by a message handling
method.
The protocol itself is also represented by a trait:
\begin{verbatim}
pub trait PubSubProtocol {
    type Server: ServerInit;
    type Client: ClientInit;
}
\end{verbatim}
By modeling this protocol independently of any implementation of it,
we allow for many different interoperable implementations to be created.
We can also isolate bugs in these implementations because unexpected or malformed messages are
checked for by the generated code.
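To make the state transitions concrete, here is a deliberately simplified, synchronous sketch of the server side: the real handlers return futures and receive \texttt{Envelope}s, whereas here sender names are plain strings and deliveries are returned rather than routed by a runtime:

```rust
use std::collections::BTreeSet;

// Simplified Listening state: handlers consume self and return the next
// state, which is how session-typed transitions are enforced at compile time.
struct Listening {
    subscribers: BTreeSet<String>,
}

impl Listening {
    // Sub: remember the sender's name and remain in Listening.
    fn handle_sub(mut self, sender: String) -> Listening {
        self.subscribers.insert(sender);
        self
    }

    // Pub: forward the payload to every subscriber. Deliveries are returned
    // as (recipient, payload) pairs instead of being sent via a runtime.
    fn handle_pub(self, payload: &str) -> (Listening, Vec<(String, String)>) {
        let out = self
            .subscribers
            .iter()
            .map(|s| (s.clone(), payload.to_string()))
            .collect();
        (self, out)
    }
}

fn main() {
    let state = Listening { subscribers: BTreeSet::new() };
    let state = state.handle_sub("alice".into()).handle_sub("bob".into());
    let (_state, deliveries) = state.handle_pub("hello");
    assert_eq!(deliveries.len(), 2);
    println!("{deliveries:?}");
}
```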
% Implementing actors in languages other than Rust.
Today the actor runtime only supports executing actors implemented in Rust.
A WebAssembly (Wasm) plugin system is planned to allow any language which can compile to Wasm to be
used to implement an actor.
This work is blocked pending the standardization of the WebAssembly Component Model,
which promises to provide an interface definition language which will allow type safe actors to be
defined in many different languages.
% Running containers using actors.
Blocktree allows containers to be run by encapsulating them using a supervising actor.
This actor is responsible for starting the container and managing the container's kernel namespace.
Logically, it owns any kernel resources created by the container, including all spawned operating
system processes.
When the actor halts,
all of these resources are destroyed.
All network communication to the container is controlled by the supervising actor.
The supervisor can be configured to bind container ports to host ports,
as is commonly done today,
but it can also be used to encapsulate traffic to and from the container in Blocktree messages.
These messages are routed to other actors based on the configuration of the supervisor.
This essentially creates a VPN for containers,
ensuring that regardless of how well secured their communication is,
they will be safe to communicate over any network.
This network encapsulation system could be used in other actors as well,
allowing a lightweight and secure VPN system to be built.
% Web GUI used for managing the system.
Any modern computer system must include a GUI;
users require one.
For this reason Blocktree includes a web-based GUI called \texttt{btconsole} that can
monitor the system, provision runtimes, and configure access control.
\texttt{btconsole} is itself implemented as an actor in the runtime,
and so has access to the same facilities as any other actor.
\section{Filesystem}
% The division of responsibilities between the sector and filesystem services.
The responsibility for serving data in Blocktree is shared between the filesystem and sector
services.
Most actors will access the filesystem through the filesystem service,
which provides a high-level interface that takes care of the cryptographic operations necessary to
read and write files.
The filesystem service relies on the sector service for actually persisting data.
The individual sectors which make up a file are read from and written to the sector service,
which stores them in the local filesystem of the computer on which it is running.
A sector is the atomic unit of data storage,
and the sector service only supports reading and writing entire sectors at once.
File actors spawned by the filesystem service buffer reads and writes until there is enough
data to fill a sector.
Because cryptographic operations are only performed on full sectors,
the cost of providing these protections is amortized over the size of the sector.
Thus there is a tradeoff between latency and throughput when selecting the sector size of a file:
a smaller sector size means lower latency while a larger one enables higher throughput.
% Types of sectors: metadata, integrity, and data.
A file has a single metadata sector, a single Merkle sector, and zero or more data sectors.
The sector size of a file can be specified when it is created,
but cannot be changed later.
Every data sector contains ciphertext for a number of plaintext bytes equal to the sector size,
but the metadata and Merkle sectors contain a variable amount of data.
The metadata sector contains all of the filesystem metadata associated with the file.
In addition to the usual metadata present in any Unix filesystem (the contents of the \texttt{stat} struct),
cryptographic information necessary to verify and decrypt the contents of the file is also stored.
The Merkle sector of a file contains a Merkle tree over the data sectors of the file.
The hash function used by this tree can be configured at file creation,
but cannot be changed after the fact.
% How sectors are identified.
When sector service providers are contained in the same directory, they connect to each other to form
a consensus cluster.
This cluster is identified by a \texttt{u64} called the cluster's \emph{generation}.
Every file is identified by a pair of \texttt{u64} values: its generation and its inode.
The sectors within a file are identified by an enum which specifies which type they are,
and in the case of data sectors, their 0-based index.
\begin{verbatim}
pub enum SectorKind {
    Meta,
    Merkle,
    Data(u64),
}
\end{verbatim}
The byte offset in the plaintext of the file at which each data sector begins can be calculated by
multiplying the sector's index by the sector size of the file.
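The inverse mapping, from a plaintext offset to a sector, is equally simple; the helper below is hypothetical, not part of the service's API:

```rust
// Map a plaintext byte offset to a data sector index and the offset within
// that sector: the inverse of the calculation described above.
fn locate(offset: u64, sector_size: u64) -> (u64, u64) {
    (offset / sector_size, offset % sector_size)
}

fn main() {
    // With 4096-byte sectors, plaintext byte 10000 lives 1808 bytes into
    // data sector 2, which itself begins at byte 2 * 4096 = 8192.
    assert_eq!(locate(10_000, 4096), (2, 1808));
}
```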
The \texttt{SectorId} type is used to identify a sector.
\begin{verbatim}
pub struct SectorId {
    generation: u64,
    inode: u64,
    sector: SectorKind,
}
\end{verbatim}
% How the sector service stores data.
The sector service persists sectors in a directory in its local filesystem,
with each sector stored in a separate file.
The scheme used to name these files involves security considerations,
and is described in the next section.
When a sector is updated,
a new local file with a different name is created containing the new contents.
Rather than deleting the old sector file,
it is overwritten by the creation of a hardlink to the new file,
and the name that was used to create the new file is unlinked.
This method, which is used by other Unix programs,
ensures that the sector file is updated in one atomic operation.
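The same atomic-replace guarantee can be sketched with the closely related rename idiom (\texttt{rename(2)} atomically replaces the destination on POSIX systems); the hardlink-based variant described above differs only in how the new name is installed. File names here are illustrative:

```rust
use std::fs;

// Write the new contents under a temporary name, then atomically install
// them over the old sector file. Readers see either the old contents or
// the new, never a partial write.
fn update_sector(path: &str, contents: &[u8]) -> std::io::Result<()> {
    let tmp = format!("{path}.new");
    fs::write(&tmp, contents)?;
    fs::rename(&tmp, path)?; // atomic replace on POSIX
    Ok(())
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("sector-demo");
    let path = path.to_str().unwrap().to_string();
    fs::write(&path, b"old")?;
    update_sector(&path, b"new contents")?;
    assert_eq!(fs::read(&path)?, b"new contents");
    fs::remove_file(&path)?;
    Ok(())
}
```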
The sector service also uses the local filesystem to persist the replicated log it uses for Raft.
This file serves as a journal of sector operations.
% Types of messages handled by the sector service.
Communication with the sector service is done by passing it messages of type \texttt{SectorMsg}.
\begin{verbatim}
pub struct SectorMsg {
    id: SectorId,
    op: SectorOperation,
}

pub enum SectorOperation {
    Read,
    Write(WriteOperation),
}

pub enum WriteOperation {
    Meta(Box<FileMeta>),
    Data {
        meta: Box<FileMeta>,
        contents: Vec<u8>,
    },
}
\end{verbatim}
Here \texttt{FileMeta} is the type used to store metadata for files.
Note that updated metadata is required to be sent when a sector's contents are modified.
% Scaling horizontally: using Raft to create consensus cluster. Additional replication methods.
A generation of sector service providers uses the Raft protocol to synchronize the state of the
sectors it stores.
The message passing interface of the runtime enables this implementation,
and the sector service's requirements were important considerations in designing this interface.
The system currently replicates all data to each of the service providers in the cluster.
Additional replication methods are planned for future implementation
(e.g. erasure encoding and distribution via consistent hashing),
which allow for different tradeoffs between data durability and storage utilization.
% Scaling vertically: how different generations are stitched together.
The creation of a new generation of the sector service is accomplished in several steps.
First, a new directory is created in which the generation will be located.
Next, one or more processes are credentialed for this directory,
using a procedure which is described in the next section.
The credentialing process produces files for each of the processes,
which are stored in the new directory.
The sector service provider in each of the processes uses the filesystem service
(which connects to the parent generation of the sector service)
to find the other runtimes hosting the sector service in the directory, and messages them to
establish a fully-connected cluster.
Finally, the service provider which is elected leader contacts the generation in the root directory
and requests a new generation number.
Once this number is known, it is stored in the superblock for the generation,
which is the file identified by the new generation number and inode 2.
The superblock is not contained in any directory and cannot be accessed outside the sector service.
The superblock also keeps track of the next inode to assign to a new file.
% Authorization logic of the sector service.
To prevent malicious actors from writing invalid data,
the sector service must cryptographically verify all write messages.
The process it uses to do this involves several steps:
\begin{enumerate}
\item The certificate chain in the metadata that was sent in the write message is validated.
It is considered valid if it ends with a certificate signed by the root principal
and the paths in the certificates are correctly nested,
indicating valid delegation of write authority at every step.
\item Using the last public key in the certificate chain,
the signature in the metadata is validated.
This signature covers all of the fields in the metadata.
\item The new sector contents in the write message are hashed using the digest function configured
for the file and the resulting hash is used to update the file's Merkle tree in its Merkle
sector.
\item The root of the Merkle tree is compared with the integrity value in the file's metadata.
The write message is considered valid if and only if there is a match.
\end{enumerate}
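The core of steps 3 and 4 can be sketched as follows, with the standard library's \texttt{DefaultHasher} standing in for the file's configured digest function (a real implementation would use SHA-256 or SHA-512):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy digest; a stand-in for the configured cryptographic hash.
fn digest(data: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    data.hash(&mut h);
    h.finish()
}

// Fold a level of sector hashes up to a single root, duplicating the last
// node when a level has an odd number of entries.
fn merkle_root(mut level: Vec<u64>) -> u64 {
    while level.len() > 1 {
        level = level
            .chunks(2)
            .map(|pair| {
                let right = *pair.get(1).unwrap_or(&pair[0]);
                digest(&[pair[0].to_le_bytes(), right.to_le_bytes()].concat())
            })
            .collect();
    }
    level[0]
}

// Steps 3 and 4: hash the new sector contents, update the tree, and accept
// the write only if the recomputed root matches the signed integrity value.
fn verify_write(hashes: &mut Vec<u64>, index: usize, contents: &[u8], integrity: u64) -> bool {
    hashes[index] = digest(contents);
    merkle_root(hashes.clone()) == integrity
}

fn main() {
    let mut hashes = vec![digest(b"s0"), digest(b"s1"), digest(b"s2")];
    let integrity = merkle_root(hashes.clone());
    assert!(verify_write(&mut hashes, 1, b"s1", integrity));
    assert!(!verify_write(&mut hashes, 1, b"tampered", integrity));
}
```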
This same logic is used by file actors to verify the data they read from the sector service.
Only once a write message is validated is it shared with the sector service provider's peers in
its generation.
Although the data in a file is encrypted,
it is still beneficial for security to prevent unauthorized principals from gaining access to a
file's ciphertext.
To prevent this, a sector service provider checks a file's metadata to verify that the requesting
principal actually has a readcap (to be defined in the next section) for the file.
This ensures that only principals that are authorized to read a file can gain access to the file's
ciphertext, metadata, and Merkle tree.
% File actors are responsible for cryptographic operations. Client-side encryption.
The sector service is relied upon by the filesystem service to read and write sectors.
Filesystem service providers communicate with the sector service to open files and perform
filesystem operations.
These providers spawn file actors that are responsible for verifying and decrypting the information
contained in sectors and providing it to other actors.
They use the credentials of the runtime they are hosted in to decrypt sector data using
information contained in file metadata.
File actors are also responsible for encrypting and integrity protecting data written to files.
In order for a file actor to produce a signature over the root of the file's Merkle tree,
it maintains a copy of the tree in memory.
This copy is read from the sector service when the file is opened.
While this does mean duplicating data between the sector and filesystem services,
this design was chosen to reduce the network traffic between the two services,
as the entire Merkle tree does not need to be transmitted on every write.
Encapsulating all cryptographic operations in the filesystem service and file actors allows the
computer storing data to be different from the computer encrypting it.
This approach allows client-side encryption to be done on more capable computers
and low powered devices to delegate this task to a storage server.
% Prevention of resource leaks through ownership.
A major advantage of using file actors to access file data is that they can be accessed over the
network from a different runtime as easily as they can be from the same runtime.
One complication arising from this approach is that file actors must not outlive the actor which
caused them to be spawned.
This is handled in the filesystem service by making the actor which opened the file the owner of the
file actor.
When a file actor receives notification that its owner has returned,
it flushes any buffered data in its cache and returns,
ensuring that a resource leak does not occur.
% Encrypted metadata. Extended attributes in metadata. Cache control.
Some of the information stored in metadata needs to be kept in plaintext to allow the sector
service to verify and decrypt the file,
but most of it is encrypted using the same key as the file's contents.
The file's authorization attributes, its size, and its access times are all encrypted.
The table storing the file's extended attributes (EAs) is also encrypted.
Cache control information is included in this area as well.
It specifies the number of seconds, as a \texttt{u32}, that a file may be cached.
The filesystem service uses this information to evict sectors from its cache when they have been
cached for longer than this threshold,
causing them to be reloaded from the sector service.
% Authorization logic of the filesystem service.
The filesystem service uses an \texttt{Authorizer} type to make authorization decisions.
It passes this type the authorization attributes of the principal accessing the file, the
attributes of the file, and the type of access (read, write, or execute).
The \texttt{Authorizer} returns a boolean indicating if access is permitted or denied.
These access control checks are performed for every message processed by the filesystem service,
including opening a file.
A file actor only responds to messages sent from its owner,
which ensures that it can avoid the overhead of performing access control checks as these were
carried out by the filesystem service when it was created.
The file actor is configured when it is spawned to allow read only, write only, or read write
access to a file,
depending on what type of access was requested by the actor opening the file.
% Streaming replication.
Often when building distributed systems it is convenient to alert any interested party that an event
has occurred.
To facilitate this pattern,
the sector service allows actors to subscribe for notification of writes to a file.
The sector service maintains a list of actors which are currently subscribed,
and when it commits a write to its local storage,
it sends each of them a notification message identifying the sector written
(but not the written data).
By using different files to represent different events,
a simple notification system can be built.
Because the contents of a directory may be distributed over many different generations,
this system does not support the recursive monitoring of directories.
Although this system lacks the power of \texttt{inotify} in the Linux kernel,
it does provide some of its benefits without incurring much of a performance overhead
or implementation complexity.
For example, this system can be used to implement streaming replication.
This is done by subscribing to writes on all the files that are to be replicated,
then reading new sectors as soon as notifications are received.
These sectors can then be written into replica files in a different directory.
This ensures that the contents of the replicas will be updated in near real-time.
% Peer-to-peer distribution of sector data.
Because of the strong integrity protection afforded to sectors,
it is possible for peer-to-peer distribution of sector data to be done securely.
Implementing this mechanism is planned as a future enhancement to the system.
The idea is to base the design on BitTorrent,
where the sector service responsible for a file acts as a tracker for that file,
and the file actors accessing the file communicate with one another directly using the information
provided by the sector service.
This could allow the system to scale to a much larger number of concurrent reads by reducing
the load on the sector service.
% The FUSE daemon.
Being able to access the filesystem from actors allows a programmer to implement new applications
using Blocktree,
but there is an entire world of existing applications which only know how to access the local
filesystem.
To allow these applications access to Blocktree,
a FUSE daemon called \texttt{btfuse} is included which allows a Blocktree directory to be mounted
to a directory in the local filesystem.
This daemon can directly access the sector files in a local directory,
or it can connect over the network to a filesystem or sector service provider.
This FUSE daemon could be included in a system's initrd to allow it to mount its root filesystem
from Blocktree,
opening up many interesting possibilities for hosting machine images in Blocktree.
A planned future enhancement is to develop a Blocktree filesystem driver which runs in
kernel space.
This would reduce the overhead of context switching from user space to kernel space
and back again for every filesystem interaction,
making the system more practical to use for a root filesystem.
\section{Cryptography}
This section describes the cryptographic mechanisms used to protect the integrity and
confidentiality of files.
These mechanisms are based on well-established cryptographic constructions.
% Integrity protection.
File integrity is protected by a digital signature over its metadata.
The metadata contains an integrity field holding the root node of a Merkle tree over
the file's contents.
This allows any sector in the file to be verified with a number of hash function invocations that
is logarithmic in the size of the file.
It also allows the sectors of a file to be verified in any order,
enabling random access.
The hash function used in the Merkle tree can be configured when the file is created.
Currently, SHA-256 is the default, and SHA-512 is supported.
  866. A file's metadata also contains a certificate chain,
  867. and this chain is used to authenticate the signature over the metadata.
  868. In Blocktree, the certificate chain is referred to as a \emph{writecap}
  869. because it grants the capability to write to files.
  870. The certificates in a valid writecap are ordered by their paths,
  871. the initial certificate contains the longest path,
  872. the path in each subsequent certificate must be a prefix of the one preceding it,
  873. and the final certificate must be signed by the root principal.
  874. These rules ensure that there is a valid delegation of write authority at every
  875. link in the chain,
  876. and that the authority is ultimately derived from the root principal specified by the absolute path
  877. of the file.
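The path-ordering rule can be sketched as follows. This is only the structural check; signature verification at each link is elided, and the \texttt{Cert} type here is a hypothetical simplification, not Blocktree's actual certificate type.

```rust
// A certificate reduced to the only field the ordering rule needs: its path,
// represented as a list of components (hypothetical simplification).
struct Cert {
    path: Vec<String>,
}

fn cert(parts: &[&str]) -> Cert {
    Cert { path: parts.iter().map(|s| s.to_string()).collect() }
}

/// Checks the writecap ordering rule: the initial certificate has the longest
/// path, and each subsequent certificate's path is a prefix of the one before
/// it. The final certificate's root signature check is elided.
fn valid_writecap_order(chain: &[Cert]) -> bool {
    chain.windows(2).all(|w| w[0].path.starts_with(&w[1].path))
}
```

A chain such as \texttt{/apps/web/proc1} → \texttt{/apps/web} → \texttt{/} satisfies the rule, while any reordering of it does not.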
By including all the information necessary to verify the integrity of a file in its metadata,
it is possible for a requestor who only knows the path of a file to verify that the contents of the
file are authentic.
% Confidentiality protecting files with readcaps. Single pubkey operation to read a dir tree.
Confidentiality protection of files is optional, but when it is enabled,
a file's sectors are individually encrypted using a symmetric cipher.
The key to this cipher is randomly generated when the file is created.
A different IV is generated for each sector by hashing the index of the sector with a
randomly generated IV for the entire file.
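The per-sector IV derivation can be sketched in a few lines. FNV-1a again stands in for the configured cryptographic hash, solely to keep the example runnable without dependencies.

```rust
// Toy stand-in for the configured hash (SHA-256 or SHA-512 in practice).
fn fnv1a(data: &[u8]) -> u64 {
    let mut h: u64 = 0xcbf29ce484222325;
    for &b in data {
        h ^= b as u64;
        h = h.wrapping_mul(0x100000001b3);
    }
    h
}

/// IV for sector n = H(file_iv || n): deterministic per sector, but distinct
/// across sectors, so sectors can be encrypted and decrypted independently.
fn sector_iv(file_iv: &[u8], sector_index: u64) -> u64 {
    let mut input = file_iv.to_vec();
    input.extend_from_slice(&sector_index.to_be_bytes());
    fnv1a(&input)
}
```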
A file's key and IV are encrypted using the public keys of the principals to whom read access is
to be granted.
The resulting ciphertext is referred to as a \emph{readcap}, as it grants the capability to read the
file.
These readcaps are stored in a table in the file's metadata.
Each entry in the table is identified by a byte string that is derived from the public key of the
principal who owns the entry's readcap.
The byte string is computed by calculating an HMAC of the principal's public key.
The HMAC is keyed with a randomly generated salt that is stored in the file's metadata.
An identifier for the hash function that was used in the HMAC is included in the byte string so
that the HMAC can be recomputed later.
When the filesystem service accesses the file,
it recomputes the HMAC using the salt, its public key, and the hash function specified in each entry
of the table.
It can then identify the entry which contains its readcap,
or determine that no such entry exists.
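The table lookup can be sketched as follows. \texttt{keyed\_hash} is a toy keyed hash standing in for the salted HMAC, and the struct fields are illustrative rather than Blocktree's actual metadata layout; the point is that the salt keys the identifier, so the same public key maps to different byte strings in different files.

```rust
use std::collections::HashMap;

// Toy keyed hash standing in for HMAC-SHA-256 keyed with the file's salt.
fn keyed_hash(salt: &[u8], data: &[u8]) -> u64 {
    let mut h: u64 = 0xcbf29ce484222325;
    for &b in salt.iter().chain(data) {
        h ^= b as u64;
        h = h.wrapping_mul(0x100000001b3);
    }
    h
}

struct ReadcapTable {
    salt: Vec<u8>,                  // random salt stored in the file's metadata
    entries: HashMap<u64, Vec<u8>>, // identifier -> encrypted file key and IV
}

impl ReadcapTable {
    fn grant(&mut self, pubkey: &[u8], readcap: Vec<u8>) {
        self.entries.insert(keyed_hash(&self.salt, pubkey), readcap);
    }

    // The filesystem service recomputes the identifier from its own public
    // key to find its readcap, or to learn that no entry exists for it.
    fn lookup(&self, pubkey: &[u8]) -> Option<&Vec<u8>> {
        self.entries.get(&keyed_hash(&self.salt, pubkey))
    }
}
```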
This mechanism was designed to prevent offline correlation attacks on file metadata,
as metadata is stored in plaintext in local filesystems.
The file key and IV are also encrypted using the keys of the file's parents.
Note that a file may have multiple parents because it may be hard linked into several
directories.
Each of the resulting ciphertexts is stored in another table in the file's metadata.
The entries in this table are identified by an HMAC of the parent's generation and inode numbers,
where the HMAC is keyed using the file's salt.
By encrypting a file's key and IV using the key and IV of its parents,
it is possible to traverse a directory tree using only a single public key decryption.
The file where this traversal begins must contain a readcap owned by the accessing principal,
but all subsequent accesses can be performed by decrypting the key and IV of a child using the
key and IV of its parent.
Not only does this allow traversals to use more efficient symmetric key cryptography,
but it also means that granting a readcap on a single directory suffices to grant
access to the entire tree rooted at that directory.
% File key rotation and readcap revocation.
Because it is not possible to change the key used by a file after it is created,
a file must be copied in order to rotate the key used to encrypt it.
Similarly, revoking a readcap is accomplished by creating a copy of the file
and adding all the readcaps from the original's metadata except for the one being revoked.
While it is certainly possible to remove a readcap from the metadata table,
this is not supported because the readcap holder may have used custom software to save the file's
key and IV while it had access to them,
so data written to the same file after revocation could potentially be decrypted by it.
By forcing the user to create a new file,
they are forced to re-encrypt the data using a fresh key and IV.
% Obfuscating sector files stored in the local filesystem.
From an attacker's perspective,
not every file in your domain is equally interesting.
They may be particularly interested in reading your root directory,
or they may have identified the inode of a file containing kompromat.
To make it more difficult to determine offline which files the sectors in the local filesystem
belong to,
an obfuscation mechanism is used.
This works by generating a random salt for each generation of the sector service
and storing it in the generation's superblock.
The salt is hashed along with the inode and the sector ID to produce the file name of the sector
file in the local filesystem.
These files are arranged into different subdirectories according to the value of the first two
digits in the hex encoding of the resulting hash,
the same way git organizes object files.
This simple method makes it more difficult for an attacker to identify the files each sector belongs
to,
while still allowing the sector service efficient access.
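The naming scheme can be sketched as below: hash the salt, inode, and sector ID together, hex encode the result, and peel off the first two hex digits as a subdirectory, git-style. FNV-1a is again a stand-in for the real hash.

```rust
// Toy stand-in for the real hash used by the sector service.
fn fnv1a(data: &[u8]) -> u64 {
    let mut h: u64 = 0xcbf29ce484222325;
    for &b in data {
        h ^= b as u64;
        h = h.wrapping_mul(0x100000001b3);
    }
    h
}

/// Produces the obfuscated relative path of a sector file, e.g. "ab/cd…",
/// so the names reveal nothing about the inode or sector without the salt.
fn sector_file_path(salt: &[u8], inode: u64, sector_id: u64) -> String {
    let mut input = salt.to_vec();
    input.extend_from_slice(&inode.to_be_bytes());
    input.extend_from_slice(&sector_id.to_be_bytes());
    let hex = format!("{:016x}", fnv1a(&input));
    format!("{}/{}", &hex[..2], &hex[2..]) // first two hex digits pick the subdirectory
}
```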
% Credential stores.
Processes need a way to securely store their credentials.
They accomplish this by using a credential store,
which is a type that implements the trait \texttt{CredStore}.
A credential store provides methods for using a process's credentials to encrypt, decrypt,
sign, and verify data,
but it does not allow them to be exported.
A credential store also provides a method for generating new root credentials.
Because root credentials represent the root of trust for an entire domain,
it must be possible to securely back them up from one credential store to another.
Root credentials can also be used to perform cryptographic operations without exporting them.
A password is set when the root credentials are generated,
and this same password must be provided to use, export, and import them.
When root credentials are exported from a credential store, they are confidentiality protected
using multiple layers of encryption.
The outermost layer is encryption by a symmetric key cipher whose key is derived from the
password.
A public key of the receiving credential store must also be provided when root credentials are
exported.
This public key is used to perform the inner encryption of the root credentials,
ensuring that only the intended credential store is able to import them.
Currently there are two \texttt{CredStore} implementors in Blocktree,
one which is used for testing and one which is more secure.
The first is called \texttt{FileCredStore},
and it uses a file in the local filesystem to store credentials.
A symmetric cipher is used to protect the root credentials, if they are stored,
but it relies on the security of the underlying filesystem to protect the process credentials.
For this reason it is not recommended for production use.
The other credential store is called \texttt{TpmCredStore},
and it uses a Trusted Platform Module (TPM) 2.0 on the local machine to store credentials.
The TPM is used to generate the process's credentials in such a way that they can never be
exported from the TPM (this is a feature of TPM 2.0).
A randomly generated cookie is needed to use these credentials.
The cookie is stored in a file in the local filesystem with its permissions set to prevent
others from accessing it.
Thus this type also relies on the security of the local filesystem,
but an attacker would need to steal both the TPM and this cookie in order to steal a process's
credentials.
% Manual provisioning via the command line.
The term provisioning is used in Blocktree to refer to the process of acquiring credentials.
A command line tool called \texttt{btprovision} is provided for provisioning credential stores.
This tool can be used to generate new process or root credentials, create a certificate request
using them, issue a new certificate, and finally to import the new certificate chain.
When setting up a new domain,
\texttt{btprovision} can create a new sector storage directory in the local filesystem
and write the new process's files to it.
It is also capable of connecting to the filesystem service if it is already running.
% Automatic provisioning.
While manual provisioning is necessary to bootstrap a domain,
an automatic method is needed to make this process more ergonomic.
When a runtime starts, it checks its configured credential store to find the certificate chain to
use for authenticating to other runtimes.
If no such chain is stored,
the runtime can choose to request a certificate from the filesystem service.
This is done by dispatching a message with \texttt{call} to the filesystem service without
specifying a scope.
Because the message specifies no path, there is no root directory at which to begin discovery.
So, the runtime resorts to using link-local discovery to find other runtimes.
Once one is discovered,
the runtime connects to it anonymously
and sends it a certificate request.
This request includes a copy of the runtime's public key and, optionally, a path where the
runtime would like to be located.
This path is purely advisory;
the filesystem service is free to place the runtime in any directory it sees fit.
The filesystem service creates a new process file containing the public key and marks it as
pending.
The reply to the runtime contains the path of the file created for it.
The operators of the domain can then use the web GUI or \texttt{btprovision} to view the request
and approve it at their discretion.
Assuming an operator approves the request,
they use their credentials and the public key in the new process's file to issue a certificate
and then store it in the file.
Authorization attributes (e.g. UID and GID) are also assigned to the process and written into its
file.
Note that a process's file is normally not writeable by the process itself,
so as to prevent it from setting its own authorization attributes.
Once these data have been written to the process file,
the runtime can read them to retrieve its new certificate chain.
It stores this chain in its credential store for later use.
The runtime can avoid polling its file for changes by subscribing to write notifications.
The runtime must close the anonymous connections it made
and reconnect using the new certificate chain.
Once new connections are established,
it can read and write files using the authorization attributes specified in its file.
Note that this procedure only works when the runtime is on the same LAN as another runtime.
% The generation of new root credentials and the creation of a new domain.
The procedure for creating a new domain is straightforward,
and all the steps can be performed using \texttt{btprovision}.
\begin{enumerate}
\item Generate the root credentials for the new domain.
\item Generate the credentials for the first runtime.
\item Create a certificate request using the runtime credentials.
\item Approve the request using the root credentials.
\item Import the new certificate into the credential store of the first runtime.
\end{enumerate}
The first runtime is configured to host the sector and filesystem services,
so that subsequent runtimes will have access to the filesystem.
After that, additional runtimes on the same LAN can be provisioned using the automatic process.
% Setting up user based access control.
Up to now the focus has been on authentication and authorization of processes,
but it bears discussing how user based access control can be accomplished with Blocktree.
Because credentials are locked to the device on which they're created,
a user will have at least as many principals as they have devices.
But all of these principals can be configured to have the same authorization attributes (UID, GID),
giving them the same permissions.
It makes sense to keep the files for all of the provisioned runtimes associated with a user in one
place,
and the natural place is the user's home directory.
Although every one of the user's processes needs to be provisioned,
this is not a huge limitation because a single runtime can host many different actors,
implementing many different applications.
Managing the users in a domain is facilitated by putting their home directories in a single user
directory for the domain.
Runtimes hosting the sector service on storage servers could then be provisioned in this directory
to provide the sector and filesystem services for the users' home directories.
It would be at the administrators' discretion whether or not to enable client-side encryption.
If they wanted to,
the principal of at least one of a user's runtimes would need to be issued a readcap for the
user's home directory.
This runtime could then directly access the sector service in the domain's user directory.
By moving encryption onto the user's computer,
load can be shed from the storage servers.
Note that this setup does require all of the user's runtimes to be able to communicate with the
runtime whose principal was issued the readcap.
% Example of how these mechanisms allow data to be shared.
To illustrate how these mechanisms can be used to facilitate collaboration between enterprises,
consider a situation where two companies wish to partner on the development of a product.
To facilitate their collaboration,
they wish to have a way to securely exchange data with each other.
One of the companies is selected to host the data
and accepts the cost and responsibility of serving it.
The host company creates a directory which will be used to store all of the data created during
development.
The other company will connect to the filesystem service in the host company's domain to access
data in the shared directory.
Each of the principals in the other company which wishes to connect requests to be credentialed in
the shared directory.
The hosting company manually reviews these requests and approves them,
assigning each of the principals authorization attributes appropriate for its domain.
This may involve issuing UID and GID values to each of the principals, or perhaps SELinux contexts.
The actual set of attributes supported is determined by the \texttt{Authorization} type used by
the filesystem service in the host company's domain.
Once the principals have their credentials,
they can dispatch messages to the filesystem service using the shared directory as the scope and
setting the rootward field to true.
This allows actors authenticating with the credentials of these principals to perform all filesystem
operations authorized by the hosting company.
This situation gives the hosting company a lot of control over the data.
If the other company wishes to protect its investment in the R\&D effort,
it should subscribe to write events on the shared directory and the files in it so that it can
copy new sectors out of the host company's domain as soon as they are written.
Note that although it is not possible to directly subscribe to writes on the contents of a
directory, by monitoring a directory for changes,
one can begin monitoring files as soon as they are created.
\section{Examples}
This section contains examples of systems that could be built using Blocktree.
The hope is to illustrate how this platform can be used to implement existing applications more
easily and to make it possible to implement systems which are currently out of reach.
\subsection{A distributed AI execution environment.}
Neural networks are just vector-valued functions with vector inputs,
albeit very complicated ones with potentially billions of parameters.
But, just like any other computation,
these functions can be conceptualized as computational graphs.
Imagine that you have a set of computers equipped with AI accelerator hardware
and you have a neural network that is too large to be processed by any one of them.
By partitioning the graph into small enough subgraphs,
we can break the network down into pieces which can be processed by each of the accelerators.
The full network can be stitched together by passing messages between each of these pieces.
Let us consider how this could be accomplished with Blocktree.
We begin by provisioning a runtime on each of the accelerator machines,
each of which will have a new accelerator service registered.
Messages will be sent to the accelerator service describing the computational graph to execute,
as well as the name of the actor to which the output is to be sent.
When such a message is received by an accelerator service provider,
it spawns an actor which compiles its subgraph to a kernel for its accelerator
and remembers the name of the actor to send its output to.
An orchestrator service will be responsible for partitioning the graph and sending these messages.
Ownership of the actors spawned by the accelerator service is given to the orchestrator service,
ensuring that they will all be stopped when the orchestrator returns.
When one of the spawned actors stops,
it unloads the kernel from the accelerator's memory and returns it to its initial state.
Note that the orchestrator actor must have execute permissions on each of the accelerator runtimes
in order to send messages to them.
The orchestrator dispatches messages to the accelerator service in reverse order of the flow of data
in the computational graph,
so that it can tell each service provider where its output should be sent.
The actors responsible for the last layer in the computational graph send their output to the
orchestrator.
To begin the computation,
the actors which are responsible for input are given the filesystem path of the input data.
The orchestrator learns of the completion of the computation once it receives the output from the
final layer.
It can then save these results to the filesystem and return.
Because inference and training can both be modeled by computational graphs,
this same procedure can be used for both.
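The orchestrator's reverse-order dispatch can be sketched as a reversed topological sort of the dataflow graph: sinks are configured first, so every subgraph already knows where to send its output before the subgraphs feeding it are started. This is a generic sketch (Kahn's algorithm over node IDs), not Blocktree's actual orchestrator code.

```rust
use std::collections::{HashMap, VecDeque};

/// Returns the order in which the orchestrator should dispatch configuration
/// messages: reverse topological order of the dataflow edges (a, b) meaning
/// "a's output flows to b". Sinks come first, sources last.
fn dispatch_order(edges: &[(u32, u32)], nodes: &[u32]) -> Vec<u32> {
    let mut indeg: HashMap<u32, usize> = nodes.iter().map(|&n| (n, 0)).collect();
    let mut succ: HashMap<u32, Vec<u32>> = HashMap::new();
    for &(a, b) in edges {
        succ.entry(a).or_default().push(b);
        *indeg.get_mut(&b).unwrap() += 1;
    }
    // Kahn's algorithm: repeatedly emit nodes with no remaining predecessors.
    let mut queue: VecDeque<u32> = nodes.iter().copied().filter(|n| indeg[n] == 0).collect();
    let mut topo = Vec::new();
    while let Some(n) = queue.pop_front() {
        topo.push(n);
        for &m in succ.get(&n).into_iter().flatten() {
            let d = indeg.get_mut(&m).unwrap();
            *d -= 1;
            if *d == 0 {
                queue.push_back(m);
            }
        }
    }
    topo.reverse(); // dispatch sinks first, sources last
    topo
}
```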
\subsection{A decentralized social media network.}
One of the original motivations for designing Blocktree was to create a platform for a social
network that puts users fully in control of their data.
In the opinion of the author,
the only way to actually accomplish this is for users to host the data themselves.
One might think it is possible to use client-side encryption to solve the privacy issue,
but this does not solve the full problem.
While it is true that good client-side encryption will prevent the service provider from reading
the user's data,
the user could still lose everything if the service provider goes out of business or simply
decides to stop offering its service.
Similarly, putting data in a federated system, as has been proposed by the Mastodon developers,
also puts the user at risk of losing their data if the operator of the server they use decides to
shut it down.
To have real control, the user must host the data themselves.
Then they decide how it's encrypted, how it's served, and to whom.
Let us explore how Blocktree can be used to build a social media platform which provides this
control.
To participate in this network, each user will need to set up their own domain by generating new
root credentials
and provisioning at least one runtime to host the social media service.
A technical user could do this on their own hardware by reading the Blocktree documentation,
but a non-technical user might choose to purchase a new router with Blocktree pre-installed.
By connecting this router directly to their WAN,
the user ensures that the services running on it will always have direct internet access.
The user can access the \texttt{btconsole} web GUI via the router's WiFi interface to generate their
root credentials and provision new runtimes on their network.
A basic function of any social network is keeping track of a user's contacts.
This would be handled by maintaining the contacts as files in a well-known directory in the user's
domain.
Each file in the directory would be named using the user defined nickname for the contact,
and its contents would include the root principal of the contact as well as any additional user
defined attributes,
such as address or telephone number.
The root principal would be used to discover runtimes controlled by the contact
so that messages can be sent to the social media service running in them.
When a user adds a new contact,
a connection message would be sent to it,
which the contact could choose to accept or reject.
If accepted,
the contact would create an entry in its contacts directory for the user.
The contact's social media service would then accept future direct messages from the user.
When the user sends a direct message to the contact,
its runtime discovers runtimes controlled by the contact and delivers the message.
Once delivered, the contact's social media service stores the message in a directory for the user's
correspondence,
sort of like an mbox directory but where messages are sorted into directories based on sender
instead of receiver.
Note that this procedure only works if a contact's root principal can be resolved using the
search domain configured in the user's runtime.
We can ensure this is the case by configuring the runtime to use a search domain that operates
a Dynamic DNS (DDNS) service
and by arranging with this service to create the correct records to resolve the root principal.
The author intends to operate such a service to facilitate the use of Blocktree by home users,
but a more long-term solution is to implement a blockchain for resolving root principals.
Only then would the system be fully decentralized.
Making public posts is accomplished by creating files in a directory with the HTML contents of the
post.
This file, the directory containing it, and all parents of it,
would be configured to allow others to read, and in the case of directories, execute them.
At least one runtime with the filesystem service registered would need to have the execute
permission granted to others to allow anyone to access these files.
When someone wanted to view the posts of another user,
they would use the filesystem service to read these files from the well-known posts directory.
Of course, users would not be using a file manager to interact with this social network;
they would use their browsers as they do now.
This web interface would be served by the social media service in their domain.
A normal user who has a Blocktree enabled router would just type a special hostname into their
browser to open this interface.
Because the router provides DNS services to their network,
it can generate the appropriate records to ensure this name resolves to the address where the social
media service is listening.
The social media service would be responsible for sending messages to other users' domains to
get their posts,
and for reading the filesystem to display the user's direct messages.
All this file data would be used to populate the web interface.
It is not hard to see how the same system could be used to serve any type of media: text, images,
video, immersive 3D worlds.
All of these can be stored in files in the filesystem,
and so all of them are accessible to Blocktree actors.
One issue that must be addressed with this design is how it will scale to a large number of users
accessing data at once.
In other words,
what happens if the user goes viral?
Currently, the way to solve this would be to add more computers to the user's network which run
the sector and filesystem services.
This is not ideal, as it means the user would need to buy more hardware to serve their dank memes.
A better solution would be to implement peer-to-peer distribution of sector data in the filesystem
service.
This would reduce the load on the user's computers and allow their followers to share the posted
data with each other.
This work is planned as a future enhancement.
\subsection{A smart lock.}
The access control language provided by Blocktree's filesystem can be used for more than just
authorizing access to data.
To illustrate this point,
consider a smart lock installed on the front door of a company's office building.
When the company first got the lock, they used NFC to configure it
and connect it to their WiFi network.
The lock then used link-local runtime discovery to perform automatic provisioning.
An IT administrator accessed \texttt{btconsole} to approve the provisioning request
and position the lock in a specific directory in the company's domain.
Permission to actuate the lock is granted if a principal has execute permission on the lock's file.
To verify the physical presence of an employee,
NFC is used for the authentication handshake.
When an employee presses their NFC device, for instance their phone, to the lock,
the lock generates a nonce and transmits it to the device.
The device then signs the nonce using the credentials it used during provisioning in the company's
domain.
It transmits this signature to the lock along with the path to the principal's file in the domain.
The lock then reads this file to obtain the principal's authorization attributes and its public key.
It uses the public key to validate the signature presented by the device.
If this is successful,
it then checks the authorization attributes of the principal against the authorization attributes on
its own file.
If execute permissions are granted, the lock actuates, allowing the employee access.
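The final authorization decision can be sketched as a Unix-style permission check. The field names here are illustrative, assuming a UID/GID \texttt{Authorization} type as discussed earlier; signature verification is elided.

```rust
// Hypothetical simplification of the authorization attributes read from the
// principal's file and the attributes on the lock's own file.
struct Authorization {
    uid: u32,
    gids: Vec<u32>,
}

struct FileAttrs {
    uid: u32,
    gid: u32,
    mode: u16, // Unix-style permission bits
}

/// The lock actuates only if the principal holds execute permission on the
/// lock's file: owner, group, or other execute bit, checked in that order.
fn may_actuate(principal: &Authorization, lock_file: &FileAttrs) -> bool {
    if principal.uid == lock_file.uid {
        lock_file.mode & 0o100 != 0 // owner execute
    } else if principal.gids.contains(&lock_file.gid) {
        lock_file.mode & 0o010 != 0 // group execute
    } else {
        lock_file.mode & 0o001 != 0 // other execute
    }
}
```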
The administrators of the company's domain create a group specifically for controlling physical
access to the building.
All employees with physical access permission are added to this group,
and the group is granted execute permission on the lock,
rather than individual users.
\subsection{A traditional three-tier web application.}
While it is hoped that Blocktree will enable interesting and novel applications,
it can also be used to build the kind of web applications that are common today.
Suppose that we wish to build a three-tier web application.
Let us explore how Blocktree could help.
First, let us consider which database to use.
It would be desirable to use a traditional SQL database,
preferably one which is open source and not owned by a large corporation with dubious motivations.
These constraints lead us to choose Postgres,
but Postgres was not designed to run on Blocktree.
However, Postgres does have a container image available on Docker Hub,
so we can create a service to run this container image in our domain.
But Postgres stores all of its data in the local filesystem of the machine it runs on.
How can we ensure this does not become a single point of failure?
First, we should create a directory in our domain to hold the Postgres cluster directory.
Then we should procure at least three servers for our storage cluster
and provision runtimes hosted on each of them in this directory.
The sector service is registered on each of the runtimes,
so all the data stored in the directory will be replicated on each of the servers.
Now, the Postgres service should be registered in one and only one of these runtimes,
as Postgres requires exclusive access to its database cluster.
\texttt{btfuse} will be used to mount the Postgres directory to a path in the local filesystem,
and the Postgres container will be configured to access it.
We now have to decide how other parts of the system are going to communicate with Postgres.
We could have the Postgres service set up port forwarding for the container,
so that ordinary network connections can be used to talk to it.
But we would have to set up TLS if we want this to be secure.
The alternative is to use Blocktree as a VPN and proxy network communications in messages.
This is accomplished by registering a proxy service in the same runtime as the Postgres service
and configuring it to allow traffic it receives to pass to the Postgres container on TCP port 5432.
In a separate directory,
a collection of runtimes is provisioned which will host the webapp service.
This service will use axum to serve the static assets of our site,
including the Wasm modules which make up our frontend,
as well as our site's backend.
In order to do this,
it will need to connect to the Postgres database.
This is accomplished by registering the proxy service in each of the runtimes hosting the
webapp service.
The proxy service is configured to listen on TCP 127.0.0.1:5432 and forward all traffic
to the proxy service in the Postgres directory.
The webapp can then use the \texttt{tokio-postgres} crate to establish a TCP connection to
127.0.0.1:5432,
and it will end up talking to the containerized Postgres instance.
Although the data in our database is stored redundantly,
we still have a single point of failure in our system,
namely the Postgres container.
To handle this we can implement a failover service.
It will work by calling the Postgres service with heartbeat messages.
If too many of these time out,
we assume the service is dead and start a new instance on one of the other runtimes in the Postgres
directory.
This new instance will have access to all the same data as the old one,
including its journal file.
Assuming it can complete any in-progress transactions,
the new service will come up after a brief delay
and the system will recover.
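The failover policy itself is small enough to sketch: declare the service dead only after a configurable number of consecutive heartbeat timeouts, so a single dropped message does not trigger a spurious restart. The message dispatch is elided; this is a generic sketch, not the failover service's actual code.

```rust
// Tracks heartbeat outcomes and decides when to start a replacement
// Postgres instance.
struct FailoverMonitor {
    consecutive_timeouts: u32,
    threshold: u32, // how many consecutive timeouts mean "dead"
}

impl FailoverMonitor {
    /// Feed in the outcome of each heartbeat call; returns true when a new
    /// instance should be started on another runtime in the directory.
    fn record(&mut self, heartbeat_answered: bool) -> bool {
        if heartbeat_answered {
            self.consecutive_timeouts = 0; // any answer resets the count
        } else {
            self.consecutive_timeouts += 1;
        }
        self.consecutive_timeouts >= self.threshold
    }
}
```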
  1321. \subsection{A realtime geo-spacial environment.}
% Motivation
If we are to believe science fiction,
then the natural evolution of computer interaction is the development
of a persistent virtual world that we use to communicate, conduct business, and
enjoy our leisure.
This kind of system has been a dream for a long time,
but as it has grown closer to becoming a reality,
the popular consciousness has shifted against it.
People are rightly horrified by the idea of giving control over their virtual worlds to the same
social media company which has an established track record of causing societal harm.
But this technology does not need to be dystopian.
If an open system that actually works can be built,
it can prevent the market from accepting a closed system designed to lock in user attention
and monetize them relentlessly.
This is the future;
it is only a question of who will own it.
% Coordinates
Let us explore how Blocktree could be used to build such a system.
The world we are going to render will be a planet with a roughly spherical surface and a
configurable radius $\rho$.
$\rho$ is a \texttt{u32} value whose units are meters.
We will use latitude ($\phi$) and longitude ($\lambda$) in radians to describe the locations of
points on the surface.
Both $\phi$ and $\lambda$ will take \texttt{f64} values.
The elevation of a point will be given by $h$,
which is the deviation from $\rho$.
$h$ is measured in meters and takes values in \texttt{i32}.
So, the distance from the center of the planet to the point ($\phi$, $\lambda$, $h$) is
$\rho + h$.
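These definitions translate directly into code. A minimal sketch in Rust, with a standard spherical-to-Cartesian conversion added as an assumption (any renderer needs one, but it is not specified here):

```rust
/// Coordinates as defined above: latitude phi and longitude lambda in
/// radians (f64), elevation h in meters (i32) as a deviation from the
/// planet's radius rho (u32, meters).
pub struct SurfacePoint {
    pub phi: f64,    // latitude, radians
    pub lambda: f64, // longitude, radians
    pub h: i32,      // elevation, meters
}

/// Distance from the planet's center to the point: rho + h.
pub fn radial_distance(rho: u32, p: &SurfacePoint) -> f64 {
    rho as f64 + p.h as f64
}

/// Standard spherical-to-Cartesian conversion (an assumption, not part
/// of the text), with the z axis running through the poles.
pub fn to_cartesian(rho: u32, p: &SurfacePoint) -> [f64; 3] {
    let r = radial_distance(rho, p);
    [
        r * p.phi.cos() * p.lambda.cos(),
        r * p.phi.cos() * p.lambda.sin(),
        r * p.phi.sin(),
    ]
}
```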
% Directory organization. Quadtrees.
The data describing how to render a planet consists of its terrain mesh, terrain textures, and
the objects on its surface.
This could represent a very large amount of data for a planet with realistic terrain populated by
many structures.
To facilitate sharding the information in a planet over many different servers,
the planet is broken into disjoint regions,
each of which is stored in its own directory.
A single top-level directory represents the entire planet,
and contains a manifest describing it.
This manifest specifies the planet's name, its radius, its rotational period,
the size of its regions in MB, as well as any
other global attributes.
This top-level directory also contains the texture for the sky box used to render the view of
space from the planet.
In the future it may be interesting to explore the creation of more dynamic environments surrounding
the planet,
but a simple sky box has the advantage of being efficient.
The data in a planet is recursively broken into the fewest number of regions such that the
amount of data in each region is less than a configured threshold.
When a region grows too large it is broken into four new regions by cutting it along the
centerline parallel to the $\phi$ axis and the one parallel to the $\lambda$ axis.
In other words, it is divided in half north to south and east to west.
The four new regions are stored in four subdirectories of the original region's directory,
named 0, 1, 2, and 3.
The data in the old region is then moved into the appropriate directory based on its location.
Thus the directory tree of a planet essentially forms a quadtree,
albeit one which is built up progressively.
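The split rule can be sketched as follows. The child numbering (0 = south-west, 1 = south-east, 2 = north-west, 3 = north-east) is an assumption, since the text does not fix which subdirectory corresponds to which quadrant:

```rust
/// A region's bounding box in (phi, lambda) space, in radians.
#[derive(Clone, Copy)]
pub struct Region {
    pub phi_min: f64,
    pub phi_max: f64,
    pub lambda_min: f64,
    pub lambda_max: f64,
}

/// Which of the four children (subdirectories 0..=3) a point falls in
/// after the region is cut along both centerlines. The numbering is an
/// illustrative assumption: 0 = SW, 1 = SE, 2 = NW, 3 = NE.
pub fn child_index(r: &Region, phi: f64, lambda: f64) -> u8 {
    let phi_mid = (r.phi_min + r.phi_max) / 2.0;
    let lambda_mid = (r.lambda_min + r.lambda_max) / 2.0;
    let north = (phi >= phi_mid) as u8;
    let east = (lambda >= lambda_mid) as u8;
    2 * north + east
}

/// Bounding box of child `i`, so the split can be applied recursively
/// as the quadtree deepens.
pub fn child_region(r: &Region, i: u8) -> Region {
    let phi_mid = (r.phi_min + r.phi_max) / 2.0;
    let lambda_mid = (r.lambda_min + r.lambda_max) / 2.0;
    Region {
        phi_min: if i >= 2 { phi_mid } else { r.phi_min },
        phi_max: if i >= 2 { r.phi_max } else { phi_mid },
        lambda_min: if i % 2 == 1 { lambda_mid } else { r.lambda_min },
        lambda_max: if i % 2 == 1 { r.lambda_max } else { lambda_mid },
    }
}
```

Moving the old region's data into the right subdirectory is then just a matter of computing \texttt{child\_index} for each item's location.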
% Region data files.
In the leaf directories of this tree the actual data for a region are stored in two files,
one which describes the terrain and the other which describes objects.
It is expected that the terrain will rarely be modified,
but that the objects may change regularly.
The terrain file contains the mesh vertices in the region as well as its textures.
It is organized as an R-tree to allow for efficient spatial queries based on player location.
The region's objects file is also organized as an R-tree.
It contains all of the graphical data for the objects to be rendered in the region,
such as meshes, textures, and shaders.
% Plots.
The creation of a shared virtual world must involve players collaboratively building persistent
structures.
This is allowed in a controlled way by defining plot objects.
A plot is like a symbolic link:
it points to a file whose contents contain the data used to render the plot.
This mechanism allows the owner of the planet to delegate a specific area on the surface
to another player by creating a plot defining that area and pointing it to a file owned by the
player.
The other player can then write meshes, textures, and shaders into this file to describe the
contents of the plot.
If the other player wishes to collaborate with others on the construction,
they can grant write access on the file to a third party.
This is not unlike the ownership of land in the real world.
% LOD files in interior directories.
To facilitate viewing the planet from many distances,
each interior node in the planet's directory tree contains a reduced level of detail (LOD) version
of the terrain contained in it.
For example, the top-level directory contains the lowest LOD mesh and textures for the terrain.
This LOD would be suitable for rendering the planet as a globe on a shelf,
or as it would appear from a high orbit.
By traversing the directory tree,
the LOD can be increased as the player travels closer to the surface.
This system assists with rendering an animation where the player appears to approach and land upon
the planet's surface.
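A client might choose how deep to descend into the tree with a rule like the following sketch. The thresholds are illustrative tuning constants, not part of the design; the only idea taken from the text is that descending one level roughly doubles the available detail, so halving the altitude warrants one more level.

```rust
/// Choose how many levels to descend into the planet's quadtree based on
/// the viewer's altitude above the surface. `10_000_000` meters (roughly
/// a high orbit) is an assumed tuning constant, not part of the design.
pub fn lod_depth(altitude_m: f64, max_depth: u32) -> u32 {
    // At or above `far` we render the top-level (whole-planet) mesh.
    let far = 10_000_000.0;
    if altitude_m >= far {
        return 0;
    }
    let mut depth = 0;
    let mut threshold = far;
    // Each halving of the altitude threshold buys one more tree level,
    // until we hit the leaves (max_depth).
    while depth < max_depth && altitude_m < threshold {
        depth += 1;
        threshold /= 2.0;
    }
    depth
}
```

During a landing animation, calling this each frame with the shrinking altitude yields the progressively deeper directories to fetch.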
% Sharding planet data.
By dividing the planet's data into different leaf directories,
it becomes possible to provision computers running the sector service in each of them.
This divides the storage and bandwidth requirements for serving the planet over this set of
servers.
In addition to serving these data,
another service is needed to keep track of player positions and execute game logic.
Game clients address their messages using the directory of the region their player is located
in, and set \texttt{rootward} to true.
These messages are delivered to the game server closest to the region the player is in,
which may be located in the region's directory or higher up the tree.
When a player transitions from one region to the next,
its game client begins addressing messages using the path of the new region as the scope.
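A sketch of the client-side addressing, with illustrative field names and paths (Blocktree's actual message type is not specified here):

```rust
/// Sketch of a client-side message address. Field names are
/// illustrative, not Blocktree's real message type.
pub struct Address {
    pub scope: String,  // path of the region the player is currently in
    pub rootward: bool, // deliver to the closest service at or above scope
}

/// When the player crosses into a neighboring region, the client simply
/// swaps the scope path; rootward stays true so the message still reaches
/// the nearest game server at or above the new directory.
pub fn on_region_change(addr: &mut Address, new_region_path: String) {
    addr.scope = new_region_path;
    addr.rootward = true;
}
```

Because delivery is resolved by the tree, the client does not need to know which server actually hosts the game logic for the new region.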
\section{Conclusion}
% Blocktree serves as the basis for building distributed Unix.
There have been many attempts to create a distributed Unix over the years.
Time has shown that this is a very hard problem,
but time has not diminished its importance.
IT systems are more complex now than ever,
with many layers of abstraction which have built up over time.
We have suffered greatly from systems which were never designed to be secure on the hostile internet
that exists today.
Security has been bolted onto these systems (HTTPS, STARTTLS, DNSSEC)
in a backwards-compatible way,
which results in weakened protections.
What's worse,
the entire trust model of the web relies on the ludicrous idea that there is a distinguished group
of certificate authorities who have the power to secure our communications.
We need to take a different approach.
Data should be certified by its path,
it must always be transported between processes in an authenticated manner,
and user code should never have to care how this is accomplished!
Time will tell whether the programming model of Blocktree is comprehensible and useful for
developers,
but the goal is to create the kind of easy-to-extend computing environment which allowed Unix to
be successful.
% The system enables individuals to self-host the services they rely on.
These days, the typical internet user stores all of their important data in the cloud with
third-party service providers.
They do this because of the convenience of being able to access this information from anywhere,
and because of the perceived safety in having a large internet company look after it for them.
This convenience comes at the price of putting users at the mercy of these companies.
Take email for example,
a service which is universally used for account recovery and password reset.
If a service provider decided to stop providing a user access to their email,
the user would be effectively cut off from any website which sends login verification emails.
This is not a hypothetical situation;
such an incident has occurred (TODO: INSERT CITATION FROM LVL1).
There is no technical reason for things to be this way.
Blocktree allows users to host their own services in their own domain.
If we can make setting up an email or VOIP server as simple as clicking a button in a web GUI,
there will be no convenience advantage to cloud services.
One challenge for self-hosting data is ensuring it is protected from loss when hardware inevitably
fails.
The data redundancy in Blocktree's sector layer ensures that the loss of any one storage
device will not cause data loss.
Streaming replication can also be used to maintain additional redundant copies.
If more users begin hosting their own services,
the internet will become more distributed,
which will make it more resistant to disruption and centralized control.
% Benefits to businesses.
Cloud computing has also driven changes to the way businesses acquire computing resources.
It is common for startups to rent all of their computing resources from one large cloud
provider.
There are compelling economic and technical reasons to do this,
but as a firm grows it often experiences growing pains as its cloud bills also grow.
If the firm has not developed its software with a multi-cloud or hybrid approach in mind,
it may face the prospect of major changes in order to bring its application on-prem or to a
rival cloud.
By developing their application on Blocktree,
businesses have a single platform to target which can run on rented computers in the cloud just as
easily as on servers in their own data center.
This ensures the choice to rent or buy can be made on a purely economic basis.
Blocktree is not the only system that provides this flexibility.
The portability of containers is one of the reasons they have become so popular.
Containers have their place and will most likely be used for years to come,
but they are a lower-level abstraction which requires the developer to solve the problems that
Blocktree handles.
% Blocktree advances the status quo in secure computing.
Ransomware attacks and data breaches are embarrassingly common these days.
There are many reasons for this,
from the reliance on passwords for authentication, to the complexity of the software supply chain,
but it is clear that as IT professionals we need to do more to keep the systems under our
protection safe.
Blocktree helps to do this by solving many of the difficult problems involved with securing
communication on a hostile network.
It takes a true zero-trust approach,
ensuring that all communication between processes is authenticated using public key cryptography.
Data at rest is also secured with encryption and integrity protection.
No security system can prevent all attacks,
but by putting these mechanisms together in an easy-to-use platform,
we can advance the status quo of secure computing.
% Composability leads to emergent benefits.
When Unix was first developed in the 1970s, its authors could not have foreseen the applications
that would be enabled by their system.
Although there have been many different kinds of Unices over the years,
the core programming model, built around the filesystem, has remained since the beginning.
It is a testament to the importance of this abstraction that it has persisted for so long.
No designer can foresee all the ways that their abstractions will be used,
but they can try to build them in such a way that as much choice is left to the user as possible.
By making the actor model, and message passing, the core of Blocktree,
it is hoped that low-overhead communication between distributed components can be achieved.
By using this system to provide a global distributed filesystem,
it is hoped that the interoperable sharing of data can be achieved.
And by using protocol contracts to constrain actor communication,
it is hoped that the structure and safety of type theory can bring order to distributed
computation.
While it is possible to see some of the applications that can be built from these abstractions,
it seems likely that their composability and the creativity of developers will enable systems that
cannot be foreseen.
\end{document}