Founder at Arkhai, research at the intersection of distributed computing, game theory, and autonomous agent systems.
Hi, I'm Levi. I'm the founder of Arkhai, where I'm building the foundations of the machine economy. My background spans game theory, distributed computing, autonomous agent systems, energy-water and energy-compute economics, and digital twins and simulation. At a high level, my work focuses on designing incentive structures for machine economies.
I started a company now called Arkhai, which creates building blocks for the machine economy.
The future of commerce is agentic, but our marketplaces are built for humans. An economy where the primary buying and selling choices are made by agents requires a rethinking of how discovery, market-making, settlement, and negotiation will occur.
These building blocks make it easy to create many different kinds of marketplaces, saving the time and effort of building each one from scratch.
This is decentralized finance, but it differs from traditional "DeFi" in that we do not default to automated market makers, auctions, or solvers; instead, we take mass-scale bilateral and multilateral negotiation as the starting point for market-making.
I worked on some cool crypto stuff, including protocols to improve incentives in academic environments using prediction markets, the economics of long-term data storage, and mechanism design for distributed computing. The most interesting work was on the game theory of optimistic verification at Protocol Labs.
Problem: in a two-sided marketplace for compute, how can the buyer (the outsourcing party) be convinced that the seller behaved honestly and returned the correct result, when the only tool available for checking correctness is redoing the computation (under the assumption of reproducibility)?
The existing literature did not address real-world conditions: almost all prior research relied on analytic solutions to characterize the desirable equilibria, but rested on assumptions that do not hold in practice, so the mathematical guarantees break down when deployed.
I proposed a more empirical framework [14], akin to game-theoretic white-hat hacking: agents are trained with multi-agent reinforcement learning to maximize their utilities in these environments, and various anti-cheating mechanisms are then stress-tested to see which are robust [23] [24].
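As an illustration of the underlying verification game, here is a toy model with made-up parameters (not the framework from [14] or the trained MARL agents): a seller either does the work or cheats, and the buyer re-runs the computation with some audit probability.

```python
# Toy verification game; all constants are illustrative assumptions.
COST_COMPUTE = 1.0   # seller's cost of honestly doing the job
REWARD = 1.5         # payment on acceptance
PENALTY = 10.0       # slashed deposit if caught cheating

def expected_utility(cheat: bool, audit_prob: float) -> float:
    """Seller's expected utility when the buyer re-runs the
    computation (audits) with probability audit_prob."""
    if not cheat:
        return REWARD - COST_COMPUTE
    # A cheater skips the work and is caught only when audited.
    return (1 - audit_prob) * REWARD - audit_prob * PENALTY

def min_audit_rate() -> float:
    """Smallest audit probability making honesty a best response:
    solve (1-p)*R - p*P = R - C for p, giving p = C / (R + P)."""
    return COST_COMPUTE / (REWARD + PENALTY)

print(min_audit_rate())  # ~0.087: auditing ~9% of jobs deters cheating here
```

The analytic literature stops at thresholds like this; the empirical approach instead lets learned agents search for strategies that exploit the gap between such clean models and messy deployed systems.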
In the summer of 2017 I discovered BOINC and Gridcoin. The former is a distributed computing platform that grew out of SETI@home, connecting scientists who need massive embarrassingly parallel compute with volunteers who have idle computing resources. Gridcoin was a cryptocurrency that rewarded contributions to scientific computing efforts like SETI@home, BOINC, and Folding@home. I became really interested in its reward structure and started doing independent research in mechanism design for distributed computing [3] [4] [5].
The problem: how to design a "get out what you put in"-style reward rule for scientific computing projects spanning almost any kind of computation on almost any subset of almost any kind of computer. It is similar to Bitcoin's reward mechanism, but with the proof-of-work puzzle replaced by scientific computations.
I discovered a generalization of Bitcoin's proportional allocation rule [12] to highly heterogeneous hardware and software environments [6]. The reward mechanism uses a compute index over the theoretical maximum output of the network's machines on each type of task distributed on the network. The core algorithm incentivizes using hardware in the way that maximizes useful work done relative to the capabilities of the rest of the network, and is intrinsically tied to the energy consumption of the machines on the network.
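A highly simplified sketch of the flavor of such a rule (hypothetical names and numbers; the real mechanism normalizes against a compute index of theoretical maximum output per task type, which is elided here):

```python
BLOCK_REWARD = 100.0  # tokens minted per block (made-up figure)

def rewards(work: dict) -> dict:
    """work[task_type][machine] = useful work units completed.
    Each task type gets an equal share of the block reward, split
    proportionally among that type's contributors; this is Bitcoin's
    proportional rule applied per task type."""
    per_type = BLOCK_REWARD / len(work)
    out = {}
    for contributions in work.values():
        total = sum(contributions.values())
        for machine, w in contributions.items():
            share = per_type * w / total if total else 0.0
            out[machine] = out.get(machine, 0.0) + share
    return out

# Two machines contributing to two projects:
print(rewards({"folding": {"a": 3.0, "b": 1.0},
               "seti":    {"a": 0.0, "b": 2.0}}))
# → {'a': 37.5, 'b': 62.5}
```

Because each machine is paid relative to the rest of the network's output on the same task type, the best strategy is to point hardware at whatever it does best relative to everyone else, which is the property the full mechanism generalizes.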
This involved messing around with GPUs a bit: I put together two GPU servers from scratch (AMD FirePros in Supermicro 1028-series chassis, chosen to maximize double-precision FLOPS for the cost), bought a milk-crate mining rig, and ran a single-board computer (an Odroid). When selling the consumer cards afterwards, I also learned how to refurbish GPUs, which was fun. I played around a bit with CUDA and OpenCL, translating a 2D Ising model simulation from CUDA to OpenCL [34].
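For flavor, here is the serial skeleton of that simulation in plain Python (a minimal sketch, not the CUDA or OpenCL kernels from [34]). The GPU versions parallelize the same Metropolis update with a checkerboard decomposition, so that no two neighboring spins are updated in the same pass:

```python
import math
import random

def sweep(spins: list, beta: float) -> None:
    """One Metropolis sweep of a 2D Ising lattice (periodic boundaries).
    Spins are +1/-1; updates go sublattice by sublattice (checkerboard),
    which is what makes the GPU versions embarrassingly parallel."""
    n = len(spins)
    for color in (0, 1):                       # the two checkerboard sublattices
        for i in range(n):
            for j in range(n):
                if (i + j) % 2 != color:
                    continue
                nb = (spins[(i + 1) % n][j] + spins[(i - 1) % n][j]
                      + spins[i][(j + 1) % n] + spins[i][(j - 1) % n])
                dE = 2 * spins[i][j] * nb      # energy change if this spin flips
                if dE <= 0 or random.random() < math.exp(-beta * dE):
                    spins[i][j] *= -1
```

At low temperature (large `beta`) an ordered lattice stays ordered; on a GPU, each sublattice pass maps naturally to one kernel launch.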
At the time my primary motivation was environmentalism. I was in university majoring in physics, chemistry, and math, originally intending to do research in materials science for green tech. I tried out research at the intersection of computer science and environmental engineering, which is how I fell in love with algorithms.
The main topic of my research was how to optimize power plant production on river networks [1] [10] under thermal pollution constraints. Power plants on river networks heat the water they use for cooling, which affects plants downstream, and there are also legal and environmental limits on how hot the river can get.
I discovered algorithms that are optimal when plant production is linearly adjustable, and derived generalizations of the knapsack problem for the discrete case.
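A minimal sketch of how the discrete case reduces to a 0/1 knapsack (an illustrative simplification with a single aggregate heat budget; the actual problem has downstream temperature coupling along the river network, and all names and numbers here are assumptions):

```python
def max_power(power: list, heat: list, budget: int) -> float:
    """Choose which plants to run: plant i either runs at full output
    power[i], adding heat[i] integer heat units to the river, or stays
    off; total added heat must stay within the regulatory budget.
    Standard 0/1 knapsack dynamic program over heat units."""
    best = [0.0] * (budget + 1)
    for p, h in zip(power, heat):
        for b in range(budget, h - 1, -1):   # iterate downward: each plant used once
            best[b] = max(best[b], best[b - h] + p)
    return best[budget]

# Three plants under a heat budget of 5 units:
print(max_power([10.0, 7.0, 5.0], heat=[4, 3, 2], budget=5))  # → 12.0
```

In the linearly adjustable case the analogous single-budget relaxation falls to a greedy rule (fill by power-per-unit-heat ratio), which is one reason the continuous problem admits exact efficient algorithms while the discrete one inherits knapsack hardness.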
Many environmental problems do need better technological solutions, but I came to see that many of society's problems more broadly stem from bad incentive structures.
I did some research in biochemistry, designing algorithms to search for chemical compounds in large databases [11].
In my personal capacity, I became very interested in game theory and its applications in solving human coordination problems in cooperative contexts.