Although large-scale Internet services such as eBay and Google Maps have revolutionized the Web, today it takes a large organization with tremendous resources to turn a prototype or idea into a robust distributed service that can be relied on by millions.

Our vision is to enable one person to invent and run the next revolutionary IT service, turning a new business idea into an operational multi-million-user service over the course of a long weekend. In doing so, we hope to enable an Internet "Fortune 1 million".

To do this, we will systematize what has become the de facto standard process for developing, assessing, deploying, and operating such services, by bringing to bear powerful techniques from statistical machine learning (SML) as well as recent insights from networking and distributed systems.

Our platform is the modern datacenter. We see the “datacenter operating system” as a split between virtual machines, which provide the OS mechanism, and SML, which provides the overarching policy. To inform the SML policy maker, we provide tools that collect sensor data from all the hardware and software components of the datacenter; to give the policy maker actions to take, we provide actuators that shut down, reboot, or migrate services inside the datacenter. Additional technologies to fulfill the vision include workload generators and application simulators that can record the behavior of proprietary systems and then recreate it in a research environment.
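As a concrete illustration of this split between mechanism and policy, the sketch below pairs sensor readings with actuator commands in a single control step. The component names and the simple threshold rule are illustrative assumptions only; in the proposed system the policy maker would be a statistical machine learning model, not a hand-written rule.

```python
# Hypothetical sketch of the sense -> policy -> actuate loop described above.
# Node names, metric names, and thresholds are illustrative assumptions.

def read_sensors(nodes):
    """Collect one metric sample per node (stubbed with fixed data here)."""
    return {node: {"cpu_util": load} for node, load in nodes.items()}

def decide_actions(samples, high=0.9, low=0.05):
    """Toy stand-in for the SML policy: map sensor data to actuator commands."""
    actions = []
    for node, metrics in samples.items():
        if metrics["cpu_util"] > high:
            actions.append(("migrate", node))   # offload work from a hot node
        elif metrics["cpu_util"] < low:
            actions.append(("shutdown", node))  # reclaim idle capacity
    return actions

def control_step(nodes):
    """One iteration of the datacenter control loop: sense, decide, act."""
    return decide_actions(read_sensors(nodes))

actions = control_step({"node1": 0.95, "node2": 0.50, "node3": 0.01})
print(actions)  # [('migrate', 'node1'), ('shutdown', 'node3')]
```

The point of the split is that the actuators stay dumb and reusable while all of the intelligence lives in the replaceable policy function.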

Our plan is to borrow technology whenever possible. Hence, we see Ruby on Rails as the likely programming language of the datacenter, abstractions like Chubby and MapReduce as its libraries, and services like BigTable, the Google File System, and Amazon’s Simple Storage Service as its storage layer.
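To make the "library of the datacenter" role concrete, here is a minimal in-process sketch of the MapReduce pattern, applied to the classic word count. This is not Google's actual API; the function names and signatures are illustrative assumptions, with the real system distributing the map and reduce phases across many machines.

```python
from collections import defaultdict

# Minimal single-process sketch of the MapReduce pattern.
# The map_reduce / mapper / reducer names are illustrative, not Google's API.

def map_reduce(records, mapper, reducer):
    # Map phase: each input record emits zero or more (key, value) pairs.
    intermediate = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            intermediate[key].append(value)
    # Reduce phase: combine all values that share a key.
    return {key: reducer(key, values) for key, values in intermediate.items()}

# Classic word count over a tiny "corpus".
docs = ["to be or not to be"]
counts = map_reduce(
    docs,
    mapper=lambda doc: [(word, 1) for word in doc.split()],
    reducer=lambda word, ones: sum(ones),
)
print(counts)  # {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```

The appeal of the abstraction is exactly what makes it a good datacenter library: the programmer supplies only the two pure functions, and the runtime owns partitioning, scheduling, and fault tolerance.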

When we cannot borrow technology, our guideline is to look for ways to leverage SML in solving the problem.