I noticed this morning that several blogs I follow featured posts on Facebook’s new data center. While a new data center is typically not newsworthy, Facebook’s approach is: they have released their custom server and data center designs to the broader community as the Open Compute Project. Their stated goal:
By releasing Open Compute Project technologies as open hardware, our goal is to develop servers and data centers following the model traditionally associated with open source software projects.
Our first step is releasing the specifications and mechanical drawings. The second step is working with the community to improve them.
Government IT managers, architects, and engineers should pay attention and participate in this group, and here’s why: Facebook has a profit motive to improve both environmental efficiency and compute efficiency per processor, so it invests R&D funding into its data centers, and the results are visible in these documents: innovative server designs, streamlined motherboards, Ethernet-powered rack lighting, efficient plant designs, and more. I have toured my fair share of data centers, both private and government-run. Private data centers frequently restrict access to the floor and refuse to share specifics of their infrastructure design, and government data centers are typically less innovative than anything I see in these documents. My intent is not to knock government data centers; given their limited funding, they are competitive. Rather, my point is that with the ability to replicate industry best practices published in an open source format, these data centers could drive down costs and improve services.
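To make “environmental efficiency” concrete, the standard metric here is Power Usage Effectiveness (PUE): total facility power divided by the power that actually reaches the IT equipment. Facebook has reported a PUE near 1.07 for the facility these documents describe, versus roughly 1.9 for a typical enterprise data center. A quick sketch of the arithmetic (the typical-facility numbers below are illustrative assumptions, not figures from the OCP documents):

```python
# Power Usage Effectiveness (PUE): total facility power divided by
# power delivered to IT equipment. Lower is better; 1.0 is the ideal.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

# Illustrative numbers: a typical enterprise facility (~1.9 PUE, an
# assumed ballpark) vs. the ~1.07 Facebook has reported for Prineville.
typical = pue(total_facility_kw=1900, it_equipment_kw=1000)    # ~1.90
optimized = pue(total_facility_kw=1070, it_equipment_kw=1000)  # ~1.07

# For the same IT load, the optimized design draws ~44% less total power.
savings = 1 - optimized / typical
print(f"typical PUE={typical:.2f}, optimized PUE={optimized:.2f}, "
      f"total-power savings={savings:.0%}")
```

That gap is the R&D payoff the open designs let everyone else copy for free.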
Vivek Kundra has mandated the consolidation of government data centers. Wouldn’t it be nice if we stopped engineering each of these data centers individually and instead followed an open, efficient, and proven design?