Built for Speed on the University of Texas Campus

Eight rows of servers, in-row coolers and power distribution units are all interconnected to create what was ranked as the seventh-fastest supercomputer in the world. (Photo by Thomas McConnell)
Representing DPR’s 30th project for UT, the expansion encompasses approximately 10,000 sq. ft. of high-density data center space that ties into an existing research office. (Photo by Thomas McConnell)
The “heart” of the project, the CUP, encompasses an exterior transformer yard, an internal electrical room and a mechanical room. (Photo by Thomas McConnell)

Speed and performance were the key drivers behind the design and construction of an advanced computing facility at the University of Texas’ (UT) J.J. Pickle Research Campus, home to what was ranked as the seventh-fastest supercomputer in the world.

Team Players

Project: Texas Advanced Computing Center

Customer: Founded in 1883, the University of Texas at Austin has 17 colleges and schools, about 24,000 faculty and staff, and more than 50,000 students.

Architect: Atkins

MEP Engineer: HMG & Associates, Inc.

Project Highlights:

  • The expansion includes approximately 10,000 sq. ft. of high-density data center space, a 3,000-sq.-ft. seminar room and a central plant.
  • The facility houses what TOP500 Supercomputer Sites ranked as the seventh most powerful commercially available computer system in the world.
  • The complex project was turned over in less than 10 months.

Representing DPR’s 30th project for UT, the expansion encompasses approximately 10,000 sq. ft. of high-density data center space that ties into an existing research office. It also includes a 3,000-sq.-ft. seminar room and a central plant that provides 3,750 tons of cooling and 6.2 megawatts (MW) of power to the computer facility.

The emphasis on speed manifested itself not only in the facility supporting one of the world’s fastest supercomputers but also in the intensive pace of turning over the complex project in less than 10 months. Throughout construction, some $50 million hung in the balance, contingent on the end user, the Texas Advanced Computing Center (TACC), meeting the performance milestones and benchmarks required for the supercomputer to qualify for a National Science Foundation (NSF) grant. “There were a lot of eyes, a lot of political weight, invested in this project, so we got a lot of attention to make sure we hit the benchmarks,” said DPR Project Manager Lewis Liu.

Not only did the UT project meet those funding-contingent performance goals, but it also became a model for safety. At substantial completion in August 2012, the team had logged more than 125,000 worker hours with zero recordable or lost-time incidents, and the project won the prestigious UT STEP Silver award for meeting stringent safety goals.

“The team performed very well on a very tight schedule,” said Pawn Chulavatr, project manager for UT’s Office of Facilities Planning and Construction. “We were able to turn the computing center over to our customer, the Texas Advanced Computing Center, for rack installation and benchmark testing, and met the NSF grant’s funding deadline.”

Liu attributes the project’s success to a highly collaborative team focused on common goals. “Constant communication was vital to making sure we were all on the right path and same page,” he said. “There was a lot of planning and coordination, especially with two different commissioning agents on the project. The teamwork was great.”

Strong communication was particularly integral to tackling the project’s many technical coordination challenges. The machine room, which houses the new supercomputer, contains 152 in-row coolers and is fed by 42 four-inch conduits running from the central utility plant (CUP) to power the machines. One unique feature is a mineral oil cooling rack, designed to cool the system far more efficiently than a conventional chilled-water system by submerging the computing elements in a mineral oil bath.

The “heart” of the project, the 8,675-sq.-ft. CUP, encompasses three primary components: an exterior transformer yard; an internal electrical room, which required complex coordination to route the 42 conduits through the parking lot and into the machine room; and a mechanical room housing nine pumps and three 1,250-ton chillers, which together supply the plant’s 3,750 tons of cooling. A three-cell stainless steel and fiberglass cooling tower and a 1.2-million-gallon thermal energy storage tank round out the major components.

From the beginning, the team knew the project’s sheer complexity, coupled with the accelerated schedule, would require extra time commitments from key subcontractors. Those constraints were compounded by 35 critical-path rain days, all of which had to be recovered without extending the schedule. The team rallied: in addition to working long hours, it used building information modeling (BIM) to drive extensive prefabrication, and DPR self-performed some work and brought in extra support personnel to keep the project on track for successful completion.

“There were many obstacles that the team overcame dealing with weather, long-lead equipment, subcontractor performance and coordination with multiple owners’ commissioning agents,” said Chulavatr. “The team performed well and delivered an outstanding project to the university.”