

ICES and the POB Building

The Institute for Computational Engineering and Sciences (ICES) is located in The O'Donnell Building for Applied Computational Engineering and Sciences (POB) on the University of Texas at Austin main campus. The facility has offices and work areas equipped with desktop computers, printers and copiers, mini-clusters, computational visualization facilities, and extensive network access for faculty, staff, students, and visitors. A large machine room houses supercomputers, servers, and large-scale storage devices. The building has a 196-seat auditorium with Ethernet ports at each seat; the auditorium also offers wireless networking, video conferencing, and remote-learning capabilities. There are eighteen networked seminar rooms with high-resolution audio-visual systems, some with video conferencing and videotaping facilities.

Networking Infrastructure

The POB building networks are designed both to support bandwidth-intensive computational research and to accommodate new technology as it becomes available. The networks are built around high-performance, multilayer Cisco 6509, 2960, and 4003 network switches, with Lucent Gigaspeed copper Ethernet and multimode fiber-optic cabling to each desktop and work area. Wireless networking is available throughout the building and courtyard area.

Workstation Environment

The ICES workstation environment encompasses all offices, cubicles, work areas, and laboratories. Over 300 general-purpose workstations are available, including Linux-based PCs, Macs, and Windows PCs. Several color printers and scanners are available. File and email service is provided by a number of Linux servers with over 25 terabytes of disk storage. Other Mac and Linux-based computers function as web servers, LDAP authentication servers, domain name servers, directory servers, application servers, and compute servers.
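
As a rough illustration of how the directory servers fit into this environment, the short Python sketch below looks up a user account over LDAP using the ldap3 library. The hostname, base DN, and account name are hypothetical placeholders, not the actual ICES directory configuration.

# Illustrative sketch: resolve a user account against a central LDAP
# directory, as a workstation or admin script might do.
# The host, base DN, and uid below are hypothetical placeholders.
from ldap3 import Server, Connection, ALL

server = Server("ldap.ices.example.edu", get_info=ALL)   # placeholder host
conn = Connection(server, auto_bind=True)                # simple anonymous bind

conn.search(
    search_base="ou=people,dc=ices,dc=example,dc=edu",   # placeholder base DN
    search_filter="(uid=jdoe)",                          # placeholder account
    attributes=["cn", "uidNumber", "homeDirectory", "loginShell"],
)

for entry in conn.entries:
    print(entry.cn, entry.homeDirectory)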

On-site Linux-Based Clusters

The ICES systems and networking team currently supports nine Linux-based clusters, with others in the planning and design stages. The Center for Subsurface Modeling has two clusters: a 184-core cluster (Bevo2) and a 180-core cluster (Bevo3). The Computational Visualization Center has a 64-core rendering cluster (Prism2). The Center for Computational Materials has a 16-node compute cluster (Deanston). The Center for Computational Molecular Sciences has a 40-node cluster (Muskoka). The Center for Computational Life Sciences and Biology has two clusters: a 184-core cluster (Junior) and a 512-core cluster (Stampede). The Computational Mechanics Group has a 256-core cluster (Reynolds). The Institute also has a 46-core cluster built from recycled desktops (Algol).

Off-site Supercomputing Facilities

ICES has access to supercomputing facilities via high-speed networking at the Texas Advanced Computing Center (TACC) at the J. J. Pickle Research Center, eight miles north of the main campus. At TACC, the two primary HPC production systems are the Lonestar cluster, with 1,888 Dell PowerEdge M610 blade servers and a peak performance of 302 Tflop/s, and Ranger, a Sun Constellation Linux cluster that is one of the most powerful computers for open academic research in the world. Ranger has 62,976 AMD Opteron processing cores, 123 TB of memory, 1.73 PB of online disk storage, and a peak performance of 579 Tflop/s.
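
Ranger's quoted peak is consistent with a simple cores-times-flops estimate. The core count comes from the description above; the 2.3 GHz clock and four floating-point operations per cycle are assumptions about the Opteron processors, not figures stated here.

# Back-of-the-envelope check of Ranger's quoted peak performance.
cores = 62_976            # from the description above
clock_hz = 2.3e9          # assumed Opteron core clock
flops_per_cycle = 4       # assumed double-precision flops per core per cycle

peak_tflops = cores * clock_hz * flops_per_cycle / 1e12
print(f"Theoretical peak: {peak_tflops:.0f} Tflop/s")    # roughly 579 Tflop/s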

As part of the Lonestar system described above, ICES researchers also have priority access to approximately 20 million CPU hours in a separate queue at TACC. Compute cycles in this queue are managed by the Institute with allocations awarded on a weekly basis.
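
A CPU hour is typically charged as one core used for one wall-clock hour, so a quick calculation shows how far such an allocation stretches. The job size below is a made-up example, not an ICES allocation policy.

# Rough budgeting against a ~20 million CPU-hour pool.
allocation_cpu_hours = 20_000_000

def job_charge(cores: int, wall_hours: float) -> float:
    """CPU-hour charge for one run: cores x wall-clock hours."""
    return cores * wall_hours

run = job_charge(cores=256, wall_hours=24)   # hypothetical 256-core, 24-hour job
print(f"One run costs {run:,.0f} CPU hours")
print(f"About {allocation_cpu_hours / run:,.0f} such runs fit in the allocation")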

The long-term storage solution at TACC is an Oracle mass-storage facility called Ranch. Ranch uses Oracle's Storage Archive Manager file system to migrate files to and from a tape archival system with a current storage capacity of 30 PB. Two Oracle SL8500 Automated Tape Library devices house all of the off-line archival storage; each SL8500 library contains 10,000 tape slots and 64 tape drive slots. Two types of tape media are available, capable of holding 1 terabyte and 5 terabytes of compressed data per tape, respectively.
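
In practice, data usually reaches an archive like Ranch through SSH-based copies from the compute systems; the minimal Python sketch below drives scp with subprocess. The hostname, username, and paths are assumptions for illustration, not confirmed details of the Ranch workflow.

# Minimal sketch: push a results directory to the tape archive over scp.
# Host, username, and paths are hypothetical placeholders.
import subprocess

ARCHIVE_HOST = "ranch.tacc.utexas.edu"   # assumed archive login host
USER = "jdoe"                            # placeholder username
LOCAL_DIR = "results/run_042"            # hypothetical local results directory
REMOTE_DIR = "simulations/run_042"       # hypothetical archive path

subprocess.run(
    ["scp", "-r", LOCAL_DIR, f"{USER}@{ARCHIVE_HOST}:{REMOTE_DIR}"],
    check=True,
)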

TACC systems also include Corral, which is designed to support data-centric science. Corral consists of 1.2 PB of online disk and a number of servers providing high-performance storage for all types of digital data. The system supports MySQL and Postgres databases, a high-performance parallel file system, web-based access, and other network protocols for storing and retrieving data from sophisticated instruments and HPC simulations.
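
As an example of the database side of such a system, the sketch below queries a Postgres database with the Python psycopg2 driver. The hostname, database, credentials, and table are hypothetical placeholders rather than actual Corral services.

# Illustrative sketch: query a Postgres database hosted on a data system
# like Corral. All connection details and the table name are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="corral-db.example.tacc.utexas.edu",  # placeholder hostname
    dbname="instrument_archive",               # hypothetical database
    user="jdoe",
    password="not-a-real-password",
)

with conn, conn.cursor() as cur:
    # Pull the ten most recent readings from a hypothetical instrument table.
    cur.execute(
        "SELECT station_id, reading_time, value "
        "FROM readings ORDER BY reading_time DESC LIMIT 10"
    )
    for station_id, reading_time, value in cur.fetchall():
        print(station_id, reading_time, value)

conn.close()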

POB Visualization Laboratory

The POB Visualization Laboratory, managed by TACC, provides an end-to-end infrastructure for data-intensive and display-intensive computing and is available to all UT Austin investigators as well as UT System users. The lab includes Stallion, a 24-node Dell visualization cluster driving a 15x5 tiled display with a combined resolution of 307 megapixels; Bronco, a Sony 9-megapixel flat projection system driven by a high-end Dell workstation; Lasso, a 12-megapixel touch-sensitive display screen; and Mustang, a 73-inch Mitsubishi DLP flat-panel TV with active 3D stereo capabilities. These systems provide a unique environment for interactive and immersive visual exploration.

Brief descriptions of the equipment and different sections of the Vislab are given below.

Dell Visualization Cluster and 307 Megapixel Tiled Display (Stallion)

The Stallion cluster gives users the ability to perform visualizations on a large 15 ft. x 5 ft. tiled display of Dell 30-inch flat-panel monitors with a combined resolution of 307 megapixels. This configuration allows exploration of visualizations at an extremely high level of detail and quality. The cluster gives users access to over 36 GB of graphics memory, 108 GB of system memory, and 100 processing cores. This setup enables the processing of massive datasets and the interactive visualization of substantial geometries. A large, shared file system is included to allow storage of terascale datasets.
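
The 307-megapixel figure follows directly from the panel layout. The 15 x 5 arrangement is stated in the lab overview above; the 2560 x 1600 native resolution per panel is an assumption about the Dell 30-inch monitors.

# Sanity check of Stallion's quoted 307-megapixel resolution.
cols, rows = 15, 5              # panel layout from the lab overview above
panel_w, panel_h = 2560, 1600   # assumed native resolution per 30-inch panel

total_pixels = cols * rows * panel_w * panel_h
print(f"Total display: {cols * panel_w} x {rows * panel_h} pixels "
      f"({total_pixels / 1e6:.0f} megapixels)")   # about 307 megapixels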

Sony SRX-S105 (9 Megapixel) Projection System (Bronco)

The Sony projection system, Bronco, features a 20 ft. x 11 ft., 4096 x 2160 resolution flat-screen display, driven by a Sony SRX-S105 overhead projector and a high-end Dell workstation. This configuration gives users added flexibility to run a wide variety of applications, as only one workstation is required to drive the display. The projector delivers exceptional brightness and a high-resolution, 9-megapixel viewing area. In addition, Bronco may be configured to accept input from up to four simultaneous video sources, allowing for a hybrid display of multiple systems.

DLP Flat Panel TV with 3D Stereo (Mustang)

The Vislab also includes Mustang, a 73-inch Mitsubishi DLP flat-panel TV with active 3D stereo capability connected to a high-end Dell workstation. This system enables researchers to view data in stereo for improved spatial analysis.

12-Megapixel Touch Display System (Lasso)

Lasso is a touch display system consisting of six 46-inch HD thin-bezel displays driven by a single compute node. The compute node uses AMD Eyefinity technology to present a seamless display surface, allowing for a tiled-display environment without the need to write parallel graphics applications. The display surface is supplemented by an infrared touch-sensitive perimeter with 5 mm touch precision and the ability to detect 32 touch points simultaneously. The system also has depth-sensing camera technology to detect a human presence and extract space-based gestures from a person's movement, allowing for gesture-based control of the system.

Collaboration Room (Saddle)

The collaboration room offers the opportunity for small groups to work together on developing and exploring visualizations. The display is provided by a high resolution projector with many possible input combinations. The room also includes a 5.1 theater stereo system with Blu-Ray capability. Users may develop their visualizations in the room, and then easily transition them to one of the two larger display systems in the main lab area at a later time.