Boston University | Center for Computational Science

High Performance Computing and Communications Facilities

Shared Computing Cluster

The Boston University Shared Computing Cluster (SCC) is the first Boston University high performance computing resource located off the Boston University campus. The SCC is housed in Holyoke, MA, at the Massachusetts Green High Performance Computing Center (MGHPCC), where energy is plentiful, clean, and inexpensive. Two 10 Gigabit Ethernet network connections between the MGHPCC and the BU campus provide extremely fast data transfer between the two locations.


The SCC is a heterogeneous Linux cluster composed of both shared and buy-in components. The system currently includes 1408 shared processors, 2212 buy-in processors, a combined 232 GPUs, 23 TB of memory, and nearly a petabyte of storage for research data.

The first components of the system were delivered to the MGHPCC in January 2013 and went into production use on June 10, 2013. Over the summer of 2013, the newer Katana nodes as well as various departmental and buy-in nodes were integrated into the system. Since then, the system has been expanded with elements of the BUMC LinGA system and additional buy-in nodes, and these expansions will continue into the future.

The cluster includes nodes in a number of different configurations, detailed on the technical summary page. Two sets of nodes in the cluster incorporate a combined 232 GPUs (graphics processing units) for research computing use.

The SCC’s operating system is Linux (CentOS 6.4).
System Access

All users with SCF accounts can log in to the SCC using ssh. Your login name and password are the same as on previous SCF systems. The login nodes are scc1.bu.edu, scc2.bu.edu, geo.bu.edu (for Earth & Environment department Buy-In users), and scc4.bu.edu (for BUMC users with dbGaP data). If you have a valid SCF account, you can log in with either your BU Kerberos password or your local SCF Linux password (for those who have one).
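For example, a login from a Linux or Mac terminal looks like the sketch below; the user name yourlogin is a placeholder for your own SCF login name:

    # Connect to one of the shared login nodes
    ssh yourlogin@scc1.bu.edu

    # Add -X if you need X11 forwarding for graphical applications
    ssh -X yourlogin@scc2.bu.edu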

Regular updates on the status of the SCC will be posted here. We also maintain a regularly updated page listing the software installed on the SCC.
Data Access

Your home directory on the SCC has a 10GB quota.
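A quick way to check how much of that quota you are using is sketched below; du works on any Linux system, while the quota command assumes standard Linux quota reporting is enabled on the SCC:

    # Total size of everything in your home directory
    du -sh ~

    # Per-filesystem quota report, if quota reporting is enabled
    quota -s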

All projects have Project Disk Space allocated to them in /project/, /projectnb/, /restricted/project/, and/or /restricted/projectnb/. All SCC projects have at least 50GB of backed-up space and 50GB of non-backed-up space. Additional space can be requested by the project's Principal Investigator (or Administrative Contact, if there is one) by following the appropriate link here. Requests for more than 1 TB of space will generally incur a charge.

If your project/group has Data Archive space and access from the SCC has been granted for it, you can reach that space under /archive/. Access is granted on an individual project/group basis.
Usage Policies and Batch System Information

Jobs running on the login nodes are limited to 15 minutes of CPU time and a small number of processors. Jobs on the batch nodes have a default wall clock limit of 12 hours, but this can be increased, depending on the type of job. Single-processor (serial) and omp (multiple processors, all on one node) jobs can run for up to 720 hours, MPI jobs for up to 120 hours, and jobs using GPUs for up to 48 hours. Use the qsub option -l h_rt=HH:MM:SS to request a limit higher than the default, as shown in the examples below. An individual user is also allowed at most 256 processors in the run state at any one time; this does not affect job submission. These limitations apply to the shared SCC resources. Limitations on Buy-In nodes are defined by their owners.
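For instance, the following qsub invocations request longer limits; the script name myjob.sh is a placeholder, and the omp parallel environment name is assumed to match the omp job type described above:

    # Serial job with a 48-hour wall clock limit
    qsub -l h_rt=48:00:00 myjob.sh

    # omp job on 8 cores of one node with a 100-hour limit
    qsub -pe omp 8 -l h_rt=100:00:00 myjob.sh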

CPU usage on the SCC is charged by wall clock time. Thus, if you request 12 processors and your job runs for 10 hours of wall clock time, you will be charged for the full 120 processor-hours (multiplied by the SU factor for the node(s) you run on), even if the total CPU time your computation actually consumed was only, say, 30 hours.
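Written out, the charge for that example is computed as follows (the SU factor of 1.0 is purely illustrative; the actual factor depends on the node type):

    charge (SU) = processors requested x wall clock hours x SU factor
                = 12 x 10 x 1.0
                = 120 SU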

The batch system on the SCC is OGS (Open Grid Scheduler), an open source version of the SGE (Sun Grid Engine) batch system.
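A minimal job script for this batch system might look like the sketch below; the job name, resource values, and program name are placeholders, and the directives use standard SGE/OGS syntax:

    #!/bin/bash
    #$ -N my_analysis          # job name
    #$ -l h_rt=24:00:00        # wall clock limit of 24 hours
    #$ -pe omp 4               # 4 cores on a single node (omp environment)
    #$ -j y                    # merge stderr into stdout

    ./my_program input.dat

Submit the script with qsub myscript.sh and check its status with qstat -u yourlogin.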
Transitioning to the SCC

This page on porting your code to the SCC should be particularly helpful to users of the now-decommissioned Katana cluster. Details on running jobs and programming for the SCC are also available.

A list of the software installed on the SCC is also available.
Help Information

This page provides basic information on the SCC. For additional information, please follow the sidebar links.

For general questions or to report system problems, please send email to help@scc.bu.edu.

For more information or help in using or porting applications to the SCC, please see our Scientific Programming Consulting page.

If you have questions regarding your computer account or resource allocations, please send email to scfacct@bu.edu.

 
