High-Performance Computing (HPC) Facilities

Equipment Available (In-house and Off-Campus Access)

(1) Southern Illinois High-Performance Computing Research Infrastructure (SIHPCI): a Dell Linux cluster dubbed maxwell (106 nodes, each with dual quad-core 2.3 GHz Intel Xeon CPUs and 8 GB RAM, plus 90 TB of storage). The cluster was funded through the NSF Computing Research Infrastructure (CRI) program in 2009.

      Here is a link to SIHPCI Description and Tutorial on How to Use the Cluster

      Requesting an Account on Maxwell:

      Send an email to Professor Shaikh Ahmed (ahmed@siu.edu)

      Provide the following information:

     # Your Name

     # Affiliation

     # IP address of the computer(s) on the SIUC network from which you will access Maxwell

     # A short paragraph on the research you will perform and the software tools to be used

     # You will then receive an email with your login information

     # Host: maxwell.ecehpc.siuc.edu
     # Please read through the tutorial (link given above) on how to use the machine
     # VERY IMPORTANT: PLEASE NOTE THAT ...
        (1) Users are NOT allowed to run simulations on the HEAD node. Long-running
        simulations found on the head node will be killed automatically. Simulations must be
        run on the compute nodes only, via a "BSUB" submission script (please see the above tutorial; a minimal sample script is sketched after this list)
        (2) System administrators are NOT responsible for any data loss and/or system failure at any time. They do NOT back up your data or programs.
        (3) It is each user's responsibility to learn how to use the cluster properly and to routinely transfer and back up his/her data and programs to a local (PC/Mac/Linux) drive.
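
      For illustration only, here is a minimal sketch of what such a BSUB (Platform LSF)
      submission script might look like; the job name, queue name, core count, and executable
      are placeholders, and the exact options for maxwell are described in the tutorial linked above.

          #!/bin/bash
          #BSUB -J nano_sim              # job name (placeholder)
          #BSUB -q normal                # queue name (placeholder -- see the tutorial for maxwell's actual queues)
          #BSUB -n 8                     # number of cores requested
          #BSUB -o nano_sim.%J.out       # standard output file (%J expands to the job ID)
          #BSUB -e nano_sim.%J.err       # standard error file

          # hypothetical simulation executable and input file
          mpirun ./my_simulation input.dat

      Such a script would typically be submitted from the head node with "bsub < nano_sim.lsf",
      and results copied back for safekeeping with something like
      "scp username@maxwell.ecehpc.siuc.edu:~/results/* /local/backup/" (file names and paths
      here are hypothetical).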

      For System Administrators:

      Here is the link to the SIHPCI webpage (accessible only from within the SIUC network; from outside, use VPN)

      Here is the real-time Ganglia report for maxwell cluster usage... (accessible only from within the SIUC network; from outside, use VPN)

      The following links are for administrative purposes:

      Cluster Management Guide

      Platform Computing: http://maxwell.ecehpc.siuc.edu:8080/platform/

      Documentation: http://maxwell.ecehpc.siuc.edu/cgi-bin/portal.cgi

(2) In-house 25-node Intel Xeon cluster dubbed octopus (dual quad-core 2.3 GHz CPUs and 16 GB RAM per node). All 200 cores are dedicated to nanostructure modeling simulations. Here are the complete cluster specifications.

(3) Access to the NSF's nanoHUB.org computational workspace.

(4) Access to the NSF's TeraGrid computational workspace.

(5) Access to the Oak Ridge National Lab's HPC machines. Here is the Information page for Jaguar@ORNL. Here is the Jaguar User Guide. ORNL Science Links: materials@ORNL  DoE@ORNL  materials for harsh environments

(6) Access to UTK's NICS Cray XT5 machines.

(7) Access to in-house Synopsys and Cadence design tools.

Computing Research Laboratory: Shaikh Ahmed Research Group

The group occupies a large, recently renovated research laboratory of about 1,100 sq. ft. in the Engineering Building at SIUC. The lab has office space and workstation setups (PCs, printers, etc.) for at least 10 personnel.

About Ahmed Group HPC Cluster: octopus

This Apex cluster was purchased from Advanced Clustering Technologies (ACT), Inc. The Apex line of high-performance computing (HPC) clusters is designed as a complete turn-key system. An Apex cluster is not just a set of nodes; it includes everything needed for a working system, including:

        # Head and Compute Nodes

        # Storage

        # Networking / Interconnect

        # Pre-cabled rack cabinet

        # Power distribution

        # Operating system and Cluster management software

        # Support and warranty packages 

The Advanced Clustering Technologies 1U chassis for the 1X5001 series of servers (please see the figure below) has been specifically designed and thermally tested. The chassis uses a 560 W power supply that is more than 80% efficient, which reduces the power draw and thermal load created by the system. In addition, the machine uses 2 GB 1.5 V DDR2 FB-DIMM memory; using 1.5 V memory instead of the more common 1.8 V parts lowers the total power consumption of each machine. This in turn reduces the heat (BTU) output per machine, so less air conditioning and power are needed to maintain the cluster.
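
As a rough, first-order illustration only (not a measured figure for this hardware): dynamic memory power scales approximately with the square of the supply voltage, so moving from 1.8 V to 1.5 V DIMMs cuts the dynamic portion of memory power to roughly

      (1.5 V / 1.8 V)^2 ≈ 0.69

of its original value, i.e. an approximately 30% reduction, before static power and other effects are taken into account.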

 

Here is the real-time Ganglia report for our octopus cluster usage...

Last Updated: March 22, 2013. Copyright 2011 Department of Electrical and Computer Engineering: 1230 Lincoln Drive MC 6603, Carbondale, IL 62901. All rights reserved.