High Performance Computing Facility



WELCOME TO UPOE HPC CLUSTER

The University High Performance Computing Facility, located in Hall No. 7 of the School of Information Technology, was funded through the UGC UPOE programme and envisaged as an important tool for raising the level of our research and remaining academically competitive, especially for research problems that involve large data sets and heavy numerical calculations. The facility has recently been upgraded with a 256-processor Sun cluster.

The HPCF is conceived as a functionally distributed supercomputing environment, housing leading-edge computing systems with sophisticated software packages and connected by a powerful high-speed fibre-optic network. The computing facilities are connected to the campus LAN and WLAN, and to the Internet.

The UPOE HPC Cluster has been installed and is maintained by C-DAC. Its peak performance is 1.3 teraflops. The cluster is built with ROCKS version 5.2, and the scheduler is Sun Grid Engine, a powerful tool through which user applications are managed and usage policies are enforced. To provide low latency, separate switches are used for MPI, storage and IPMI traffic. The cluster is attached to a StorageTek 5220 storage appliance with a total available capacity of 4 TB.


Contact Persons:

Dr. Supratim Sengupta

Dr. Narinder Singh Sahni

Prof. Ramakrishna Ramaswamy






    • Processor : AMD Opteron 2218, dual-core, dual-socket
    • No. of master nodes : 1
    • No. of compute nodes : 64
    • Operating system : CentOS 5.3
    • Cluster software : ROCKS version 5.2
    • Server model : Sun Fire X4200 (1 no.)
    • Compute node model : Sun Fire X2200 (64 nos.)
    • NAS appliance model : StorageTek 5220
    • Total peak performance : 1.3 TFLOPS
 
    • No. of nodes : 64
    • RAM per node : 4 GB
    • Hard disk capacity per node : 250 GB
    • Total storage capacity : 4 TB
    • No. of processors and cores : 2 sockets x 2 cores = 4 cores per node
    • CPU speed : 2.6 GHz
    • Floating-point operations per clock cycle for the AMD processor : 2 per core
    • Total peak performance : no. of nodes x cores per node x CPU speed x FLOPs per cycle = 64 x 4 x 2.6 GHz x 2 = 1.33 TFLOPS (a short check of this calculation is sketched after this list)
    • Ganglia : cluster monitoring tool
    • PVM : Parallel Virtual Machine
    • HPC benchmark software : High Performance LINPACK (HPL, performance testing tool)
    • Software used in the HPC cluster :
    • R, Amber + Q.C. tools, CID in RNA, GRID, GOLPE, ALMOND, MOKA, VOLSURF, METASITE, HMMER, INFERNAL, BLAST, MATLAB, GNUPLOT, TEIRESIAS, OpenEye, ADF, AutoDock, GROMACS, etc.
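
For reference, the peak-performance figure quoted above can be reproduced with a few lines of C. This is only an illustrative back-of-the-envelope sketch, not part of the cluster software; the value of 2 floating-point operations per cycle per core is the usual assumption for this generation of Opteron.

        /* peak.c - back-of-the-envelope check of the cluster's theoretical
         * peak performance, using the numbers listed above. */
        #include <stdio.h>

        int main(void)
        {
            const double nodes           = 64.0;  /* compute nodes            */
            const double cores_per_node  = 4.0;   /* 2 sockets x 2 cores      */
            const double clock_ghz       = 2.6;   /* Opteron 2218 clock speed */
            const double flops_per_cycle = 2.0;   /* assumed, per core        */

            double peak_gflops = nodes * cores_per_node * clock_ghz * flops_per_cycle;
            printf("Theoretical peak: %.1f GFLOPS (about %.2f TFLOPS)\n",
                   peak_gflops, peak_gflops / 1000.0);
            return 0;
        }

Compiled with gcc and run on any machine, this prints roughly 1331.2 GFLOPS, i.e. about 1.33 TFLOPS, matching the figure above.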
    • The 411 Secure Information Service provides NIS-like functionality for Rocks clusters.

    • Sun Grid Engine : job scheduler, configured with a fair-share policy so that all users get equal priority; it is used to submit both batch and parallel jobs
    • Open MPI and LAM/MPI (see the example MPI program after this list)
    • C, C++ and Fortran compilers (both GNU and Intel)
    • Bio Roll : for biochemical applications
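
As a quick illustration of the MPI stack and compilers listed above, here is a minimal MPI "hello world" in C. It is a sketch only: queue names and the Sun Grid Engine parallel environment to use with qsub are site-specific, so check with the HPCF staff before submitting parallel jobs.

        /* hello_mpi.c - minimal test of the MPI installation. */
        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char *argv[])
        {
            int rank, size;

            MPI_Init(&argc, &argv);                /* start the MPI runtime     */
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank       */
            MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

            printf("Hello from rank %d of %d\n", rank, size);

            MPI_Finalize();                        /* shut down the MPI runtime */
            return 0;
        }

Compile it on the master node with the Open MPI wrapper (mpicc hello_mpi.c -o hello_mpi) and submit the run through Sun Grid Engine with qsub rather than launching mpirun directly on the compute nodes, in line with the user rules below.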

    USER RULES for HPCF, SIT, JNU :

    1. Every user who has an account on the HPCF can use the cluster 24 x 7 (for either serial or parallel jobs).
    2. The maximum number of jobs one can submit at a time is 10. Anyone who needs to submit more than 10 jobs should contact the HPCF staff for permission, which may be granted depending on the usage of the HPCF at that time.
    3. At present there is no time limit on jobs.
    4. Interactive jobs are allowed only for compilation and testing of new software. For details, contact the HPCF staff.
    5. To get a new account, download the electronic form from the HPCF website (http://172.16.5.101/HPCF/) and submit a hard copy of it to the HPCF staff.
    6. A single job can use at most 4 GB of memory. Please structure your jobs keeping in mind that the RAM capacity of our cluster is 4 GB per node.
    7. No backup of data is done on the cluster. Users are responsible for backing up their own data.
    8. At present, as per HPCF policy, the scheduler runs with a FIFO (first in, first out) policy.
    9. For installation of new programs: if a user submits a program with proper installation instructions, the system admin will install it. Users who want to install new software themselves must contact the HPCF lab, where they will be given help with the installation. All software must have either an open license or a procured license, and this licensing information must be provided to the system admin.
    10. Before any software is installed, the system admin will test it on a "protocol machine"; only then will it be installed on the HPC master node.
    11. At present, for the convenience of all users, all jobs must be submitted via the HPC master node only; no interactive job submission is allowed (that is, do not submit jobs directly from any of the other nodes of the HPCF).