High Performance Computing Facility
WELCOME TO THE UPOE HPC CLUSTER
The University High Performance Computing Facility, located in Hall No. 7 of the School of Information Technology, was funded by the UGC UPOE programme. It was envisaged as an important tool for raising the level of our research and remaining academically competitive, especially for research problems involving large data sets and numerical calculations. The facility has recently been upgraded with a 256-processor Sun cluster.
The HPCF Centre is conceived as a functionally distributed supercomputing environment, housing leading-edge computing systems with sophisticated software packages and connected by a powerful high-speed fibre-optic network. The computing facilities are connected to the campus LAN and WLAN.
The UPOE HPC Cluster was installed and is maintained by C-DAC. Its peak performance is 1.3 teraflops. The cluster is built with ROCKS version 5.2, and the scheduler is Sun Grid Engine, a powerful tool with which we manage user applications and implement scheduling policies. To provide low latency, separate switches are used for MPI, storage and IPMI traffic. The cluster is attached to a Sun StorageTek 5220 storage appliance with a total available capacity of 4 TB.
Dr. Supratim Sengupta
Dr. Narinder Singh Sahni
Prof. Ramakrishna Ramaswamy
- Processor : AMD Opteron 2218, dual-core, dual-socket
- No. of Master Nodes : 1
- No. of Compute Nodes : 64
- Operating System : CentOS 5.3
- Cluster Software : ROCKS version 5.2
- Server Model : Sun Fire X4200 (1 no.)
- Compute Node Model : Sun Fire X2200 (64 nos.)
- NAS Appliance Model : Sun StorageTek 5220
- Total Peak Performance : 1.3 TF
- No. of nodes : 64
- Memory (RAM) per node : 4 GB
- Hard disk capacity per node : 250 GB
- Storage capacity : 4 TB
- No. of processors and cores per node : 2 sockets × 2 cores = 4 cores (dual-core, dual-socket)
- CPU speed : 2.6 GHz
- Floating-point operations per cycle for this AMD processor : 2
- Total peak performance : no. of nodes × cores per node × CPU speed × floating-point operations per cycle = 64 × 4 × 2.6 GHz × 2 = 1.33 TF
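The peak-performance arithmetic above can be checked with a few lines of Python (the figures are taken directly from the specification list):

```python
# Theoretical peak performance of the UPOE cluster, from the specs above.
nodes = 64            # compute nodes
cores_per_node = 4    # dual-socket, dual-core Opteron 2218
clock_ghz = 2.6       # CPU speed in GHz
flops_per_cycle = 2   # floating-point operations per core per cycle

peak_gflops = nodes * cores_per_node * clock_ghz * flops_per_cycle
print(f"Peak: {peak_gflops / 1000:.2f} TFLOPS")  # → Peak: 1.33 TFLOPS
```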
- Ganglia : monitoring tool
- PVM : Parallel Virtual Machine
- HPL : High Performance Linpack (performance-testing tool)
- Software used in the HPC cluster :
- R, Amber + Q.C. tools, CID in RNA, GRID, GOLPE, ALMOND, MOKA, VOLSURF, METASITE, HMMER, INFERNAL, BLAST, MATLAB, GNUPLOT, TEIRESIAS, OpenEye, ADF, AutoDock, GROMACS, etc.
- Sun Grid Engine : job-scheduler software. A fair-share policy is already implemented, so all users get equal priority; it is used to submit both batch and parallel jobs.
- Open MPI, LAM/MPI
- C, C++ and Fortran compilers (both GNU and Intel)
- Bio Roll : for biochemical applications
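Batch jobs are handed to Sun Grid Engine as a shell script annotated with `#$` directives. The sketch below is a minimal illustration only; the job name, output file and the `orte` parallel-environment name are assumptions, not site policy, so check with the HPCF staff for the actual queue and PE names:

```shell
#!/bin/bash
# Minimal SGE batch-script sketch (directive values are illustrative assumptions).
#$ -N demo_job          # job name (hypothetical)
#$ -cwd                 # run the job from the submission directory
#$ -o demo_job.out      # file for the job's standard output
#$ -pe orte 4           # request 4 slots; "orte" is a guessed Open MPI PE name

# NSLOTS is exported by SGE at run time; default to 1 when run outside the scheduler.
msg="Job running on $(hostname) with ${NSLOTS:-1} slot(s)"
echo "$msg"
```

Such a script would be submitted with `qsub demo_job.sh`, and `qstat` shows its state in the queue.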
USER RULES for HPCF, SIT, JNU :
- Every user who has an account in the HPCF can use the cluster 24 × 7 (for either serial or parallel jobs).
- The maximum number of jobs one can submit at a time is 10. Users who need to submit more than 10 jobs should contact the HPCF staff for permission, which may be granted depending on the usage of the HPCF at that time.
- The time limit of the jobs is, at present, unlimited.
- Interactive jobs will be allowed only for compilation and testing of new software. For the details contact the HPCF staff.
- To get a new account, download the electronic form from the HPCF web site (http://172.16.5.101/HPCF/) and submit a hard copy of it to the HPCF staff.
- A single job can use at most 4 GB of memory at a time. Please restructure your jobs keeping in mind that the RAM capacity of our cluster is 4 GB per node.
- No backup of data is done on the cluster. Users are responsible for backing up their own data.
- Presently, according to HPCF rules, the scheduler runs using a FIFO (First In, First Out) policy.
- For installation of new programs: if a user submits proper instructions regarding installation, the system admin will install the requested program. Users who want to install new software themselves must contact the HPCF lab, where they will be given help installing it. All software must have either an open licence or a procured licence, and this licence information must be provided to the system admin.
- Before installing any software, the system admin will first test it on the "protocol machine"; only then will it be installed on the cluster.
- Presently, for the convenience of all users, all jobs have to be submitted via the master node of the HPC only; no direct submission of jobs from any of the other nodes of the HPCF is allowed.