GPU Supercomputing Systems

What hardware is required for use with Series 16-GPU?

Minimum requirements for a GPU used with Series 16-GPU are an NVIDIA GPU with a CUDA compute capability of 2.0 or higher and at least 4GB of GPU RAM. While these minimums will allow the software to function, the recommended GPU cards include the NVIDIA GeForce GTX Titan and the NVIDIA Tesla® K20vi or better, with a CUDA compute capability of 3.0 or higher and at least 5GB of GPU RAM, as shown in Table 2.
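
As a quick way to confirm that an installed card meets these thresholds, the CUDA runtime can report each device's compute capability and total memory. The short sketch below is a generic CUDA example and is not part of Series 16-GPU; the 2.0 and 4GB limits are simply the minimum requirements quoted above.

    // check_gpu.cu -- list CUDA devices and flag those below the stated minimums.
    // Build with: nvcc check_gpu.cu -o check_gpu
    #include <cstdio>
    #include <cuda_runtime.h>

    int main(void) {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            printf("No CUDA-capable GPU detected.\n");
            return 1;
        }
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            double memGB = prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0);
            // Minimums quoted in this FAQ: compute capability 2.0 and 4GB of GPU RAM.
            bool ok = (prop.major >= 2) && (memGB >= 4.0);
            printf("GPU %d: %s, compute capability %d.%d, %.1f GB RAM -> %s\n",
                   i, prop.name, prop.major, prop.minor, memGB,
                   ok ? "meets minimum" : "below minimum");
        }
        return 0;
    }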

While the GPU itself is critical for performance, other considerations are also important. To support GPU hardware, workstations need adequate power and cooling. While the GPUs listed in Table 2 each have their own active cooling system, the workstation itself needs to have enough fans to move air through the case. For power, each GPU requires approximately 250 Watts in addition to the draw from the CPU, motherboard, RAM, hard drives, etc. A recommended dual-GPU system would need at least a 1000 Watt power supply.
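
The power-supply sizing above reduces to simple arithmetic, sketched below. The 250 Watt per-GPU figure comes from the paragraph above; the base system draw of roughly 300 Watts and the 25% safety margin are assumptions for illustration, chosen so that the two-GPU case reproduces the 1000 Watt recommendation.

    // psu_estimate.cu -- rough power-supply sizing: 250 W per GPU (from this FAQ)
    // plus an assumed ~300 W base system draw and an assumed 25% headroom factor.
    #include <cstdio>

    static int recommendedPsuWatts(int numGpus) {
        const int wattsPerGpu = 250;    // per-GPU figure quoted above
        const int baseSystemW = 300;    // assumed CPU/motherboard/RAM/drive draw
        const double headroom = 1.25;   // assumed 25% safety margin
        return (int)((baseSystemW + numGpus * wattsPerGpu) * headroom);
    }

    int main(void) {
        for (int gpus = 1; gpus <= 4; ++gpus)
            printf("%d GPU(s): ~%d W power supply\n", gpus, recommendedPsuWatts(gpus));
        return 0;
    }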

For performance considerations, it is also important to realize that while the GPU is used to accelerate calculations, not all calculations are executed on the GPU; the CPU still plays an important role. A good CPU, motherboard and RAM configuration is still necessary to obtain fast-running simulations. In particular, the PCI Express bus is an important component of the motherboard for obtaining good GPU acceleration. This interface is responsible for transferring data between the main memory and GPU memory. Large amounts of information travel this bus, making it paramount that the supporting motherboard has a PCI Express bus that can run in 16-channel (x16) mode for as many GPUs as possible. If a GPU does not have sufficient bandwidth for transfers, then the acceleration obtained from the GPU will be counteracted by slow data transfer rates between the main board and the GPU.
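
Host-to-device transfer bandwidth can be measured directly, in the spirit of NVIDIA's bandwidthTest sample. The sketch below is a generic CUDA timing example using pinned host memory; the 256 MB transfer size is an arbitrary choice for illustration. As a rough guide, a 16-channel PCI Express 2.0 link typically measures around 5-6 GB/s with pinned memory, and a 3.0 link roughly twice that; substantially lower numbers often indicate a slot running with fewer channels.

    // pcie_bandwidth.cu -- time a pinned host-to-device copy to gauge PCI Express throughput.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main(void) {
        const size_t bytes = 256u << 20;      // 256 MB test transfer (arbitrary size)
        void *hostBuf = NULL, *devBuf = NULL;
        cudaMallocHost(&hostBuf, bytes);      // pinned memory gives realistic peak rates
        cudaMalloc(&devBuf, bytes);

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start, 0);
        cudaMemcpy(devBuf, hostBuf, bytes, cudaMemcpyHostToDevice);
        cudaEventRecord(stop, 0);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("Host-to-device: %.2f GB/s\n", (bytes / 1.0e9) / (ms / 1000.0));

        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        cudaFree(devBuf);
        cudaFreeHost(hostBuf);
        return 0;
    }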

Can more than one GPU be used on the same computer?

Yes, multiple GPUs can be installed in the same workstation. Currently, larger workstations can easily accommodate two GPUs. Installing more than two GPUs is not always feasible due to supporting hardware limitations; specifically, the number of GPUs supported depends on the number and type of PCI Express slots on a given motherboard. Each GPU must have at least 8 channels of a PCI Express connection available for data transfer; otherwise, data transfer could easily require more time than the calculations themselves. However, customized turnkey systems (see below) with up to 4 GPUs are possible, and exceptional performance has been obtained.
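
One way to confirm how many PCI Express channels each installed GPU is actually receiving is to query the NVIDIA Management Library (NVML), which also backs the nvidia-smi utility. The sketch below is a generic NVML example, not part of Series 16-GPU; the x8 threshold in the output comes from the recommendation above.

    // pcie_width.cu -- report current and maximum PCI Express link width per GPU via NVML.
    // Build with: nvcc pcie_width.cu -lnvidia-ml -o pcie_width
    #include <cstdio>
    #include <nvml.h>

    int main(void) {
        if (nvmlInit() != NVML_SUCCESS) {
            printf("Failed to initialize NVML.\n");
            return 1;
        }
        unsigned int count = 0;
        nvmlDeviceGetCount(&count);
        for (unsigned int i = 0; i < count; ++i) {
            nvmlDevice_t dev;
            char name[NVML_DEVICE_NAME_BUFFER_SIZE];
            unsigned int cur = 0, max = 0;
            nvmlDeviceGetHandleByIndex(i, &dev);
            nvmlDeviceGetName(dev, name, sizeof(name));
            nvmlDeviceGetCurrPcieLinkWidth(dev, &cur);
            nvmlDeviceGetMaxPcieLinkWidth(dev, &max);
            // This FAQ recommends at least 8 channels (x8) per GPU.
            printf("GPU %u (%s): running x%u of x%u possible%s\n",
                   i, name, cur, max, cur >= 8 ? "" : " -- below the recommended x8");
        }
        nvmlShutdown();
        return 0;
    }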

Each Series 16-GPU calculation will use only one GPU card. However, multiple calculations can be run simultaneously on the same, or different, GPU cards. When running multiple simulations concurrently, it is strongly recommended that each one run on a distinct GPU. Internal observations indicate only a minor slowdown, of approximately 15% of runtime, when running two calculations on two separate GPU cards in the same workstation. However, when running two calculations on the same GPU, both take about 60% longer to complete, and running three simultaneous calculations on the same GPU results in about a 100% runtime penalty for each. Thus, to fully utilize GPU acceleration, it is recommended that each calculation run on its own GPU whenever feasible.
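
In general CUDA terms, a process can be directed to a specific GPU either with cudaSetDevice or by launching it with the CUDA_VISIBLE_DEVICES environment variable restricted to a single card. How Series 16-GPU itself assigns calculations to cards is not described here, so the sketch below is purely a generic illustration of per-GPU selection.

    // select_gpu.cu -- generic illustration of directing work to a chosen GPU.
    // An alternative is to launch each job with CUDA_VISIBLE_DEVICES set to one index.
    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    int main(int argc, char **argv) {
        int requested = (argc > 1) ? atoi(argv[1]) : 0;  // GPU index from the command line
        int count = 0;
        cudaGetDeviceCount(&count);
        if (count == 0 || requested < 0 || requested >= count) {
            printf("GPU index %d not available (found %d device(s)).\n", requested, count);
            return 1;
        }
        cudaSetDevice(requested);                        // subsequent CUDA work uses this GPU
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, requested);
        printf("Running on GPU %d: %s\n", requested, prop.name);
        return 0;
    }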

Licenses of Series 16-GPU use the Reprise License Manager (RLM), as do other Barracuda or Barracuda VR products. Licenses can therefore be shared between one or more workstations on the same network at the same sitevii. For customers with two or fewer licenses, a single workstation with one or two GPUs should be sufficient for computing requirements. Customers with three or more licenses of Series 16-GPU should strongly consider a separate workstation for every two licenses, or a custom turnkey system.

Are turnkey systems available?

Yes. CPFD Software can provide turnkey workstations containing up to four GPUsviii. GPU workstations come with your licensed software installed and tested prior to shipping. For many customers, this eliminates the need to purchase, install and configure new GPU hardware and software. Turnkey systems start at under $7,500. Contact info@cpfd-software.com for more information.

Is a turnkey system required?

No, a turnkey system is not required. Customers can buy and install GPUs themselves. However, before installing a new GPU in existing hardware, it is important to ensure the supporting workstation’s viability. Important factors to consider include:

  • The power supply unit must have an additional 250 Watts available to power the GPU (as well as appropriate power leads).
  • The workstation must have adequate cooling. Typically, GPUs themselves have active cooling, but air flow through the case must also be sufficient to reach the GPUs.
  • The PCI Express interface must be sufficient for high-bandwidth transfers between the main memory and each GPU. Each installed GPU must have access to at least 8 of the 16 possible channels of its PCI Express interface.

If you purchase your own system for use with Series 16-GPU, we strongly encourage you to contact our support staff in advance to help you through the process; install guides are available for CPFD Software’s Series 16-GPU products as well as for NVIDIA drivers.


vi Note that the NVIDIA CUDA compute platform is enabled on GeForce, Quadro and Tesla products. Whereas GeForce and Quadro are designed for consumer graphics and professional visualization, respectively, the Tesla product family is designed from the ground up for parallel computing and programming and offers exclusive high-performance computing features (see http://www.nvidia.com/object/why-choose-tesla.html, accessed September 19, 2013). CPFD Software has not independently evaluated the features or reliability of NVIDIA products.
vii Subject to license terms.
viii Subject to availability. Turnkey computer systems are not available in all markets.