
HPC

From Texas Instruments Wiki

Overview

This wiki is one in a series showing how to use c66x coCPU cards in commodity servers to achieve real-time, high capacity processing and analytics of multiple concurrent streams of media, signals and other data.

This wiki is the cloud HPC and server HPC overview, and contains information about server type and configuration, overall software architecture, and virtualization support. Other wikis in the series go into application-specific detail.

HPC with c66x CPUs

Using established technology and software stacks built by TI's third-party ecosystem, it is now possible to combine TI and Intel cores to create heterogeneous HPC server solutions. Using off-the-shelf servers running Linux + KVM, tens of x86 cores and hundreds of c66x cores can work together on applications including high-performance VMs (HPVMs), image analytics, video content delivery, and media transcoding.

Underlying Technology

Following is a list of TI and third-party items required:

  1. c66x CPUs and build tools, TI
  2. 32-core or 64-core c66x coCPU cards, Advantech
  3. Standard off-the-shelf server running Ubuntu, CentOS, or Red Hat Linux (tested configurations given below)
  4. DirectCore host drivers and libraries, Signalogic
  5. DirectCore guest drivers and patches for QEMU, libvirt, and virt-manager, Signalogic
  6. Application Demo Programs, Signalogic

c66x CPUs and Build Tools

The c66x architecture is an advanced CPU architecture, similar in many ways to Intel x86, including external memory, an internal memory subsystem (L1P, L1D, and L2 cache, plus multicore shared memory), embedded PCIe and high-speed NIC peripherals, and inter-CPU communication. In addition, from its DSP heritage, the c66x architecture retains compute-oriented advantages, including VLIW, software-pipelined loops, advanced DMA functionality, and multiple operations per clock cycle. Because it is a general-purpose CPU, it works well in combination with x86 CPUs inside servers.

TI build tools generate optimized code from C/C++ source. Porting open source C/C++ to c66x is a straightforward process, one documented example being OpenCV (open source computer vision). The command-line version of the build tools is available online.

Note that Code Composer Studio and detailed knowledge of low-level TI chip internals are not required. The command-line tools and standard Makefiles are used in all demo software described in the HPC series of wikis; a minimal build sketch follows below.
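As a rough sketch of the command-line flow, below is a simple C loop of the kind the c66x compiler can software-pipeline, with a possible cl6x invocation shown in the comment. The exact option set depends on the installed toolchain version, so treat the flags as assumptions rather than a reference.

  /* Minimal example of loop code the c66x compiler can software-pipeline.
   * The "restrict" qualifiers tell the optimizer the buffers do not alias,
   * which helps it schedule multiple operations per clock cycle.
   *
   * A possible command-line build (assumed flags; adjust for your toolchain version):
   *   cl6x -mv6600 -o3 -c vec_mac.c
   */
  void vec_mac(const short *restrict x, const short *restrict y,
               int *restrict acc, int n)
  {
      int i;
      for (i = 0; i < n; i++)
          acc[i] += x[i] * y[i];   /* multiply-accumulate, one element per iteration */
  }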

coCPU Cards

The Advantech coCPU cards supply the server horsepower. Each card has 64 cores, takes up a single slot (unlike GPU boards, which take two slots), has two 1 GbE NICs, and draws about 120 W. Up to 256 cores can be installed in a standard 1U server, and twice that many in suitable 1U or 2U servers. This large core count aligns well with emerging server architecture trends in virtualization, DPDK, and high-bandwidth network I/O, as well as with multicore programming models such as OpenMP and OpenACC (a small OpenMP example follows below).
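As a point of reference for the multicore programming models mentioned above, here is a minimal, generic OpenMP example in C showing the kind of data-parallel loop that maps naturally onto a large pool of cores. It is plain host-side C, not c66x-specific code.

  /* Build e.g. with: gcc -fopenmp omp_demo.c */
  #include <stdio.h>
  #include <omp.h>

  /* Scale a buffer in parallel; each available core handles a slice of the loop. */
  int main(void)
  {
      enum { N = 1 << 20 };
      static float buf[N];
      int i;

      #pragma omp parallel for
      for (i = 0; i < N; i++)
          buf[i] = i * 0.5f;

      printf("max threads: %d, last element: %f\n",
             omp_get_max_threads(), buf[N - 1]);
      return 0;
  }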

Off-the-Shelf Linux Servers

Servers and OS tested with c66x HPC solutions include:

  • Servers: HP DL380 G8 and G9, Dell R720 and R730, Supermicro 6016GT and 1028Gx series, and others
  • Linux OS: Ubuntu 12.04 or 14.04; CentOS 6.2, 7, or 7.1; Red Hat 7
  • KVM Hypervisor and QEMU system emulator (VMware support is planned)

Below are images showing c66x coCPU cards installed in Dell and HP servers. Unlike GPU boards, the cards are single-slot thickness, allowing full riser utilization.

Below is an image showing a Dell R720 server with 16 x86 cores and two (2) c66x coCPU cards installed, for a total of 128 c66x cores (the x86 cores are supplied by two (2) Xeon E5-2670 CPUs rated at 2.6 GHz, and the c66x cores by sixteen (16) C6678 CPUs rated at 1.25 GHz):

Dell R720 with 128 c66x cores installed

Below is an image showing an HP DL380 G9 server with 16 x86 cores and two (2) c66x coCPU cards installed, for a total of 128 c66x cores (the x86 cores are supplied by two (2) Xeon E5-2680v3 CPUs rated at 2.5 GHz, and the c66x cores by sixteen (16) C6678 CPUs rated at 1.25 GHz):

HP DL380 G9 with 128 c66x cores installed

Server Power Consumption and Temperature

For a Dell R720 with 16 Intel / 128 TI cores, the average power draw is around 700 W. The images below show R720 instantaneous temperature and power stats for 128 cores:

Note that power variation depends mostly on what the x86 cores are doing.

For an HP DL380 G9 with 16 Intel / 128 TI cores, the average power draw is around 650 W. The image below shows DL380 G9 temperature stats for 128 cores:

HP DL380 G9 iLO temperature stats with 128 c66x cores installed

A "Power and Thermal Evaluation" app note is available explaining the test methodology used for precise TI chip level measurements.

Host and Guest Drivers

DirectCore drivers interact with c66x cards from either host instances or VMs. Host instances use a "physical" driver and VM instances use virtIO "front end" drivers.

Host and Guest Libs

DirectCore libraries provide a high-level API for applications. They abstract all c66x cores as a unified "pool" of cores, allowing multiple users / VM instances to share c66x resources, including NICs, regardless of the number of cards installed in the server. This page has DirectCore API and source code examples.
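To make the core-pool idea concrete, here is a conceptual sketch in C. The type and function names (pool_open, pool_alloc_cores, pool_run, pool_close) are hypothetical stand-ins, not the actual DirectCore API; they only illustrate how an application might request cores from a unified pool without knowing which card they live on.

  #include <stdio.h>

  /* Hypothetical handle types -- NOT the DirectCore API, just an illustration
   * of the "unified pool of cores" abstraction described above. */
  typedef int pool_handle_t;
  typedef int core_set_t;

  static pool_handle_t pool_open(void)                          { return 1; }
  static core_set_t    pool_alloc_cores(pool_handle_t p, int n) { (void)p; return n; }
  static int           pool_run(core_set_t c, const char *img)  { printf("run %s on %d cores\n", img, c); return 0; }
  static void          pool_close(pool_handle_t p)              { (void)p; }

  int main(void)
  {
      pool_handle_t pool  = pool_open();                  /* all c66x cores in the server appear as one pool */
      core_set_t    cores = pool_alloc_cores(pool, 16);   /* request 16 cores, regardless of card layout     */

      pool_run(cores, "transcode.out");                   /* load and run a c66x executable on those cores   */
      pool_close(pool);
      return 0;
  }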

Software Model

Below is a diagram showing the software model for the cloud and server HPC solution. Notes about this diagram:

  • Application complexity increases from left to right (command line, open source library APIs, user code APIs, heterogeneous programming)
  • All application types can run concurrently in host or VM instances (see below for VM configuration)
  • c66x CPUs can make direct DMA access to host memory, facilitating use of DPDK
  • c66x CPUs are connected directly to the network. Received packets are filtered by UDP port and distributed to c66x cores at wire speed

 

HPC software model diagram

 

The host memory DMA capability is also used to share data between c66x CPUs, for example in an application such as H.265 (HEVC) encoding, where tens of cores must work concurrently on the same data set.
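As a rough illustration of the port-based packet distribution noted in the diagram notes above, the sketch below maps a UDP destination port onto one core in the pool. This is conceptual host-side C with an assumed pool size; in the actual system the filtering and distribution are performed by the c66x network path at wire speed.

  #include <stdint.h>
  #include <stdio.h>

  #define NUM_C66X_CORES 128   /* assumed pool size: two 64-core cards */

  /* Map a UDP destination port onto a core index. A simple modulo keeps each
   * port's stream pinned to one core, so per-stream state never migrates. */
  static int core_for_port(uint16_t udp_dst_port)
  {
      return udp_dst_port % NUM_C66X_CORES;
  }

  int main(void)
  {
      uint16_t ports[] = { 10240, 10241, 10242, 16384 };
      unsigned i;

      for (i = 0; i < sizeof(ports) / sizeof(ports[0]); i++)
          printf("UDP port %u -> core %d\n", (unsigned)ports[i], core_for_port(ports[i]));
      return 0;
  }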

Installing / Configuring VMs

Below is a screen capture showing VM configuration for c66x coCPU cards, using the Ubuntu Virtual Machine Manager (VMM) user interface:

VMM dialog showing VM configuration for c66x coCPU cards

c66x core allocation is transparent to the number of cards installed in the system; just like installing memory DIMMs of different sizes, c66x cards can be mixed and matched.

Application Demo Programs

Application test and demo programs are available and described in detail in the other wikis in the cloud and server HPC series.

 

