MCSDK HPC 3.0.0 Beta Getting Started Guide
HPC (High Performance Computing) Development Tools for MCSDK
Version 3.0.0 Beta
Getting Started Guide
Last updated: 07/02/2014
Contents
- 1 Introduction
- 2 Hardware Requirements
- 3 Software Installation on Ubuntu Desktop
- 4 EVM Setup
- 5 Software Installation on File System of the EVM
- 6 Running Out-of-Box Sample Applications
- 7 Recompiling MCSDK-HPC and Sample Applications
- 8 Writing a Custom Application on top of MCSDK-HPC
- 9 Useful Resources and Links
- 10 Troubleshooting
Introduction
The Multicore Software Development Kit (MCSDK) provides foundational software for TI KeyStone II platforms, encapsulating a collection of software elements and tools for both the A15 and the DSP. MCSDK-HPC (High Performance Computing), built as an add-on on top of the foundational MCSDK, provides HPC-specific software modules and algorithm libraries along with several out-of-box sample applications. As highlighted in the picture below, the two SDKs together provide a complete development environment [A15 + DSP] for offloading HPC applications to TI C66x multicore DSPs.
Listed below are the key components provided by MCSDK-HPC, with a brief description of each:
Category | Details |
OpenCL | OpenCL (Open Computing Language) is a multi-vendor open standard for general-purpose parallel programming of heterogeneous systems that include CPUs, DSPs and other processors. OpenCL is used to dispatch tasks from the A15 to the DSP cores. |
OpenMP on DSP | OpenMP is the de facto industry standard for shared memory parallel programming. Use OpenMP to achieve parallelism across DSP cores. |
OpenMPI | OpenMPI runs on the A15 cluster and allows multiple K2H nodes to communicate and collaborate. |
Specifically, this Getting Started Guide for MCSDK-HPC provides the information needed for running the out-of-box MCSDK-HPC sample applications, recompiling MCSDK-HPC, and developing a custom HPC application leveraging MCSDK-HPC. By the end of this Getting Started Guide the user should have:
- Installed pre-requisite software for MCSDK-HPC
- Installed MCSDK-HPC along with the pre-built sample applications
- Run the out-of-box MCSDK-HPC sample applications on TI KeyStone II devices
- Recompiled MCSDK-HPC if needed
- Obtained instructions on how to develop a custom HPC application leveraging MCSDK-HPC
Acronyms
The following acronyms are used throughout this wiki page.
Acronym | Meaning |
BLAS | Basic Linear Algebra Subprograms |
BOC | Breakout Cards |
CCS | Texas Instruments Code Composer Studio |
DSP | Digital Signal Processor |
EVM | Evaluation Module, hardware platform containing the Texas Instruments DSP |
FFT | Fast Fourier Transform |
HPC | High Performance Computing |
IPC | Inter-Processor Communication |
MCSDK | Texas Instruments Multi-Core Software Development Kit |
OpenCL | Open Computing Language |
OpenMP | Open Multi-Processing |
OpenMPI | Open Source Message Passing Interface |
TI | Texas Instruments |
.ipk | Debian software installation package |
Supported Devices/Platforms
This release supports the following Texas Instruments devices/platforms:
Platform | Supported Devices | Supported EVM |
[K2H] | TCI6636K2H | EVMK2H |
Hardware Requirements
As shown in the picture below, running the HPC demos requires the following physical components:
Physical Component | Details |
K2H EVM | One EVM is needed to run OpenCL, OpenMP sample applications. Two EVMs are needed for demonstrating OpenMPI cluster. |
Gigabit Ethernet Switch | To interconnect the components (EVMs with NFS, TFTP) and to demonstrate OpenMPI over Ethernet. |
PC running Ubuntu 12.04 LTS with NFS, TFTP | Ubuntu 12.04 LTS is available for free from the Ubuntu website. The Ubuntu PC is also used as the TFTP server and NFS server. |
PC for serial port connection (Can re-use Ubuntu PC) | To connect to K2H EVM Serial Port |
Hyperlink Break Out Card (Optional) | In order to run OpenMPI-Hyperlink demos between two K2H EVMs, RTM breakout cards (BOC) and a Hyperlink cable are needed. For each K2H EVM, connect the K2H EVM’s Zone 3 ADF CONN (30 pin) into the RTM breakout card’s Micro TCA 4 Plug. Then use the Hyperlink cable to connect the breakout cards’ Hyperlink IPASS HD connectors. |
8 GB DIMM (Optional) | The EVMs come with a 2GB DIMM. If the application needs larger memory (shared memory between the A15 and C66x), users can upgrade to an 8 GB DIMM. We support all JEDEC-compliant DDR3 Unbuffered SODIMMs up to 8GB, both single and double rank. We do not support registered SODIMMs. The 8GB memory we used in our testing is MT18KSF1G72HZ-1G6E2ZE. Timing and configuration parameters will need to be adjusted for the specific SODIMM chosen. |
Software Installation on Ubuntu Desktop
Follow the instructions below to install the MCSDK and MCSDK-HPC packages on the Ubuntu PC. Also install the TFTP server and NFS server on the same PC.
Install MCSDK
Please visit the MCSDK HPC download page for the download link of the MCSDK to be used with the HPC release. Click the MCSDK download link to open the MCSDK product download page, and then download the MCSDK installer for native Linux (mcsdk_native<version>_native_setuplinux.bin). Change the attribute of the installer to executable and run the installer as shown below.
Please note that if your Ubuntu OS is 64-bit and you have not run a 32-bit binary before, run this command first:
<syntaxhighlight lang="bash">
> sudo apt-get install ia32-libs
</syntaxhighlight>
<syntaxhighlight lang="bash">
> chmod +x mcsdk_native<version>_native_setuplinux.bin
> ./mcsdk_native<version>_native_setuplinux.bin
</syntaxhighlight>
Install MCSDK-HPC
Download the HPC Linux installer (mcsdk-hpc_<version>_setuplinux.bin) to the Ubuntu PC, change the attribute of the installer to executable and then run the installer as shown below.
<syntaxhighlight lang="bash">
> chmod +x mcsdk-hpc_<version>_setuplinux.bin
> ./mcsdk-hpc_<version>_setuplinux.bin
</syntaxhighlight>
Install TFTP server
First, install TFTP related packages:
<syntaxhighlight lang="bash">
sudo apt-get install xinetd tftpd tftp
</syntaxhighlight>
Then, create /etc/xinetd.d/tftp and put the following entry in the file
<syntaxhighlight lang="bash">
service tftp
{
    protocol    = udp
    port        = 69
    socket_type = dgram
    wait        = yes
    user        = nobody
    server      = /usr/sbin/in.tftpd
    server_args = /tftpboot
    disable     = no
}
</syntaxhighlight>
After that, create a folder /tftpboot in your root folder. This should match whatever you entered in server_args in /etc/xinetd.d/tftp. This folder is where a TFTP client will get files from; the TFTP client will not have access to any other folder. Then, change the mode and ownership of the folder:
<syntaxhighlight lang="bash">
$ sudo mkdir /tftpboot
$ sudo chmod -R 777 /tftpboot
$ sudo chown -R nobody /tftpboot
</syntaxhighlight>
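The directory setup above can be sanity-checked with a small shell helper along these lines (an illustration, not part of any TI package; it assumes GNU stat is available):

```shell
# Illustrative check: confirm a TFTP root directory exists and is
# world-writable, as set up by the chmod above.
check_tftp_root() {
  dir=$1
  [ -d "$dir" ] || { echo "missing: $dir"; return 1; }
  perms=$(stat -c '%a' "$dir")   # e.g. 777
  # The last permission digit covers "other" users; 2, 3, 6 and 7
  # include the write bit.
  case $perms in
    *2|*3|*6|*7) echo "$dir is world-writable" ;;
    *)           echo "$dir is NOT world-writable"; return 1 ;;
  esac
}
# Usage: check_tftp_root /tftpboot
```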
Last, restart the xinetd service
<syntaxhighlight lang="bash">
$ sudo /etc/init.d/xinetd restart
</syntaxhighlight>
or
<syntaxhighlight lang="bash">
$ sudo service xinetd restart
</syntaxhighlight>
Install NFS Server
First, install the NFS server package:
<syntaxhighlight lang="bash">
$ sudo apt-get install nfs-kernel-server
</syntaxhighlight>
Then, restart the nfs-kernel-server as follows:
<syntaxhighlight lang="bash">
$ sudo service nfs-kernel-server restart
</syntaxhighlight>
If there is a firewall, follow the steps below to install the NFS server instead. Skip these steps if there is no firewall.
First, install the NFS server package:
<syntaxhighlight lang="bash">
$ sudo apt-get install nfs-kernel-server
</syntaxhighlight>
Then, check if you have a firewall installed.
<syntaxhighlight lang="bash"> $ sudo ufw status </syntaxhighlight>
If the firewall is not active, enable the firewall.
<syntaxhighlight lang="bash"> $ sudo ufw enable </syntaxhighlight>
After that, bind the NFS service (mountd) to a static port number. Open /etc/default/nfs-kernel-server file, comment out the following line,
<syntaxhighlight lang="bash"> RPCMOUNTDOPTS=--manage-gids </syntaxhighlight>
and then add a new line:
<syntaxhighlight lang="bash"> RPCMOUNTDOPTS="-p 13100" </syntaxhighlight>
Note that 13100 is an arbitrarily selected port number. Most likely it is not used by your Ubuntu PC for any other service, but it is best to double-check. To do this, check the contents of the file /etc/services. If you see the number 13100 listed in this file, change the line in /etc/default/nfs-kernel-server to a port number that is not used.
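To automate that check, a small helper along the following lines can be used (illustrative only, not part of the release):

```shell
# Illustrative helper: succeed (exit 0) if the given port number is already
# registered for tcp or udp in a services database such as /etc/services.
port_in_use() {
  services_file=$1
  port=$2
  grep -Eq "[[:space:]]${port}/(tcp|udp)" "$services_file"
}

# Example: warn before reusing the port chosen above.
if port_in_use /etc/services 13100; then
  echo "Port 13100 is already listed; pick a different mountd port."
fi
```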
Restart the nfs-kernel-server as follows:
<syntaxhighlight lang="bash">
$ sudo service nfs-kernel-server restart
</syntaxhighlight>
Last, add an exception to the firewall to allow incoming requests on port 13100 (or whatever port number you set in the steps above). Please replace the variable <IP address of Ubuntu PC> in the command below with your PC's IP:
<syntaxhighlight lang="bash">
$ sudo ufw allow from <IP address of Ubuntu PC>/24 to any port 111
$ sudo ufw allow from <IP address of Ubuntu PC>/24 to any port 2049
$ sudo ufw allow from <IP address of Ubuntu PC>/24 to any port 13100
$ sudo ufw allow from <IP address of Ubuntu PC>/24 to any port 69
</syntaxhighlight>
The last "allow" rule is for the TFTP connection.
EVM Setup
Basic Setup of the EVM
Follow instructions at EVMK2H Hardware Setup to get the basic setup of the K2H EVM, including
- FTDI Driver Installation on PC Host
- BMC Version Check and Update (make sure BMC is up to date)
- UCD Power Management Modules In-Field Update (make sure UCD is up to date)
- Attach the Ethernet cable to ENET0 port
- Set the boot mode switch SW1 to SPI Little Endian Boot mode
- Attach the serial port cable to the SoC UART port
- Connect the power cable
Install Terminal Software on PC for Serial Port Connection
Install PuTTY on the PC for connecting to the serial port of the EVM.
If an Ubuntu Desktop is used for the serial port connection, follow the steps below to establish the serial connection to the board.
1. Connect your Ubuntu PC to the mini-USB on the main board (not the daughter card). Do not power up the board yet.
2. Typing dmesg | grep tty in a terminal on the host shows a list of the available serial connections. ttyUSB* (FTDI USB) should be the serial ports for the EVM. The lower number is the SoC (System on Chip) UART (/dev/ttyUSB0), while the higher number is the BMC (Boot Monitor Controller) UART (/dev/ttyUSB1).
3. Install picocom or another terminal emulator (such as PuTTY or minicom) if one is not already installed.
<syntaxhighlight lang="bash">
sudo apt-get install picocom
</syntaxhighlight>
Then, establish a serial connection to the USB0 port.
<syntaxhighlight lang="bash">
picocom -b 115200 /dev/ttyUSB0
</syntaxhighlight>
4. Power up the EVM. The terminal will start displaying the U-Boot text and should then continuously display "BOOTP BROADCAST #". Press Ctrl+C to get out of this loop and reach the # prompt, which is now in U-Boot.
Bring up Linux on EVM using an NFS filesystem
Follow the instructions below to bring up Linux on EVMK2H using an NFS filesystem.
1. Program or Upgrade Uboot Image
Before booting up Linux, program or update the U-Boot image to the latest version provided in the MCSDK release, following the instructions at MCSDK Programming SPI NOR flash with U-Boot GPH image. The U-Boot image "u-boot-spi-keystone-evm.gph" can be found in the [mcsdk_install_dir]/mcsdk_linux_3_00_0x_xx/images directory.
2. Copy Required Linux Kernel images
On the Ubuntu Desktop, copy the following images from [mcsdk_install_dir]/mcsdk_linux_3_00_0x_xx/images to the TFTP server directory:
<syntaxhighlight lang="bash">
skern-keystone-evm.bin   (This is the boot monitor)
uImage-k2hk-evm.dtb      (This is the device tree file)
uImage-keystone-evm.bin  (This is the Linux kernel)
</syntaxhighlight>
3. Unpack TI SDK Root File System
On the Ubuntu Desktop, create a directory which will contain the TI SDK Linux root file system, e.g., /evmk2h_nfs. Within that directory, unpack tisdk-rootfs.tar.gz from the [mcsdk_install_dir]/mcsdk_linux_3_00_0x_xx/images folder:
<syntaxhighlight lang="bash">
sudo tar xvf tisdk-rootfs.tar.gz -C /evmk2h_nfs   (-C takes the path to the folder you just created)
</syntaxhighlight>
Be careful where you unpack this root file system. Make sure that you do not unpack it in the root folder of the Ubuntu Desktop.
4. Edit NFS configuration for the Ubuntu Desktop
Edit the NFS configuration file /etc/exports to include the path to the network file system. Add the following line to the end of the file. Please replace '/evmk2h_nfs' in the line below with the directory path where you have unpacked the TI embedded device's filesystem:
<syntaxhighlight lang="bash"> /evmk2h_nfs *(rw,subtree_check,no_root_squash,no_all_squash,sync) </syntaxhighlight>
After this change, run “sudo service nfs-kernel-server restart” to restart the NFS server.
5. Set the Environment variables in Uboot of the EVM
Through the terminal, after stopping the autoboot process of the EVM, modify the u-boot of the target EVMs to set the environment variables.
<syntaxhighlight lang="bash">
env default -f -a
setenv boot net
setenv mem_reserve 1536M                  [A larger size can be used when using more than a 2GB DIMM]
setenv gatewayip 128.247.102.1            [This is the gateway IP of the subnet on which the host PC and the board are present]
setenv serverip xxx.xxx.xxx.xxx           [This is the IP of the host Linux machine]
setenv tftp_root /tftpboot                [Path to the TFTP server root on your host machine]
setenv name_fdt uImage-k2hk-evm.dtb       [Optional, as the default definition already matches it]
setenv name_kern uImage-keystone-evm.bin  [Optional, as the default definition already matches it]
setenv nfs_root /evmk2h_nfs               [Path to the NFS root on your host machine]
setenv nfs_serverip <ip address of NFS server>  [Set this if nfs_root is on a different computer than the TFTP server]
saveenv                                   [This saves the environment variables to flash]
</syntaxhighlight>
6. Power cycle the EVM
Power cycle the board so that the new version of U-Boot loads. You can stop the autoboot process to check your environment variables or the U-Boot version by typing version at the U-Boot command prompt if you’d like. If you have stopped the autoboot process, you can continue booting by typing boot at the command prompt.
<syntaxhighlight lang="bash">boot </syntaxhighlight>
Establish two K2H EVM nodes for OpenMPI Demos
The steps below establish two K2H EVM nodes on a trustworthy network so that they can communicate securely in OpenMPI applications. Skip these steps for non-OpenMPI demos.
1. Modify hostname in u-boot to use k2hnode1 on K2H EVM1 and k2hnode2 on K2H EVM2, and then reboot the EVMs, e.g.,
<syntaxhighlight lang="bash"> setenv hostname k2hnode1 </syntaxhighlight>
After the EVMs boot up, execute the steps below on the EVM from the command line.
2. Find IP addresses of the K2H EVM nodes during bootup or use ifconfig to get the IP addresses after bootup. Then, edit /etc/hosts to include correct hostname and IP address.
<syntaxhighlight lang="bash">
127.0.0.1 localhost.localdomain localhost
[ip address of K2HEVM node1] k2hnode1
[ip address of K2HEVM node2] k2hnode2
</syntaxhighlight>
3. Add mpiuser (user: mpiuser, password: gguser502).
<syntaxhighlight lang="bash"> adduser mpiuser </syntaxhighlight>
Alternatively, create the user with bash as the default shell.
<syntaxhighlight lang="bash"> adduser mpiuser -s /bin/bash </syntaxhighlight>
If both K2H EVM nodes are sharing a common filesystem, the above steps 2 and 3 can be executed on one of the K2H EVM nodes. Otherwise, execute steps 2 and 3 on each of the two K2H EVM nodes.
4. Do SSH between the two nodes.
Reboot both K2H EVM nodes, log in as mpiuser on both setups and then do SSH.
<syntaxhighlight lang="bash">
[k2hnode1] ssh mpiuser@k2hnode2, accept [w/ yes] then exit
[k2hnode2] ssh mpiuser@k2hnode1, accept [w/ yes] then exit
</syntaxhighlight>
After this step, file ~/.ssh/known_hosts (/home/mpiuser/.ssh/known_hosts) is properly set with information about the other node.
5. Set the password for root (user: root, password: gguser502) and then do SSH.
<syntaxhighlight lang="bash">
root@k2hnode1# passwd
[k2hnode1] ssh root@k2hnode2, accept [w/ yes] then exit
[k2hnode2] ssh root@k2hnode1, accept [w/ yes] then exit
</syntaxhighlight>
After this step, file ~/.ssh/known_hosts is updated with the root as user.
Software Installation on File System of the EVM
After setting up the file system of the EVM, install the following packages on the file system of the EVM.
Install TI CGTools for ARM on file system
On the Ubuntu machine, download the latest TI CGTools 7.6.x for ARM from the TI CGTools 7.6.x Download page.
Then, run the commands below to install TI CGTools for ARM on the file system of the EVM.
<syntaxhighlight lang="bash">
> cd /evmk2h_nfs
> mkdir -p ./opt/ti
> cp <path_to_installer>/ti_cgt_C6000_<VER>_armlinuxa8hf_installer.sh ./opt/ti
> cd ./opt/ti
> chmod +x ti_cgt_C6000_<VER>_armlinuxa8hf_installer.sh
> ./ti_cgt_C6000_<VER>_armlinuxa8hf_installer.sh
</syntaxhighlight>
Install HPC IPKs to update EVM filesystem with SDK binaries
The HPC release includes pre-compiled binaries which are packaged in *.ipk (Debian software installation package format). These binaries remove the requirement to build HPC from source. They can be installed natively on the target EVMs with the following steps.
- Copy IPK files to target filesystem
The IPK files are located under [mcsdk install dir]/mcsdk_hpc_<version>/images directory.
- Log on to the EVM, and install the IPKs using the 'opkg install' command, e.g.,
<syntaxhighlight lang="bash">
> opkg install libgomp1_<ver>-arago3_cortexa15hf-vfp-neon.ipk
> opkg install binutils_<ver>_cortexa15hf-vfp-neon.ipk
> opkg install opencl_<ver>_cortexa15hf-vfp-neon.ipk
> opkg install ti-openmpi_<ver>_cortexa15hf-vfp-neon.ipk
> opkg install libbz2-0_<ver>_armv7ahf-vfp-neon.ipk
> opkg install elfutils_<ver>_armv7ahf-vfp-neon.ipk
> opkg install openmpacc_<ver>_cortexa15hf-vfp-neon.ipk
> opkg install mcsdk-hpc-cfg_<ver>_cortexa15hf-vfp-neon.ipk --force-overwrite
> opkg install dropbear-mcsdk-hpc_<ver>_cortexa15hf-vfp-neon.ipk --force-overwrite
</syntaxhighlight>
In the above, the '--force-overwrite' option is required for the mcsdk-hpc-cfg package because it patches '/etc/init.d/hostname.sh' so that the hostname is obtained from the U-Boot environment variable. The dropbear-mcsdk-hpc package is required for passwordless SSH connections between EVMs that share a common root filesystem. Passwordless SSH connections are required for MPI over Ethernet.
Installing mcsdk-hpc-cfg_<VER>_armhf.ipk will create /etc/profile.d/mcsdk-hpc-profile.sh in the file system of the EVM (e.g., /evmk2h_nfs). In this file, modify TI_OCL_CGT_INSTALL as needed to point to the full installation path of TI CGtools (ARM version) on the file system of the EVM.
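For example, after the edit the line in mcsdk-hpc-profile.sh might look like the following. The path shown is an assumption for illustration; use the directory the ARM CGTools installer actually created on the EVM file system:

```shell
# Assumed example path -- substitute the directory the ARM CGTools
# installer actually created under /opt/ti on the EVM file system.
export TI_OCL_CGT_INSTALL=/opt/ti/ti_cgt_C6000_<VER>
```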
Install HPC Sample Applications
The HPC pre-compiled demos are provided in a tar file in the [mcsdk install dir]/mcsdk-hpc_<version>/images directory. Extract it to a directory within the user's home directory on the file system of the EVM.
<syntaxhighlight lang="bash">
> tar -xzvf demos.tar.gz
</syntaxhighlight>
Running Out-of-Box Sample Applications
MCSDK-HPC provides multiple categories of demo applications to demonstrate the OpenCL, OpenMP, OpenMPI, and OpenMP Accelerator Model runtimes. Please follow the README under each demo folder for instructions on running the demos. Reboot the EVMs and log in as root to run the demos.
Please refer to User Guides below for more information about the three categories of HPC demo applications.
Recompiling MCSDK-HPC and Sample Applications
In the current release of HPC, the software is cross-compiled, i.e., compiled on an Ubuntu Desktop to generate A15 binaries. These binaries are then copied to the NFS-mounted file system to be executed on the K2H EVMs.
Installing Software for Recompiling/Debugging HPC
In order to rebuild and/or debug HPC binaries and demos, install the following additional software on the Ubuntu Desktop.
- 1. Install Linaro Tool Chain
- Download and unzip Linaro tools from (https://launchpad.net/linaro-toolchain-binaries/trunk/2013.03/+download/gcc-linaro-arm-linux-gnueabihf-4.7-2013.03-20130313_linux.tar.bz2).
- 2. Install TI CGTools 7.6.x
- Download the latest TI CGTools 7.6.x for Desktop Linux from (https://www-a.ti.com/downloads/sds_support/TICodegenerationTools/beta.htm).
- Install Desktop Linux CGTools on development machine to /opt/ti directory of the Desktop
<syntaxhighlight lang="bash">
> chmod +x ti_cgt_C6000_<VER>_linux_installer_x86.bin
> ./ti_cgt_C6000_<VER>_linux_installer_x86.bin
</syntaxhighlight>
Steps to Recompile MCSDK-HPC
Follow the steps below to cross-compile HPC on a Ubuntu Desktop.
1. Install QEMU for Desktop emulation of the ARM executable (MPICC); opal_wrapper only dispatches work to the Linaro x86-hosted ARM cross-compiler.
<syntaxhighlight lang="bash">sudo apt-get install qemu qemu-user-static binfmt-support mesa-common-dev binutils-dev flex u-boot-tools</syntaxhighlight>
2. Modify variables in setup_hpc_env.sh as needed to point to the install locations of MCSDK 3.0.x, MCSDK-HPC, TI CGTools, etc. on the Desktop Linux PC.
<syntaxhighlight lang="bash">
TI_INSTALL_DIR=~/ti:/opt/ti
LINARO_INSTALL_PATH=~/linaro:~/ti/linaro
TI_CGT_INSTALL_PATH=~/ti:/opt/ti
MCSDK_INSTALL_PATH=~/ti
MCSDK_HPC_INSTALL_PATH=~/ti
...
TI_SEARCH_PATH=~/ti:/opt:/opt/ti:/usr/src:/usr/src/dsp
</syntaxhighlight>
If needed, modify the following line in MCSDK-HPC-BOM.txt to point to the installation folder name of TI CGTools. Please note that the version number may also need to be changed if a later version of TI CGTools 7.6.x is downloaded and installed.
<syntaxhighlight lang="bash">
all:C6X_GEN:c6000_7.6.0B1
</syntaxhighlight>
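The edit can also be scripted. The helper below is a hypothetical convenience, not part of the release; pass in the CGTools version you actually installed:

```shell
# Hypothetical one-step edit for the BOM line above: rewrite the C6X_GEN
# version string in a BOM file. The version argument is an example only.
update_cgt_version() {
  bom_file=$1
  new_ver=$2
  sed -i "s/c6000_[0-9A-Za-z.]*/c6000_${new_ver}/" "$bom_file"
}
# Usage: update_cgt_version MCSDK-HPC-BOM.txt 7.6.2
```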
Also modify TI_OCL_CGT_INSTALL as needed in <mcsdk-hpc dir>/patches/targetfs/mcsdk-hpc-profile.sh to point to the full installation path of TI CGtools on the Desktop Linux PC.
By default, the target root directory is set to /evmk2h_nfs in setup_hpc_env.sh. Modify it if a different target root directory is used.
<syntaxhighlight lang="bash">
TARGET_ROOTDIR=/evmk2h_nfs
</syntaxhighlight>
3. Set environment variables using script in top folder.
<syntaxhighlight lang="bash">
source setup_hpc_env.sh
</syntaxhighlight>
Running the above command checks whether all the required components can be located. It reports an error when components are missing. Ensure all the components are installed and the installation directories have been correctly set in setup_hpc_env.sh.
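The kind of verification the script performs can be sketched as follows (assumed behavior for illustration, not the script's actual code):

```shell
# Sketch of a per-component location check: print where a component was
# found, or report an error and fail if its directory does not exist.
check_component() {
  name=$1
  dir=$2
  if [ -d "$dir" ]; then
    echo "found $name at $dir"
  else
    echo "ERROR: $name not found at $dir" >&2
    return 1
  fi
}
# Usage: check_component "TI CGTools" /opt/ti/c6000_7.6.0B1
```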
4. Install linux devkit.
<syntaxhighlight lang="bash"> ./install_devkit.sh </syntaxhighlight>
5. Configure openmpi, cmem, and dropbear.
<syntaxhighlight lang="bash"> make config </syntaxhighlight>
6. Compile sdk, openmpi, and dropbear.
<syntaxhighlight lang="bash"> make </syntaxhighlight>
7. Install openmpi, mpm-transport, opencl, and dropbear.
<syntaxhighlight lang="bash"> sudo make install </syntaxhighlight>
8. Configure demos
<syntaxhighlight lang="bash"> make config_demos </syntaxhighlight>
9. Compile demos
<syntaxhighlight lang="bash"> make demos </syntaxhighlight>
10. Install demos to a specified INSTALL_DIR, e.g.,
<syntaxhighlight lang="bash"> sudo make install_demos INSTALL_DIR=/home/mpiuser </syntaxhighlight>
Writing a Custom Application on top of MCSDK-HPC
The out-of-box demos provided by MCSDK-HPC can help users write their own HPC applications. Please refer to the links below for code walk-throughs of OpenCL, OpenMP, and OpenMPI applications.
Useful Resources and Links
Product Download and Updates
For product download and updates, please visit the links listed in the table below.
Product Download Link | |
MCSDK HPC Download | http://software-dl.ti.com/sdoemb/sdoemb_public_sw/mcsdk_hpc/latest/index_FDS.html |
MCSDK Download | http://software-dl.ti.com/sdoemb/sdoemb_public_sw/mcsdk/3_00_04_17/index_FDS.html |
TI CGTools Download | https://www-a.ti.com/downloads/sds_support/TICodegenerationTools/beta.htm |
Technical Support
For technical discussions and issues, please visit the links listed in the table below.
Forum/Wiki Link | |
MCSDK HPC forum | http://e2e.ti.com/support/applications/high-performance-computing/f/952.aspx |
C66x Multicore forum | http://e2e.ti.com/support/dsp/c6000_multi-core_dsps/f/639.aspx |
TI-RTOS forum | http://e2e.ti.com/support/embedded/f/355.aspx |
Code Composer Studio forum | http://e2e.ti.com/support/development_tools/code_composer_studio/f/81/t/3131.aspx |
TI C/C++ Compiler forum | http://e2e.ti.com/support/development_tools/compiler/f/343/t/34317.aspx |
Embedded Processors wiki | http://processors.wiki.ti.com |
Note: When asking for help in the forum you should tag your posts in the Subject with “MCSDK HPC”, the part number (e.g. “TCI6636K2H”) and additionally the component (e.g. “FFT”).
Troubleshooting
Listed below are some Frequently Asked Questions.
- Can the IPKs be installed from the X86 development machine?
Yes. However, it is recommended to install them natively on the EVM since the opkg utility can maintain package versions and a list of the files installed by each package.
To install the IPKs from the Ubuntu development machine, the dpkg command may be used as follows:
<syntaxhighlight lang="bash">
dpkg -x <ipk_file> <evm_filesystem_root>
</syntaxhighlight>
For example:
<syntaxhighlight lang="bash">
dpkg -x ti-openmpi_1.0.0.6_cortexa15hf-vfp-neon.ipk /evmk2h_nfs
</syntaxhighlight>
For more questions and answers, please visit the MCSDK-HPC Troubleshooting Wiki.