Contents
- 1 Running Examples
- 1.1 Running OMAP DRM DSS Examples
- 1.2 Graphics Demos from Command Line
- 1.3 Using the PowerVR Tools
- 1.4 Testing DSS WB pipeline
- 1.5 Running aplay and arecord application
- 1.6 Running GC320 application
- 1.7 Running viddec3test application
- 1.8 Running a gstreamer pipeline
- 1.9 Running VIP/VPE/CAL application
- 1.10 Running DSS application
- 1.11 Gsttestplayer
- 1.12 Running IPC examples
- 1.13 Running basic Wifi tests
- 1.14 Running basic Bluetooth tests
- 1.15 How to bring up the GNSS driver and sample application
Running Examples
Running OMAP DRM DSS Examples
The drmclone, drmextended, and modetest examples demonstrate how to create a CRTC (i.e. FB) and display planes (overlays) on the CRTC. Additionally, drmtest demonstrates similar functionality to the previously mentioned demos, along with dynamic plane updates for 2 CRTCs.
Retrieve the omapdrm-tests source:
git clone https://github.com/tomba/omapdrm-tests.git
cd omapdrm-tests
Run an example (for example, planescale):
./planescale
Graphics Demos from Command Line
The graphics driver and userspace libraries and binaries are distributed along with the SDK.
Graphics demos can also be run from the command line. To do so, exit Weston by pressing Ctrl-Alt-Backspace on the keyboard connected to the EVM. Then, if the LCD screen stays at "Please wait...", press Ctrl-Alt-F1 to get to the command line on the LCD console. After that, the command line can be used from the serial console, SSH console, or LCD console.
Please make sure the board is connected to at least one display before running these demos.
Finding Connector ID
Note: Most of the applications used in these demos require the user to pass a connector ID. A connector ID is a number assigned to each display device connected to the system. To get the list of connected display devices and their corresponding connector IDs, use the modetest application (shipped with the filesystem) as shown below:
target # modetest
Look for the display device for which the connector ID is required, such as HDMI or LCD.
Connectors:
id      encoder status          type    size (mm)       modes   encoders
4       3       connected       HDMI-A  480x270         20      3
  modes:
        name refresh (Hz) hdisp hss hse htot vdisp vss vse vtot)
  1920x1080 60 1920 2008 2052 2200 1080 1084 1089 1125
        flags: phsync, pvsync; type: preferred, driver
...
16      15      connected       unknown 0x0             1       15
  modes:
        name refresh (Hz) hdisp hss hse htot vdisp vss vse vtot)
  800x480 60 800 1010 1040 1056 480 502 515 525
        flags: nhsync, nvsync; type: preferred, driver
In this example, the LCD is assigned connector ID 16 (800x480) and HDMI is assigned 4 (multiple resolutions).
On some platforms the assignment is reversed, as in the following example:
Connectors:
id      encoder status          type    size (mm)       modes   encoders
4       3       connected       unknown 0x0             1       3
  modes:
        name refresh (Hz) hdisp hss hse htot vdisp vss vse vtot)
  1280x800 60 1280 1328 1360 1404 800 804 811 823
        flags: nhsync, nvsync; type: preferred, driver
...
16      11      connected       HDMI-A  700x390         31      11
  modes:
        name refresh (Hz) hdisp hss hse htot vdisp vss vse vtot)
  1280x720 60 1280 1390 1430 1650 720 725 730 750
        flags: phsync, pvsync; type: preferred, driver
Here, the LCD is assigned connector ID 4 and HDMI is assigned 16; the assignment varies by platform, so always check the modetest output.
Finding Plane ID
To find the Plane ID, run the modetest command:
target # modetest
Look for the section called Planes. (Sample truncated output of the Planes section is given below)
Planes:
id      crtc    fb      CRTC x,y        x,y     gamma size
19      0       0       0,0             0,0     0
  formats: RG16 RX12 XR12 RA12 AR12 XR15 AR15 RG24 RX24 XR24 RA24 AR24 NV12 YUYV UYVY
  props:
  ...
20      0       0       0,0             0,0     0
  formats: RG16 RX12 XR12 RA12 AR12 XR15 AR15 RG24 RX24 XR24 RA24 AR24 NV12 YUYV UYVY
  props:
  ...
kmscube
Run kmscube on the default display (HDMI or LCD, depending on the platform):
target # kmscube
Run kmscube on a secondary display by passing its connector ID:
target # kmscube -c <connector-id>
target # kmscube -c 16 #Usually, the secondary display (HDMI or LCD, depending on the platform) is connector 16.
Run kmscube on all connected displays (LCD, HDMI, and FPDLink if present):
target # kmscube -a
kmscube with video
This demo allows a video frame to be applied as a texture onto the surface of the kmscube. The user can invoke the demo by following the syntax below:
target # viddec3test <path_to_the_file> --kmscube --connector <connector_number>
This feature is not supported on OMAP5 based releases.
Run kmscube with video on the default display:
target # viddec3test <path_to_the_file> --kmscube
Run kmscube with video on a secondary display:
target # viddec3test <path_to_the_file> --kmscube --connector 16 #Usually, the connector id for the secondary display (HDMI or LCD, depending on the platform) is 16.
Additionally, the field of view of the rotating cube can be changed on the command line as shown below:
target # viddec3test <path_to_the_file> --kmscube --connector <connector_number> --fov <number>
Wayland/Weston
Wayland/Weston brings multiple-display support in extended desktop mode and the ability to drag and drop windows from one display to the other.
To execute the demos, the graphics driver must be initialized by starting Weston, if this has not been done earlier.
target # /etc/init.d/weston start
To launch weston without using systemd init scripts, do the following:
On all connected displays (LCD, HDMI and FPDLink):
target # weston --tty=1 --backend=drm-backend.so
On default display (HDMI):
target # weston --tty=1 --connector=4
On secondary display (LCD):
target # weston --tty=1 --connector=16
On all connected displays (LCD and HDMI):
target # weston --tty=1
By default, the screensaver timeout is configured to 300 seconds.
The user can change the screensaver timeout using a command-line option:
--idle-time=<number of seconds>
To disable the screensaver timeout, use the option with "0" as the input:
--idle-time=0
The filesystem comes with a preconfigured weston.ini file located at /etc/weston.ini.
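The idle timeout can also be set persistently in weston.ini instead of on the command line. A minimal sketch, assuming the option is placed in the [core] section of /etc/weston.ini:
[core]
# 0 disables the screensaver timeout; any other value is the timeout in seconds
idle-time=0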
Running weston clients
Weston client examples can be run from the command line on the serial port console or SSH console.
After launching Weston, the user should be able to use the keyboard and the mouse for various controls.
# /usr/bin/weston-flower
# /usr/bin/weston-clickdot
# /usr/bin/weston-cliptest
# /usr/bin/weston-dnd
# /usr/bin/weston-editor
# /usr/bin/weston-eventdemo
# /usr/bin/weston-image /usr/share/weston/terminal.png
# /usr/bin/weston-resizor
# /usr/bin/weston-simple-egl
# /usr/bin/weston-simple-shm
# /usr/bin/weston-simple-touch
# /usr/bin/weston-smoke
# /usr/bin/weston-info
# /usr/bin/weston-terminal
There is one icon on the top right-hand corner of the Weston desktop window, which has been configured for
- weston-terminal
Clicking this icon should launch the application on the Weston desktop.
It is possible to add other icons by editing the weston.ini file.
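For example, a launcher icon entry generally follows the pattern below (a sketch; the icon path is a placeholder and the application is one of the clients listed below):
[launcher]
icon=/usr/share/weston/icon_flower.png
path=/usr/bin/weston-flower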
There are several other applications that are included in the default filesystem. To invoke these applications, the user should launch the weston-terminal (top right hand corner of the desktop) and then invoke the client apps as described below from within the terminal window:
wayland sh # /usr/bin/weston-flower
wayland sh # /usr/bin/weston-clickdot
wayland sh # /usr/bin/weston-cliptest
wayland sh # /usr/bin/weston-dnd
wayland sh # /usr/bin/weston-editor
wayland sh # /usr/bin/weston-eventdemo
wayland sh # /usr/bin/weston-image /usr/share/weston/terminal.png
wayland sh # /usr/bin/weston-resizor
wayland sh # /usr/bin/weston-simple-egl
wayland sh # /usr/bin/weston-simple-shm
wayland sh # /usr/bin/weston-simple-touch
wayland sh # /usr/bin/weston-smoke
wayland sh # /usr/bin/weston-info
wayland sh # /usr/bin/weston-terminal
Running multimedia with Wayland sink
The GStreamer video sink for Wayland is the waylandsink. To use this video-sink for video playback:
target # gst-launch-1.0 playbin uri=file://<path-to-file-name> video-sink=waylandsink
Exiting weston
Terminate all Weston clients before exiting Weston. If you have invoked Weston from the serial console, exit Weston by pressing Ctrl-C.
If Weston was invoked from the native console, exit it by pressing Ctrl-Alt-Backspace.
Using IVI shell feature
The SDK also has support for configuring the Weston ivi-shell. The default shell configured in the SDK is the desktop-shell.
To change the shell to ivi-shell, add the following lines to /etc/weston.ini.
To switch back to the desktop-shell, comment out these lines in /etc/weston.ini (comments begin with a '#' at the start of the line).
[core]
shell=ivi-shell.so

[ivi-shell]
ivi-module=ivi-controller.so
ivi-input-module=ivi-input-controller.so
After the above configuration is completed, restart Weston by running the following commands:
target# /etc/init.d/weston stop
target# /etc/init.d/weston start
NOTE: When Weston starts with ivi-shell, the default background is black; this is different from the desktop-shell, which brings up a window with a background.
With ivi-shell configured for Weston, Wayland client applications use the ivi-application protocol to be managed by a central HMI window manager. The wayland-ivi-extension provides ivi-controller.so to manage properties of surfaces/layers/screens, and it also provides ivi-input-controller.so to manage the input focus on a surface.
Applications must support the ivi-application protocol to be managed by the central HMI controller, using a unique numeric ID.
Some important references to the wayland-ivi-extension can be found at the following links:
https://at.projects.genivi.org/wiki/display/WIE/01.+Quick+start
https://at.projects.genivi.org/wiki/display/PROJ/Wayland+IVI+Extension+Design
Running weston's sample client applications with ivi-shell
All the sample client applications in the weston package, such as weston-simple-egl, weston-simple-shm, and weston-flower, also support ivi-shell. The SDK includes an application called layer_add_surfaces, which is part of the wayland-ivi-extension. This application allows the user to invoke the various functionalities of the ivi-shell and control the applications.
The following is an example sequence of commands and the corresponding effect on the target.
After launching Weston with the ivi-shell, run the following sequence of commands:
target# weston-simple-shm &
At this point nothing is displayed on the screen; some additional commands are required.
target# layer_add_surfaces 0 1000 2 &
This command creates a layer with ID 1000 on screen 0 (which is usually the LCD) and adds a maximum of 2 surfaces to this layer.
At this point, the user can see weston-simple-shm running on the LCD. The application also prints the numeric ID (surface ID) to which the client's surface is mapped, as shown below:
CreateWithDimension: layer ID (1000), Width (1280), Height (800)
SetVisibility      : layer ID (1000), ILM_TRUE
layer: 1000 created
surface            : 10369 created
SetDestinationRectangle: surface ID (10369), Width (250), Height (250)
SetSourceRectangle : surface ID (10369), Width (250), Height (250)
SetVisibility      : surface ID (10369), ILM_TRUE
layerAddSurface    : surface ID (10369) is added to layer ID (1000)
Here 10369 is the number to which weston-simple-shm application’s surface is mapped.
The user can launch one more client application, which allows layer_add_surfaces to add a second surface to layer 1000, as shown below.
target# weston-flower &
The user can control the properties of the above surfaces using LayerManagerControl, as shown below, to set the position, size, opacity, and visibility respectively.
target# LayerManagerControl set surface 10369 position 100 100
target# LayerManagerControl set surface 10369 destination region 150 150 300 300
target# LayerManagerControl set surface 10369 opacity 0.5
target# LayerManagerControl set surface 10369 visibility 1
target# LayerManagerControl help
The help option prints all possible control operations of the LayerManagerControl binary; refer to it for the available options.
IMG PowerVR Demos
The Processor SDK Linux Automotive filesystem comes packaged with example OpenGLES applications. Both DRM and Wayland based applications are packaged as part of the filesystem.
The examples running on Wayland can be invoked using the below commands.
target # /usr/bin/SGX/demos/Wayland/OGLES2ChameleonMan
target # /usr/bin/SGX/demos/Wayland/OGLES2Navigation
The examples running on DRM/KMS can be invoked using the below commands.
target # /usr/bin/SGX/demos/Raw/OGLES2ChameleonMan
target # /usr/bin/SGX/demos/Raw/OGLES2Navigation
After you see the output on the display interface, hit q to terminate the application.
Using the PowerVR Tools
Please refer to http://community.imgtec.com/developers/powervr/graphics-sdk/ for additional details on the tools and detailed documentation.
The target filesystem includes tools such as the PVRScope and PVRTrace recorder libraries from the Imagination PowerVR SDK to profile and trace SGX activities. In addition, it also includes the PVRPerfServerDeveloper tool for the Jacinto 6 platform.
PVRTune
The PVRPerfServerDeveloper tool can be used along with PVRTune running on the PC to gather data on the SGX loading and activity threads. You can invoke the tool with the command below:
target # /opt/img-powervr-sdk/PVRHub/PVRPerfServer/PVRPerfServerDeveloper
PVRTrace
The default filesystem contains helper scripts to obtain the PVRTrace of the graphics application. This trace can then be played back on the PC using the PVRTrace Utility.
To start tracing, use the below commands as reference:
target # cp /opt/img-powervr-sdk/PVRHub/Scripts/start_tracing.sh ~/.
target # ./start_tracing.sh <log-filename> <application-to-be-traced>
Example:
target # ./start_tracing.sh westonapp weston-simple-egl
The above command will do the following:
- Set up the required environment for tracing
- Create a directory under the current working directory called pvrtrace
- Launch the application specified by the user
- Start tracing the PVR interactions and record them to the specified <log-filename>
To end the tracing, the user can press Ctrl-C; the trace file path will then be displayed.
The trace file can then be transferred to a PC, where the application can be visualized using the host-side PVRTrace utility. Please refer to the link at the beginning of this section for more details.
Testing DSS WB pipeline
Memory to Memory (M2M)
Identify the WB pipeline M2M device.
# ls /sys/class/video4linux/
video0  video10  video11
# cat /sys/class/video4linux/video10/name
omapwb-m2m
Look at the list of supported formats:
# v4l2-ctl -d /dev/video10 --list-formats
ioctl: VIDIOC_ENUM_FMT
        Index       : 0
        Type        : Video Capture Multiplanar
        Pixel Format: 'NV12'
        Name        : Y/CbCr 4:2:0

        Index       : 1
        Type        : Video Capture Multiplanar
        Pixel Format: 'YUYV'
        Name        : YUYV 4:2:2

        Index       : 2
        Type        : Video Capture Multiplanar
        Pixel Format: 'UYVY'
        Name        : UYVY 4:2:2

        Index       : 3
        Type        : Video Capture Multiplanar
        Pixel Format: 'XR24'
        Name        : 32-bit BGRX 8-8-8-8
Use the v4l2-ctl command to test the input and output. The command below converts from NV12 to YUYV using the WB pipeline in M2M mode.
# v4l2-ctl -d /dev/video10 --set-fmt-video-out=width=1920,height=1080,pixelformat=NV12 \
  --stream-from=test/BigBuckBunny_1920_1080_24fps_100frames.nv12 \
  --set-fmt-video=width=1920,height=1080,pixelformat=YUYV \
  --stream-to=out/video_test_file.yuyv --stream-mmap=3 --stream-out-mmap=3 --stream-count=70 --stream-poll
Capture
Identify the WB pipeline capture device.
# ls /sys/class/video4linux/
video0  video10  video11
# cat /sys/class/video4linux/video11/name
omapwb-cap
Look at the list of supported formats:
# v4l2-ctl -d /dev/video11 --list-formats
ioctl: VIDIOC_ENUM_FMT
        Index       : 0
        Type        : Video Capture Multiplanar
        Pixel Format: 'NV12'
        Name        : Y/CbCr 4:2:0

        Index       : 1
        Type        : Video Capture Multiplanar
        Pixel Format: 'YUYV'
        Name        : YUYV 4:2:2

        Index       : 2
        Type        : Video Capture Multiplanar
        Pixel Format: 'UYVY'
        Name        : UYVY 4:2:2

        Index       : 3
        Type        : Video Capture Multiplanar
        Pixel Format: 'XR24'
        Name        : 32-bit BGRX 8-8-8-8
Use the v4l2-ctl command to test the capture. The commands below capture the display output (CRTC#0 - LCD1, or CRTC#1 - DIGIT/TV) to a file in NV12 format using the WB pipeline in capture mode.
# v4l2-ctl -d /dev/video11 -i 0 --set-fmt-video=pixelformat=NV12 \
  --stream-to=/test/video_test_file.yuv --stream-mmap=6 --stream-count=10 --stream-poll
Video input set to 0 (CRTC#0 - LCD1: ok)
<<<<<<<<< 7.84 fps
# v4l2-ctl -d /dev/video11 -i 1 --set-fmt-video=pixelformat=NV12 \
  --stream-to=/test/video_test_file.yuv --stream-mmap=6 --stream-count=10 --stream-poll
Video input set to 1 (CRTC#1 - DIGIT/TV: ok)
<<<<<<<<<< 8.65 fps
Running aplay and arecord application
Audio playback is supported on HDMI and via the headset. By default, audio playback takes place on HDMI. To play audio via HDMI, run the aplay application:
target # aplay <path_to_example_audio>.wav
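For reference, the available sound cards and their indices can be listed first; this is the same check referenced in the headset instructions below:
target # cat /proc/asound/cards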
If playback is required via the headset, please make sure that the following amixer settings are applied for the corresponding card (check the card number by running cat /proc/asound/cards; card 1 is assumed to be the headset here):
target # amixer cset -c 1 name='Headset Left Playback' 1
target # amixer cset -c 1 name='Headset Right Playback' 1
target # amixer cset -c 1 name='Headset Playback Volume' 12
target # amixer cset -c 1 name='DL1 PDM Switch' 1
target # amixer cset -c 1 name='Sidetone Mixer Playback' 1
target # amixer cset -c 1 name='SDT DL Volume' 120
target # amixer cset -c 1 name='DL1 Mixer Multimedia' 1
target # amixer cset -c 1 name='DL1 Media Playback Volume' 110
target # amixer sset -c 1 'Analog Left',0 'Aux/FM Left'
target # amixer sset -c 1 'Analog Right',0 'Aux/FM Right'
target # amixer sset -c 1 'Aux FM',0 7
target # amixer sset -c 1 'AUDUL Media',0 149
target # amixer sset -c 1 'Capture',0 4
target # amixer sset -c 1 MUX_UL00,0 AMic0
target # amixer sset -c 1 MUX_UL01,0 AMic1
target # amixer sset -c 1 'AMIC UL',0 120
Once these settings are done, playback via the headset can be done using aplay with the following command:
target # aplay -Dplughw:1,0 <path_to_example_audio>.wav
To play back or record on the EVM via the headset/mic, please make sure the following amixer settings are applied:
- For playback via headset, enter the following at prompt target#
amixer sset 'Left DAC Mux',0 'DAC_L2'
amixer sset 'Right DAC Mux',0 'DAC_R2'
amixer cset name='HP Playback Switch' On
amixer cset name='Line Playback Switch' Off
amixer cset name='PCM Playback Volume' 127
Once these settings are successful, use aplay application for playback:
target # aplay <path_to_example_audio>.wav
- For recording via Mic In
amixer cset name='Left PGA Mixer Mic3L Switch' On
amixer cset name='Right PGA Mixer Mic3L Switch' On
amixer cset name='Left PGA Mixer Line1L Switch' off
amixer cset name='Right PGA Mixer Line1R Switch' off
amixer cset name='PGA Capture Switch' on
amixer cset name='PGA Capture Volume' 6
Once these settings are successful, use arecord to record:
target # arecord -r 44100 -f S16_LE <path_to_example_audio>.wav
To play back or record on the EVM via Line Out/Line In, please make sure the following amixer settings are applied:
- For playback via Line Out, enter the following at prompt target#
amixer cset name='Line Playback Switch' On
amixer cset name='PCM Playback Volume' 127
Once these settings are successful, use aplay application for playback, e.g.,
target # aplay <path_to_example_audio>.wav
- For recording via Line In
amixer cset name='Left PGA Mixer Mic2L Switch' On
amixer cset name='Right PGA Mixer Mic2L Switch' On
amixer cset name='Left PGA Mixer Line1L Switch' On
amixer cset name='Right PGA Mixer Line1R Switch' On
amixer cset name='PGA Capture Switch' On
amixer cset name='PGA Capture Volume' 50
Once these settings are successful, use arecord to record, e.g.,
target # arecord -r 44100 -c 2 -f S16_LE <path_to_example_audio>.wav
Running GC320 application
GC320 is a 2D graphics accelerator in DRA7xx. This IP can be used for use cases such as alpha blending, overlay, bit blit, color conversion, scaling, and rotation.
The SDK provides two sample GC320 test cases in the root filesystem. Before running the test, the GC320 kernel module needs to be inserted into the system.
On 1.5GB RAM configuration
target# insmod /lib/modules/4.4.xx-gyyyyyyyy/extra/galcore.ko baseAddress=0x80000000 physSize=0x60000000
On 2GB RAM configuration
target# insmod /lib/modules/4.4.xx-gyyyyyyyy/extra/galcore.ko baseAddress=0x80000000 physSize=0x80000000
Now follow these instructions to execute the applications:
target# cd /usr/bin/GC320/tests/unit_test
target# export LD_LIBRARY_PATH=$PWD
target# ./runtest.sh
This script executes two sample unit test cases of filling rectangles; the GC320-rendered results will be stored as .bmp files in a directory named "result" under /usr/bin/GC320/tests/unit_test.
Note: To run all GC320 unit test cases, clone the ti-gc320-test package from git://git.ti.com/graphics/ti-gc320-test.git (branch ti-5.0.11.p7), rebuild the test application and libraries, and install the package on the target.
Running viddec3test application
viddec3test is a demo application for decode/video playback using hardware accelerators. The application currently renders to the KMS display and requires the connector information for the display. One can get the information about the display connected to the board by running the modetest application in the filesystem, as described above. Before running modetest, make sure a display is connected to the board.
Running a decode on a display
To run a hardware decode on a display connected to the board, execute the following command:
target # viddec3test -s <connector_id>:<display resolution> filename --fps 30
e.g.:
target # viddec3test -s 4:1920x1080 file.h264 --fps 30
Running single decode on dual displays
To run the output of a single decode on two displays, make sure both displays are connected, and get the connector IDs and resolutions for both displays from the modetest application.
target # viddec3test -s <connector_id_1>:<display resolution> -s <connector_id_2>:<display resolution> filename --fps 30
e.g.:
target # viddec3test -s 4:1920x1080 -s 12:1024x768 file.h264 --fps 30
Running dual decode on dual displays
One can also run a dual decode and display the outputs on two different displays. Make sure both displays are connected, and get the connector IDs and resolutions for both displays from the modetest application.
target # viddec3test -s <connector_id_1>:<display resolution> filename1 -s <connector_id_2>:<display resolution> filename2
e.g.:
target # viddec3test -s 4:1920x1080 file1.h264 -- -s 12:1024x768 file2.h264
Running a gstreamer pipeline
GStreamer v1.14.4 is supported in Processor SDK Linux Automotive 6.00.
GStreamer pipelines can also be run from the command line. To do so, exit Weston by pressing Ctrl-Alt-Backspace on the keyboard connected to the EVM. Then, if the LCD screen stays at "Please wait...", press Ctrl-Alt-F1 to get to the command line on the LCD console. After that, the command line can be used from the serial console, SSH console, or LCD console.
One can play an audio/video file using the GStreamer playbin from the console. Currently, the supported video sinks are kmssink and waylandsink, and the supported audio sink is alsasink.
kmssink:
target # gst-launch-1.0 playbin uri=file:///<path_to_file> video-sink=kmssink audio-sink=alsasink
waylandsink:
1. Refer to the Wayland/Weston section above to start Weston.
2. target # gst-launch-1.0 playbin uri=file:///<path_to_file> video-sink=waylandsink audio-sink=alsasink
The following pipelines show how to use vpe for scaling and color space conversion.
1. Decode -> Scale -> Display
target # gst-launch-1.0 -v filesrc location=example_h264.mp4 ! qtdemux ! h264parse ! \
  ducatih264dec ! vpe ! 'video/x-raw, format=(string)NV12, width=(int)720, height=(int)480' ! kmssink
2. Color space conversion:
target # gst-launch-1.0 -v videotestsrc ! 'video/x-raw, format=(string)YUY2, width=(int)1280, height=(int)720' ! \
  vpe ! 'video/x-raw, format=(string)NV12, width=(int)720, height=(int)480' ! kmssink
Note:
1. When playbin is used to play a stream, the vpe plugin is picked up automatically. However, vpe cannot be used with playbin for scaling; to use the scaling capabilities of vpe, the manual pipeline given above is recommended.
2. waylandsink and kmssink use the cropping metadata set on buffers and do not require the vpe plugin for cropping.
The following pipelines show how to use v4l2src and ducatimpeg4enc elements to capture video from VIP and encode captured video respectively.
Capture and Display Fullscreen
target # gst-launch-1.0 v4l2src device=/dev/video1 num-buffers=1000 io-mode=4 ! 'video/x-raw, \
  format=(string)YUY2, width=(int)1280, height=(int)720' ! vpe num-input-buffers=8 ! queue ! kmssink
Note: The following pipelines can also be used for the NV12 capture-display use case. The dmabuf is allocated by v4l2src if io-mode=4, and allocated by kmssink and imported by v4l2src if io-mode=5.
target # gst-launch-1.0 v4l2src device=/dev/video1 num-buffers=1000 io-mode=4 ! 'video/x-raw, \
  format=(string)NV12, width=(int)1280, height=(int)720' ! kmssink
target # gst-launch-1.0 v4l2src device=/dev/video1 num-buffers=1000 io-mode=5 ! 'video/x-raw, \
  format=(string)NV12, width=(int)1280, height=(int)720' ! kmssink
Capture and Display to a window in Wayland
1. Refer to the Wayland/Weston section above to start Weston.
2. target # gst-launch-1.0 v4l2src device=/dev/video1 num-buffers=1000 io-mode=4 ! 'video/x-raw, \
     format=(string)YUY2, width=(int)1280, height=(int)720' ! vpe num-input-buffers=8 ! queue ! waylandsink
Note: The following pipelines can also be used for the NV12 capture-display use case. The dmabuf is allocated by v4l2src if io-mode=4, and allocated by waylandsink and imported by v4l2src if io-mode=5. waylandsink supports both shm and drm; a new property use-drm specifies that a drm-allocator-based bufferpool be used. When using the ducati or vpe plugins, use-drm is set in caps as true.
target # gst-launch-1.0 v4l2src device=/dev/video1 num-buffers=1000 io-mode=4 ! 'video/x-raw, \
  format=(string)NV12, width=(int)1280, height=(int)720' ! waylandsink use-drm=true
target # gst-launch-1.0 v4l2src device=/dev/video1 num-buffers=1000 io-mode=5 ! 'video/x-raw, \
  format=(string)NV12, width=(int)1280, height=(int)720' ! waylandsink use-drm=true
Capture and Encode into an MP4 file
target # gst-launch-1.0 -e v4l2src device=/dev/video1 num-buffers=1000 io-mode=4 ! 'video/x-raw, \
  format=(string)YUY2, width=(int)1280, height=(int)720, framerate=(fraction)30/1' ! vpe num-input-buffers=8 ! \
  queue ! ducatimpeg4enc bitrate=4000 ! queue ! mpeg4videoparse ! qtmux ! filesink location=x.mp4
Note: The following pipeline can be used in use cases where vpe processing is not required.
target # gst-launch-1.0 -e v4l2src device=/dev/video1 num-buffers=1000 io-mode=5 ! 'video/x-raw, \
  format=(string)NV12, width=(int)1280, height=(int)720, framerate=(fraction)30/1' ! ducatimpeg4enc bitrate=4000 ! \
  queue ! mpeg4videoparse ! qtmux ! filesink location=x.mp4
Capture and Encode and Display in parallel
target # gst-launch-1.0 -e v4l2src device=/dev/video1 num-buffers=1000 io-mode=4 ! 'video/x-raw, \
  format=(string)YUY2, width=(int)1280, height=(int)720, framerate=(fraction)30/1' ! vpe num-input-buffers=8 ! tee name=t ! \
  queue ! ducatimpeg4enc bitrate=4000 ! queue ! mpeg4videoparse ! qtmux ! filesink location=x.mp4 t. ! queue ! kmssink
Running VIP/VPE/CAL application
Video Input Port
The Video Input Port (VIP) is used to capture video frames from a BT656/BT601 camera. The VIP driver currently supports the following features.
For more information on VIP driver and other features, please refer to http://processors.wiki.ti.com/index.php/Processor_SDK_VIP_Driver
- Standard V4L2 capture driver
- Supports single planar buffers
- Supports MMAP buffering method
- Supports DMABUF based buffering method
- Supports V4L2 endpoint standard way of specifying camera nodes
- Supports capture at up to 60 fps
- Multi-instance capture - all slices and ports supported
- Capture from a YUYV camera (8-bit)
- NV12 capture format
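As a quick sanity check (a sketch, assuming the VIP instance enumerated as /dev/video1, as in the dmabuftest examples below), the registered capture device can be queried with standard V4L2 tools:
target # v4l2-ctl -d /dev/video1 --all            # driver, format and capability information
target # v4l2-ctl -d /dev/video1 --list-formats   # pixel formats supported for capture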
Camera Adapter Layer
The Camera Adapter Layer (CAL) is used to capture video from a CSI camera. The CAL driver currently supports the following features.
- Standard V4L2 capture driver
- Supports single planar buffers
- Supports MMAP buffering method
- Supports DMABUF based buffering method
- Supports V4L2 endpoint standard way of specifying camera nodes (CSI bindings)
- Multi-instance capture - 4-data-lane phy0 + 2-data-lane phy1
Supported cameras
The Processor SDK Linux Automotive release supports the following sensors/cameras/video inputs:
- OV10633 sensor - YUYV sensor connected on J6 EVM
- OV10635 sensor - YUYV sensor on Vision board
- OV10635 sensor - YUYV sensor connected through LVDS
- TVP5158 decoder - Support for decoding single channel analog video
- OV10640/OV490 - 720p CSI2 raw camera connected to OV490 ISP in YUYV format
Processor SDK supports the following sensors/cameras:
- mt9t111 camera sensor
Note: This release of PSDKLA with kernel 4.19 supports only the OV10633 sensor.
OV10635 sensor capture on the Vision board and LVDS capture are supported with VisionSDK v3.8.
FPDLink serializer and deserializer support is also not available with this release.
JAMR board support is not validated with this release.
CSI2 capture is not validated with this release.
Running dmabuftest
dmabuftest is a user-space application which demonstrates a capture-display loopback. It can support multiple captures at the same time.
Video buffers are allocated by libdrm and shared with VIP through dmabuf.
It interfaces with the VIP through standard V4L2 ioctls.
The release filesystem has the dmabuftest app preinstalled.
To capture and display on the LCD screen, run one of the following commands (the connector ID depends on the platform; see Finding Connector ID above):
target# dmabuftest -s 4:800x480 -d /dev/video1 -c 1280x720@YUYV
target# dmabuftest -s 16:800x480 -d /dev/video1 -c 1280x720@YUYV
To capture and display on the HDMI display, run one of the following commands:
target# dmabuftest -s 32:1920x1200 -d /dev/video1 -c 1280x720@YUYV
target# dmabuftest -s 4:1920x1080 -d /dev/video1 -c 1280x720@YUYV
To capture video in NV12 format, run one of the following commands:
target# dmabuftest -s 32:1920x1200 -d /dev/video1 -c 1280x720@NV12
target# dmabuftest -s 16:800x480 -d /dev/video1 -c 1280x720@NV12
To capture and display on the KMScube backend (video on a rotating cube), run the following command:
target# dmabuftest --kmscube --fov 20 -d /dev/video1 -c 1280x720@YUYV
Note: This feature is currently not supported.
To capture and display on the Wayland backend (video in a Wayland client window), run the following command:
target# dmabuftest -w 640x480 --pos 100x400 /dev/video1 -c 1280x720@YUYV
Capturing from OV10633 onboard camera
The Linux kernel driver for OV1063x cameras supports the OV10633 sensor.
Video capture from the OV10633 sensor can be verified as follows:
- Connect OV10633 sensor to the Leopard Imaging port on the J6 EVM
- Reboot the board and enable i2c2 as given above
- The I2C device on bus 2, slave address 0x37, should be probed successfully (a quick check is shown after this list)
- VIP should register a V4L2 video device (e.g. /dev/video1) using this i2c device
- Run dmabuftest with '1280x720@YUYV' as capture format
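A hedged way to confirm the I2C probe mentioned above (assuming i2c-tools is present on the filesystem and the sensor bus enumerates as i2c-2; the bus index may differ on your setup):
target # i2cdetect -y -r 2    # 0x37 should appear; addresses already claimed by a driver show as UU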
Capturing from OV10635 Vision board camera
The Linux kernel driver for OV1063x cameras supports the OV10635 sensor.
Video capture from the OV10635 sensor can be verified as follows:
- Connect OV10635 sensor to the OVcam port on the Vision board
- Change the SW3 switch setting on Vision board as SW3[1-8] = 01010101
- Reboot the board and enable i2c2 as given above
- I2C device on Bus 2 slave address 0x30 should be probed successfully
- VIP should register a V4L2 video device (e.g. /dev/video1) using this i2c device
- Run dmabuftest with '1280x720@YUYV' as capture format
Capturing through TVP decoder
The Linux kernel supports the TVP5158 NTSC/PAL decoder.
The TVP5158 decoder is a TI chip which can decode up to 4 channels of NTSC/PAL analog video and multiplex them.
Video capture from a single-channel TVP5158 can be verified as follows:
- Connect analog camera to the Vin1 port of the JAMR3 board
- Change the SW2 switch setting on JAMR board as SW2[1-2] = [OFF, ON] - This is to select i2c4 for the IO expander
- Reboot the board and enable i2c2 as given above
- I2C device on Bus 2 slave address 0x58 should be probed successfully
- VIP should register a V4L2 video device (e.g. /dev/video1) using this i2c device
- Run dmabuftest with capture format of the analog camera (e.g. '720x240@YUYV')
Capturing through LVDS camera
An LVDS camera is a camera connected through a serializer and deserializer.
The Linux kernel has drivers for FPDLink serializers and deserializers.
For interfacing each LVDS camera with J6, an I2C slave for the serializer, deserializer, and camera is needed. By default, all of these device tree nodes are disabled.
The following table shows the mapping for all LVDS cameras on the multi-deserializer daughter card for the Vision board.
LVDS camera | Camera address alias | Serializer address alias | Deserializer address | VIP port
---|---|---|---|---
cam1 | 0x38 | 0x74 | 0x60 | Vin1a (VIP1 slice0 port A)
cam2 | 0x39 | 0x75 | 0x64 | Vin2a (VIP1 slice1 port A)
cam3 | 0x3A | 0x76 | 0x68 | Vin3a (VIP2 slice0 port A)
cam4 | 0x3B | 0x77 | 0x6C | Vin5a (VIP3 slice0 port A)
cam5 | 0x3C | 0x78 | 0x61 | Vin4b (VIP2 slice1 port B)
cam6 | 0x3D | 0x79 | 0x69 | Vin6a (VIP3 slice1 port A)
Video capture from an LVDS camera can be verified as follows.
- Connect an LVDS camera to the cam1/2/3/4 port of the multi-deserializer (Multides) board.
- Change the SW3 switch setting on Vision board as SW3[1-8] = 00100101
- I2C device on Bus 2 slave address (e.g. 0x38 for cam1) should be probed successfully
- VIP should register a V4L2 video device (e.g. /dev/video1) using this i2c device
- Run dmabuftest with '1280x720@YUYV' as capture format
Capturing through OV10640/OV490 CSI camera/ISP
The Linux kernel supports CSI capture from the OV10640 RAW camera and the OV490 ISP.
CAL works on the CSI2 protocol and supports both raw and YUYV capture. It is verified with the OV10640 raw camera and the OV490 ISP. The TI EVM supports capture via two CSI PHYs: phy0 (4 data lanes) and phy1 (2 data lanes).
Video capture from OV490 can be verified as follows.
- Connect OV10640 camera to the OV490 board
- Connect the OV490 board to the TI-EVM via the CSI2 dual 490 adaptor board
- I2C device on Bus 4 slave address 0x24 should be probed successfully
- VIP should register a V4L2 video device (e.g. /dev/video1) using this i2c device
- Run dmabuftest with the capture format of the camera (e.g. '1280x720@YUYV')
Video Processing Engine (VPE)
VPE supports scaling (SC), color space conversion (CSC), and deinterlacing (DI). It uses the V4L2 mem2mem API.
Supported input formats: nv12, yuyv, uyvy
Supported output formats: nv12, yuyv, uyvy, rgb24, bgr24, argb24, abgr24
Unsupported formats: yuv444, yvyu, vyuy, nv16, nv61, nv21
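Before running the examples below, the formats actually exposed by the VPE m2m device can be listed with v4l2-ctl (a sketch, assuming the device enumerated as /dev/video0, as used in the commands that follow):
target # v4l2-ctl -d /dev/video0 --list-formats-out   # formats accepted on the VPE input side
target # v4l2-ctl -d /dev/video0 --list-formats       # formats produced on the VPE output side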
File to File
test-v4l2-m2m Usage: <device> <SRCfilename> <SRCWidth> <SRCHeight> <SRCFormat> <DSTfilename> <DSTWidth> <DSTHeight> <DSTformat> <interlace> <translen>
Note:
<interlace>: set to 1 if the input is interlaced and deinterlaced (progressive) output is wanted; the output height should then be twice the input height.
Deinterlace (DI):
target# test-v4l2-m2m /dev/video0 frame-176-144-nv12-inp.yuv 176 144 nv12 progressive_output.nv12 176 288 nv12 1 1
Scaler (SC):
target# test-v4l2-m2m /dev/video0 frame-176-144-nv12-inp.yuv 176 144 nv12 frame-1920-1080-nv12-out.nv12 1920 1080 nv12 0 1
Color Space Conversion (CSC):
target# test-v4l2-m2m /dev/video0 frame-720-240-yuyv-inp.yuv 720 240 yuyv frame-720-240-argb32-out.argb32 720 240 argb32 0 1
SC+CSC+DI:
target# test-v4l2-m2m /dev/video0 frame-720-240-yuyv-inp.yuv 720 240 yuyv frame-1920-1080-rgb24-dei-out.rgb24 1920 1080 rgb24 1 1
File to Display
filevpedisplay Usage: <src_filename> <src_w> <src_h> <src_format> <dst_w> <dst_h> <dst_format> <top> <left> <w> <h> <inter> <trans> -s <conn_id>:<mode>
Input without crop:
target# filevpedisplay frame-176-144-nv12-inp.yuv 176 144 nv12 800 480 yuyv 0 0 176 144 0 1 -s 4:800x480
Input with crop:
target# filevpedisplay frame-176-144-nv12-inp.yuv 176 144 nv12 800 480 yuyv 16 32 128 128 0 1 -s 4:800x480
Input without crop:
target# filevpedisplay frame-176-144-nv12-inp.yuv 176 144 nv12 800 480 yuyv 0 0 176 144 0 1 -s 16:800x480
Input with crop:
target# filevpedisplay frame-176-144-nv12-inp.yuv 176 144 nv12 800 480 yuyv 16 32 128 128 0 1 -s 4:1280x720
VIP-VPE-Display
The camera captures frames, which are processed by VPE (SC, CSC, DI) and then displayed on the LCD/HDMI.
capturevpedisplay Usage: <src_w> <src_h> <src_format> <dst_w> <dst_h> <dst_format> <inter> <trans> -s <conn_id>:<mode>
target# capturevpedisplay 640 480 yuyv 320 240 uyvy 0 1 -s 4:640x480
target# capturevpedisplay 640 480 yuyv 320 240 uyvy 0 1 -s 4:1280x720
Running DSS application
DSS applications are omapdrm based. They demonstrate the clone mode, extended mode, overlay window, z-order, and alpha blending features. To demonstrate clone and extended mode, an HDMI display must be connected to the board. The applications require the supported mode information of the connected displays and the plane IDs. One can get this information by running the modetest application in the filesystem.
target # modetest
Running drmclone application
This displays the same test pattern on both the LCD and HDMI (clone mode). An overlay window is also displayed on the LCD. To test clone mode, execute the following command:
target # drmclone -l <lcd_w>x<lcd_h> -p <plane_w>x<plane_h>:<x>+<y> -h <hdmi_w>x<hdmi_h>
e.g.:
target # drmclone -l 1280x800 -p 320x240:0+0 -h 640x480
The position of the overlay window can be changed via the x+y values; e.g., 240+120 will show it at the center.
Running drmextended application
This displays different test patterns on the LCD and HDMI. An overlay window is also displayed on the LCD. To test extended mode, execute the following command:
target # drmextended -l <lcd_w>x<lcd_h> -p <plane_w>x<plane_h>:<x>+<y> -h <hdmi_w>x<hdmi_h>
e.g.:
target # drmextended -l 1280x800 -p 320x240:0+0 -h 640x480
Running drmzalpha application
Z-order:
It determines which overlay window appears on top of the other.
Range: 0 to 3
lowest value for bottom
highest value for top
Alpha Blend:
It determines the transparency level of the image as a result of both the global alpha and the premultiplied alpha value.
Global alpha range: 0 to 255
0 - fully transparent
127 - semi-transparent
255 - fully opaque
Premultiplied alpha value: 0 or 1
0 - source is not premultiplied with alpha
1 - source is premultiplied with alpha
To test drmzalpha, execute the following command:
target # drmzalpha -s <crtc_w>x<crtc_h> -w <plane1_id>:<z_val>:<glo_alpha>:<pre_mul_alpha> -w <plane2_id>:<z_val>:<glo_alpha>:<pre_mul_alpha>
e.g.:
target # drmzalpha -s 1280x800 -w 19:1:255:1 -w 20:2:255:1
target # drmzalpha -s 640x480 -w 15:1:255:1 -w 16:2:255:1
The plane IDs and CRTC resolution depend on the platform; see Finding Plane ID above.
Testing with FPDLink Display setup
NOTE: Support for the FPDLink display is available up to the K4.4 releases. The PSDKLA 6.0x release with kernel 4.19 does not support FPDLink. Check the release notes to see whether FPDLink is supported on the Processor SDK Linux Automotive release you are using.
For information on debugging FPDLink integration, please refer to Debugging FPDLink integration
Current H/W setup
FPDLink display is currently supported with the Spectrum Digital FPDLink display, part number 703840-0001. This display includes a 1280x800 AUO LCD panel with a Goodix touch screen, connected over a DS90UB924Q1 deserializer.
To validate FPDLink with the current HW setup, the following hardware is required:
- DRA7xx EVM + 12V supply for the EVM.
- FPDLink Cable between DRA7xx and FPDLink display
- 12 V power supply for the FPDLink display if using a J6/J6 Eco/J6 Entry EVM. The J6 Plus EVM supplies power to the display over FPDLink, so a separate power supply for the display is not required in this case.
The picture below shows the overall setup.
Kernel Config modifications are not necessary as AUO panel support and fpdlink support are built into the kernel.
To test the FPDLink display:
- Use the device tree dra7-evm-fpd-auo-g101evn01.0.dtb to boot.
- Add omapdrm.num_crtc=2 to the kernel boot arguments. The above device tree will enable both the HDMI and FPDLink LCD.
- Power on the EVM and check the modetest output. You should see two connectors now, one for HDMI and another for FPDLink.
Legacy H/W setup
Please note that support for the below FPDLink hardware will be deprecated in the next release, due to the availability of the single-board FPDLink display listed above.
To validate FPDLink with the legacy HW setup, the following hardware is required:
- DRA7xx EVM + 12V supply for the EVM.
- FPDLink cable between the DRA7xx and the deserializer board (DS90UB928Q).
- 5V power supply for the deserializer board.
- LCD adapter board (DS90UH928Q) that sits on the deserializer board.
- LCD Adapter cable which is between LCD panel and the Adapter board.
- 12V power supply for LCD Adapter board.
- The actual LCD panel (LG101(10.1in) or AUO (7.1 in))
The picture below shows the overall setup.
Kernel Config is not necessary as the supported panels and fpdlink are built into the kernel.
To test the FPDLink display:
- Use the device tree dra7-evm-fpd-lg.dtb to boot.
- Add omapdrm.num_crtc=2 to the kernel boot arguments. The above device tree will enable both the HDMI and FPDLink LCD.
- Power on the EVM and check the modetest output. You should see two connectors now, one for HDMI and another for FPDLink.
HW Modifications required
With the Rev B J6 Plus EVMs, a board modification is required to supply the pixel clock to the FPDLink connector. The required modification is shown in the image below.
Gsttestplayer
gsttestplayer is a GStreamer test application useful for testing some features not testable with gst-launch-1.0, such as:
- Seek - Seeking to random points in a stream
- Trick play - Playback at different speeds (fast forward, rewind)
- Pause, Resume
- Playing multiple streams simultaneously in the same process, in a loop or one after another.
Running gsttestplayer
Command line options:
target # gsttestplayer -h
Usage: gsttestplayer <options>
  -s <sinkname>         Specify the video sink name to be used, default: kmssink
  -n                    Do not use VPE, implies no scaling
  -r <width>x<height>   Resize the output to widthxheight, no scaling if left blank
  -a                    Play with no A/V Sync
  -c <cmds file>        Non-interactive mode, reading commands from <cmds file>
  --help-gst            Show GStreamer Options
Example: to use waylandsink and resize the output video to 800x400:
target # gsttestplayer -s waylandsink -r 800x400
In normal mode, when the -c option is not used, the application enters a command prompt at which the user can enter various commands. Type "help" to print the list of possible commands:
target # gsttestplayer -s waylandsink -r 800x400
Scaling output to 800x400
Using videosink=waylandsink
<Enter ip> help
Commands available:
  start <instance num> <filename/capture device>
  stop <instance num>
  pause <instance num>
  resume <instance num>
  seek <instance num> <seek to time in seconds> <optional: playback speed>
  sleep <sleep time in seconds>
  msleep <sleep time in milliseconds>
  rewind <line number>
  exit
<Enter ip>
Example commands:
start 0 KMS_MPEG4_D1.MP4   # Start playing the file "KMS_MPEG4_D1.MP4", using instance 0.
start 1 NTSC_h264.mp4      # Start playing the file "NTSC_h264.mp4" (simultaneously) using instance 1.
stop 0                     # Stop playback of instance 0.
seek 0 0 2                 # Seek to the "0"th second mark of the stream playing in instance 0,
                           # and start playing back at speed 2x.
seek 0 300 -1              # Seek to the "300"th second mark of the stream playing in instance 0,
                           # and start playing back in reverse at speed 1x.
start 2 /dev/video1        # Start capturing from /dev/video1 using the v4l2src plugin
All these commands could be put into a text file and given as input to gsttestplayer with the "-c" option. In this case, gsttestplayer runs non-interactively, reading commands from the text file one line after another. The commands sleep and rewind are useful for this mode, to introduce delays or to create a loop respectively.
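For instance, a hypothetical command file (the clip name is a placeholder) that plays a stream for 20 seconds, stops it, and then loops back to the first line could look like this:
start 0 KMS_MPEG4_D1.MP4
sleep 20
stop 0
rewind 1
It would then be run non-interactively with:
target # gsttestplayer -s kmssink -c <cmds file>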
Notes:
- This application plays video only. Audio path is not used.
- The input filename should have the correct file extension to indicate the type of file. The supported extensions are "mp4", "avi", "ts", "asf" & "wmv".
- The input filename should contain the string "264", "mpeg2", "mpeg4" or "vc1"/"wmv" to indicate which video codec should be used for decoding - H.264, MPEG-2, MPEG-4 or Windows Media Video.
- If the input filename is a video device which matches /dev/videoX pattern, v4l2src plugin would be used for video capture instead of playback.
- Decode and capture can be run in parallel depending on the sink being used.
Running IPC examples
Processor SDK Linux Automotive includes IPC examples as part of the target filesystem.
User space sample application
MessageQ is the user-space API provided for IPC. The sample application for MessageQ consists of an application "MessageQApp" running on the A15 and corresponding binaries running on the remote cores. The table below shows the paths under which the remote-core binaries can be found on the target filesystem. To ensure that these binaries are loaded by the kernel, symlink them to the locations shown in the table.
Core | Binary path on target relative to /lib/firmware | Binary should be symlinked to |
---|---|---|
DSP2 | ./ipc/ti_platforms_evmDRA7XX_dsp2/messageq_single.xe66 | /lib/firmware/dra7-dsp2-fw.xe66 |
IPU2 | ./ipc/ti_platforms_evmDRA7XX_ipu2/messageq_single.xem4 | /lib/firmware/dra7-ipu2-fw.xem4 |
IPU1 | ./ipc/ti_platforms_evmDRA7XX_ipu1/messageq_single.xem4 | /lib/firmware/dra7-ipu1-fw.xem4 |
DSP1 | ./ipc/ti_platforms_evmDRA7XX_dsp1/messageq_single.xe66 | /lib/firmware/dra7-dsp1-fw.xe66 |
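For example, the DSP2 binary from the table can be linked into place as follows (the same pattern applies to the other cores):
target # ln -sf /lib/firmware/ipc/ti_platforms_evmDRA7XX_dsp2/messageq_single.xe66 /lib/firmware/dra7-dsp2-fw.xe66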
Boot the target and ensure that the lad daemon is running. Use the ps command to check whether the lad daemon is already running. If not, start it:
target # /usr/bin/lad_dra7xx
Start the MessageQApp
target # /usr/bin/MessageQApp <number of messages> <core id>
The core ID to be used is 1, 2, 3, or 4 for IPU2, IPU1, DSP2, and DSP1, respectively.
target # MessageQApp 10 3
Using numLoops: 10; procId : 3
Entered MessageQApp_execute
Local MessageQId: 0x80
Remote queueId  [0x30080]
Exchanging 10 messages with remote processor DSP2...
MessageQ_get #1 Msg = 0xb6400468
MessageQ_get #2 Msg = 0xb6400468
MessageQ_get #3 Msg = 0xb6400468
MessageQ_get #4 Msg = 0xb6400468
MessageQ_get #5 Msg = 0xb6400468
MessageQ_get #6 Msg = 0xb6400468
MessageQ_get #7 Msg = 0xb6400468
MessageQ_get #8 Msg = 0xb6400468
MessageQ_get #9 Msg = 0xb6400468
MessageQ_get #10 Msg = 0xb6400468
Exchanged 10 messages with remote processor DSP2
Sample application successfully completed!
Leaving MessageQApp_execute
RPMsg client sample application
RPMsg is the kernel-space IPC and the building block for the user-space MessageQ IPC API. The wiki page below illustrates how to build and run an RPMsg Linux kernel-space client to communicate with a slave processor (e.g. DSP, IPU, etc.) using IPC's RPMsg module.
RPMsg_Kernel_Client_Application
Running basic Wifi tests
To run this test, the WiLink COM module must be connected to the EVM.
Check if the wlan0 interface is showing up:
target # ifconfig -a
Bring up the wlan0 interface:
target # ifconfig wlan0 up
Search for the available Wifi networks:
target # iw wlan0 scan | grep -i ssid
Running basic Bluetooth tests
To run this test, the WiLink COM module must be connected to the EVM. Make sure that this module supports Bluetooth.
target # hciconfig hci0 up
target # hciconfig -a
Turn on the Bluetooth on the device that you want to pair and make it discoverable, then run the following command:
target # hcitool scan
How to bring up the GNSS driver and sample application
The WL8 GNSS driver that is compatible with the SDK is available as part of the click-wrap license at the following location: http://www.ti.com/tool/wilink-sw
Users are requested to register and obtain the package.
The package contains the driver source and the required documentation.
The document "Bring up manual for WiLink8 GNSS driver on Linux" is the starting point that contains the instructions for compiling and trying the sample application.
NOTE: These instructions are known to work if the user starts with the Processor SDK Linux Automotive installer and compiles the Linux kernel using the instructions provided in the Software Developers Guide.