
Networked encode and decode demos


Network Encode and Decode Demos for DVEVM/DVSDK 1.2

How to create the Server Encode Demo from the encode demo?

The original encode demo, whose documentation is provided at [1], is modified as described below.

Changes in the main function:

A. Two additional command line options are introduced:

1. -S specifies that the program should act as a server. It takes an argument giving the compression standard to be used for encoding the video, expressed as the file extension of that standard. For example:

./server -S .264

2. -p specifies the port number to be used for establishing the network connection with the client. For example:

./server -S .264 -p 9000

B. An additional member called 'port' has been added to the Args structure defined in main.c to receive this argument from the user.
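The exact contents of Args vary between DVSDK releases; the fragment below is only an illustrative sketch of the addition, with the existing fields reduced to placeholders.

<syntaxhighlight lang='c'>
/* Illustrative fragment of the Args structure in main.c.  Only the 'port'
 * member is new; the other fields stand in for the existing encode demo
 * arguments and are not necessarily the exact names used in the demo.     */
typedef struct Args {
    char *videoFile;     /* existing: output file given with -v            */
    char *videoStd;      /* existing/illustrative: extension given with -S */
    int   port;          /* new: TCP port given with -p                    */
} Args;
</syntaxhighlight>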

C. Starting up the server:

1. First, a temporary server socket is created on the given port using the 'createServerSocket' function defined in network_utils.c.

2. The server waits for the client to connect to this temporary socket; as soon as a client connects, a new session between the server and the client is started.

3. A new function named 'start_new_session' has been added to main.c to start the network server functionality, establish the connection, and provide the successfully opened socket descriptor to be used for the transmission. In 'start_new_session', the server (host) information about the video standard, width and height of the video is sent to the client over the already created temporary socket. A new server socket is created on a new port (the port specified by the user plus one), and this new port, which is to be used for the video transmission between the server and the client, is sent to the client. The temporary socket is then closed and the new socket descriptor is returned.
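A minimal sketch of such a 'start_new_session' handshake is shown below. It assumes that createServerSocket and sendTCP from network_utils.c have the signatures declared here, that createServerSocket returns a descriptor already connected to the client, and that the stream parameters are exchanged as plain integers; the exact prototypes and the ordering of the steps in the attached code may differ.

<syntaxhighlight lang='c'>
#include <unistd.h>

/* Assumed prototypes from network_utils.h; the real signatures may differ. */
extern int createServerSocket(int port);          /* connected descriptor, or -1  */
extern int sendTCP(int fd, void *buf, int size);  /* returns NET_FAILURE on error */
#define NET_FAILURE (-1)

/* Sketch of start_new_session(): exchange the stream parameters on the
 * temporary socket, then move the video transmission to port + 1.          */
static int start_new_session(int port, int videoStd, int width, int height)
{
    int tmpFd, txFd;
    int newPort = port + 1;

    /* 1. Temporary server socket on the user supplied port. */
    tmpFd = createServerSocket(port);
    if (tmpFd < 0) {
        return -1;
    }

    /* 2. Send the video standard and resolution used by the server. */
    if (sendTCP(tmpFd, &videoStd, sizeof(int)) == NET_FAILURE ||
        sendTCP(tmpFd, &width,    sizeof(int)) == NET_FAILURE ||
        sendTCP(tmpFd, &height,   sizeof(int)) == NET_FAILURE) {
        close(tmpFd);
        return -1;
    }

    /* 3. Announce the data port (user port + 1) to the client, then close
     *    the temporary socket.                                              */
    if (sendTCP(tmpFd, &newPort, sizeof(int)) == NET_FAILURE) {
        close(tmpFd);
        return -1;
    }
    close(tmpFd);

    /* 4. Open the server socket used for the actual video transmission. */
    txFd = createServerSocket(newPort);
    return txFd;
}
</syntaxhighlight>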

D. Before the write thread is started, its environment arguments are set according to the command line arguments provided by the user. If a video file is given with the '-v' option, the videoFile member of the environment is set to the name of the file and txfd, the socket descriptor member of the write environment, is set to NULL. If instead the server option '-S' is used together with the port option '-p', the video file is set to NULL and the socket descriptor txfd is set to the descriptor of the successfully opened connection.
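A sketch of this selection logic is shown below; 'writeEnv', 'args' and the member names follow the description above and are not necessarily the exact identifiers used in main.c.

<syntaxhighlight lang='c'>
/* Fragment of main(): choose between file output and network output for
 * the write thread.  Identifiers follow the description above and may not
 * match the demo sources exactly.                                          */
if (args.videoFile) {
    /* -v given: encoded frames are written to a local file only */
    writeEnv.videoFile = args.videoFile;
    writeEnv.txfd      = NULL;
} else if (args.port) {
    /* -S and -p given: encoded frames are sent over the open connection */
    writeEnv.videoFile = NULL;
    writeEnv.txfd      = txFd;   /* descriptor returned by start_new_session() */
}
</syntaxhighlight>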

Changes in the capture thread:

In the capture thread, replace the time consuming resizer copy with the normal copy shown below. The resizer copy takes about 10 ms, so removing it makes the network demos faster.

<syntaxhighlight lang='c'>
/* Normal copy without using the resizer */
ce.virtBuf = capBufs[v4l2buf.index].start;
</syntaxhighlight>

This is only a pointer assignment; the capture element 'ce' is then passed to the video thread.

Changes in write thread:

Instead of writing to the video file only, the write thread should now be able to write both to the video file and to the socket descriptor obtained when the user-specified port was opened. Writing to the video file still uses fwrite; writing to the port uses the sendTCP function defined in network_utils.c and prototyped in network_utils.h, which is included in the write thread.

<syntaxhighlight lang='c'>
/* Write on network */
if (xmtfd != NULL) {
    /* send the size of the encoded frame first */
    if (sendTCP (xmtfd, &we.frameSize, sizeof(int)) == NET_FAILURE) {
        DBG ("failed to send frame size\n");
    }
    /* send encoded buffer to the clients */
    if (sendTCP (xmtfd, we.encodedBuffer, we.frameSize) == NET_FAILURE) {
        DBG ("failed to send frame\n");
    }
    DBG ("Sending frame of framesize %d\n", we.frameSize);
}

/* Write on file */
if (outputFp != NULL) {
    if (fwrite(we.encodedBuffer, we.frameSize, 1, outputFp) != 1) {
        ERR("Error writing the encoded data to video file\n");
        breakLoop(THREAD_FAILURE);
    }
}
</syntaxhighlight>

Changes in control thread (optional):

Some changes are made to the control thread for better control and termination of the server and client. An additional socket is opened when a new session is created in the main thread, and its descriptor is passed to the control (ctrl) thread environment. The server uses this socket to transmit control signals telling the client about the status of the server: as soon as it wants to terminate the transmission, it changes the status of the server/host to TERMINATE and sends it to the client through the control socket, so that the client is not left waiting for the next frame.
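A minimal sketch of the server side of this control exchange is given below; the control socket member and the TERMINATE value are assumptions based on the description above, not the exact names from the demo.

<syntaxhighlight lang='c'>
/* Fragment of the control thread (server side).  Names are illustrative:
 * ctrlEnv.ctrlFd is the additional control socket, TERMINATE the status
 * value that tells the client no more frames will follow.                 */
int hostStatus = TERMINATE;

if (sendTCP(ctrlEnv.ctrlFd, &hostStatus, sizeof(int)) == NET_FAILURE) {
    DBG("failed to send TERMINATE status to the client\n");
}
</syntaxhighlight>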


How to create the Client Decode Demo from the decode demo?

The original decode demo, whose documentation is provided at [2], is modified as described below.

Changes in main.c and the main function:

Include the file "network_utils.h" to add the network functionality to your decode demo.

Add the files network_utils.c, network_utils.h and debug.h to the decode demos folder. Edit the main.c file to make the necessary changes to call the network functions.


A. Four additional command line options are introduced:

1. -C specifies that the program should act as a client. It takes an argument giving the compression standard to be used for decoding the video, expressed as the file extension of that standard. For example:

./client -C .264

2. -p specifies the port number to be used for establishing the network connection with the server.

3. -i specifies the IP address of the server to which the client should connect. For example:

./client -C .264 -p 9000 -i 192.168.1.101

4. -b specifies the number of buffers to be used in the display thread. With one buffer the video latency is lower, but each new frame overwrites the old frame in the same buffer, so the video quality is poor. With two buffers the latency is higher, ranging from about 120 ms to 150 ms depending on the network speed, but the video quality is reliable. For example:

./client -C .264 -p 9000 -i 192.168.1.101 -b 2

Num of Display buffers    Latency (ms)
1                         80-110
>= 2                      120-155

B. Three additional members, port, ip and numdisbuf, have been added to the Args structure defined in main.c to receive these arguments from the user.
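As on the server side, the fragment below is only an illustrative sketch of the additions to Args; the existing fields are reduced to a placeholder.

<syntaxhighlight lang='c'>
/* Illustrative fragment of the Args structure in the decode demo main.c.
 * Only port, ip and numdisbuf are new; videoFile stands in for the
 * existing decode demo arguments.                                         */
typedef struct Args {
    char *videoFile;     /* existing: input file given with -v        */
    int   port;          /* new: server port given with -p            */
    char *ip;            /* new: server IP address given with -i      */
    int   numdisbuf;     /* new: number of display buffers (-b)       */
} Args;
</syntaxhighlight>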

C. Starting up the client: a new function named 'connect_to_server' has been added to main.c to start the network client functionality, establish the connection with the server, and return the successfully opened socket descriptor to be used for receiving the video data. The sequential steps are given below; a sketch of the function follows the list.


1. First, a temporary client socket is created on the given port using the 'createClientSocket' function defined in network_utils.c.

2. A connection is established on this client socket using the 'connect' function, and the host information about the standard, height and width in use is read using the readTCP function defined in network_utils.c.

3. Depending on whether the standard, width and height match between the client and the server, the status of the future connection is sent to the server, letting the server know whether to terminate the connection or continue it.

4. If the status is set to 'ALIVE', the server port number is read over the temporary client socket, and the temporary client socket is closed.

5. On the new port (the temporary port + 1) a new client socket is opened, which is to be used for receiving the video; the client connects to the server and the successfully opened socket descriptor is returned to the main function.
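A minimal sketch of 'connect_to_server' following these steps is shown below. It assumes that createClientSocket returns an already connected descriptor (in the attached code the connect() call may be separate, as described in step 2), that readTCP and sendTCP have the signatures declared here, and that the ALIVE and TERMINATE status values are plain integers; the real code may differ in these details.

<syntaxhighlight lang='c'>
#include <unistd.h>

/* Assumed prototypes from network_utils.h; the real signatures may differ. */
extern int createClientSocket(char *ip, int port);  /* connected descriptor, or -1 */
extern int sendTCP(int fd, void *buf, int size);
extern int readTCP(int fd, void *buf, int size);
#define NET_FAILURE (-1)
#define ALIVE       1
#define TERMINATE   0

/* Sketch of connect_to_server(): negotiate the stream parameters on a
 * temporary socket, then reconnect on the announced data port.            */
static int connect_to_server(char *ip, int port,
                             int myStd, int myWidth, int myHeight)
{
    int tmpFd, rcvFd;
    int std, width, height, status, dataPort;

    /* 1. + 2. Temporary connection and reception of the host information. */
    tmpFd = createClientSocket(ip, port);
    if (tmpFd < 0) {
        return -1;
    }
    if (readTCP(tmpFd, &std,    sizeof(int)) == NET_FAILURE ||
        readTCP(tmpFd, &width,  sizeof(int)) == NET_FAILURE ||
        readTCP(tmpFd, &height, sizeof(int)) == NET_FAILURE) {
        close(tmpFd);
        return -1;
    }

    /* 3. Tell the server whether the parameters match on this side. */
    status = (std == myStd && width == myWidth && height == myHeight)
                 ? ALIVE : TERMINATE;
    if (sendTCP(tmpFd, &status, sizeof(int)) == NET_FAILURE || status != ALIVE) {
        close(tmpFd);
        return -1;
    }

    /* 4. Read the data port announced by the server, then close the
     *    temporary socket.                                                 */
    if (readTCP(tmpFd, &dataPort, sizeof(int)) == NET_FAILURE) {
        close(tmpFd);
        return -1;
    }
    close(tmpFd);

    /* 5. Reconnect on the data port; this descriptor feeds the video thread. */
    rcvFd = createClientSocket(ip, dataPort);
    return rcvFd;
}
</syntaxhighlight>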

D. Before the video thread is started, its environment arguments are set according to the command line arguments provided by the user. If a video file is given with the '-v' option, the videoFile member of the environment is set to the name of the file and rcvfd, the socket descriptor member of the video thread environment, is set to NULL. If instead the client option '-C' is used together with the port option '-p', the video file is set to NULL and the socket descriptor rcvfd is set to the descriptor of the successfully opened connection for video reception.
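The selection logic mirrors the server side; again the identifiers below follow the description and are not necessarily those used in main.c.

<syntaxhighlight lang='c'>
/* Fragment of main() in the decode demo: choose between file input and
 * network input for the video thread.  Identifiers are illustrative.      */
if (args.videoFile) {
    /* -v given: frames are read from a local file */
    videoEnv.videoFile = args.videoFile;
    videoEnv.rcvfd     = NULL;
} else if (args.port) {
    /* -C and -p given: frames are read from the open connection */
    videoEnv.videoFile = NULL;
    videoEnv.rcvfd     = rcvFd;  /* descriptor returned by connect_to_server() */
}
</syntaxhighlight>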

Changes in video thread:

Two reading options are introduced, one from a file and one from the network. The video thread reads from a file using the functions 'loaderPrime' and 'loaderGetFrame'; when reading from the socket descriptor it uses the readTCP function defined in network_utils.c. Because readTCP requires the size of the buffer to be read, the size of the next transmitted buffer has to be read before the buffer itself.

<syntaxhighlight lang='c'>
if (envp->videoFile) {
    if (loaderPrime(&lState, &framePtr) == FAILURE) {
        cleanup(THREAD_FAILURE);
    }
}

if (envp->rcvFd) {
    /* read the size of the next frame, then the frame itself */
    if (readTCP (lState.inputFd, &lState.readSize, sizeof(int)) == NET_FAILURE) {
        DBG ("failed to read frameSize\n");
        cleanup(THREAD_FAILURE);
    }
    if (readTCP (lState.inputFd, framePtr, lState.readSize) == NET_FAILURE) {
        ERR ("Failed to read frame\n");
        cleanup(THREAD_FAILURE);
    }
    printf("\nRead Frame of frame size %d\n", lState.readSize);
}
</syntaxhighlight>

Changes in display thread:

The number of buffers used in the display is now variable; the user-specified value is passed to the display thread through the display thread environment variable numdisbuf.
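How the buffers are cycled depends on the existing display thread code; as a rough sketch of the idea, the display index simply wraps around the user-selected count (the variable names below are illustrative).

<syntaxhighlight lang='c'>
/* Fragment of the display thread loop: advance to the next display buffer,
 * wrapping around the number of buffers selected with -b.                  */
displayIdx = (displayIdx + 1) % envp->numdisbuf;
</syntaxhighlight>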

Remove the resizer copy, which takes extra time, and replace it with normal copying as below:

<syntaxhighlight lang='c'>
#if 1
{
    int   nextline;
    /* byte pointers are used so that memcpy and the pointer arithmetic
     * below operate on byte addresses                                    */
    char *dst;
    char *src;

    src = de.virtBuf;
    dst = virtDisplays[displayIdx];

    /* copy the decoded frame line by line into the display buffer */
    for (nextline = 0; nextline < de.height; nextline++) {
        memcpy(dst, src, de.width * 2);
        src += de.width * 2;
        dst += displayPitch;
    }
}
#endif
</syntaxhighlight>

Changes in control thread (optional):

Some changes are made to the control thread for better control and termination of the server and client. An additional socket is opened when the connection to the server is made in the main thread, and its descriptor is passed to the control (ctrl) thread environment. The client uses this socket to receive control signals from the server and to transmit control signals telling the server about the status of the client: as soon as it wants to terminate the reception, it changes the status to TERMINATE, and as soon as it receives the termination signal from the server, it closes the client application.
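A minimal sketch of the client side of the control exchange is given below; as on the server side, the socket member, the TERMINATE value and the shutdown call are assumptions based on the description above.

<syntaxhighlight lang='c'>
/* Fragment of the control thread (client side).  Names are illustrative:
 * ctrlEnv.ctrlFd is the additional control socket, TERMINATE the status
 * value announcing that the transmission has ended.                       */
int hostStatus;

if (readTCP(ctrlEnv.ctrlFd, &hostStatus, sizeof(int)) == NET_FAILURE) {
    DBG("failed to read the server status\n");
} else if (hostStatus == TERMINATE) {
    /* The server has stopped sending frames: stop the client application
     * through whatever shutdown mechanism the demo framework provides.    */
    gblSetQuit();   /* illustrative call, not necessarily the demo's API */
}
</syntaxhighlight>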


Latency:

Depending on the network conditions and the clock mismatch between the two boards, the latency can vary with time. Because the boards are not synchronized, the latency varies over the course of time; for example, with the number of display buffers equal to 2, it keeps oscillating between minimum and maximum values of 80 ms and 110 ms, respectively.

Num of Display buffers    Capture Time (ms)    Encode Time (ms)    Decode Time (ms)    Network Time (ms)    Display Time (ms)
1                         40                   25                  15                  0-30                 0
>= 2                      40                   25                  15                  0-30                 0-40


The network time variations could be due to the accumulated delay from frames whose capture time is slightly higher than 40 ms, and from the encode time often being more than 25 ms, forcing the client decoder to wait for the buffer to appear in the network read buffer.

Attachments:

The following files are included to provide an example of some simple socket programming helper functions. Please note that the code is kept simple and does not include strong error checking; it should therefore not be taken as-is for production purposes.
