NOTICE: The Processors Wiki will End-of-Life on January 15, 2021. It is recommended to download any files or other content you may need that are hosted on processors.wiki.ti.com. The site is now set to read only.

DM36x H.264 encoder FAQ

From Texas Instruments Wiki

DM36x H.264 encoder specific FAQ

For more DM36x codec FAQs, see also: [http://processors.wiki.ti.com/index.php/DM365_Codecs_FAQ DM36x Codec FAQ]



Does the encoder support non-multiple-of-16 resolutions?

Yes, the H.264 encoder supports encoding of non-multiple-of-16 resolutions by putting the relevant frame cropping information in the bitstream. The parameters below need to be set appropriately in the various API structures to enable encoding of a non-multiple-of-16 resolution.

IVIDEO1_BufDescIn

The variables described below control the buffer allocation for the input buffer.

  frameWidth (XDAS_Int32, input): Width of the video frame.
      Note: Same as inputWidth when the width is a multiple of 16. For an inputWidth that is a non-multiple of 16, the application must set this field to the next multiple of 16.

  frameHeight (XDAS_Int32, input): Height of the video frame.
      Note:
      Progressive: Same as inputHeight when the height is a multiple of 16. For an inputHeight that is a non-multiple of 16, the application must set this field to the next multiple of 16.
      Interlaced: Same as inputHeight when the height is a multiple of 32. For an inputHeight that is a non-multiple of 32, the application must set this field to the next multiple of 32.

  framePitch (XDAS_Int32, input): Frame pitch used to store the frame. This field is not used by the encoder.

This means that the dimensions of the input frame buffer given to the encoder should always be a multiple of 16 or 32 (as applicable). This should not be a problem, as the capture driver can be configured to operate on a multiple-of-16 pitch. The encoder can then be directed to code a portion of that buffer, which may be a non-multiple of 16. It is recommended that the application do appropriate padding so that the pixels lying outside the non-multiple-of-16 width are not uninitialized values. If appropriate padding is not done, encoding functionality is not affected, but the video quality at the border MBs may suffer.

The encoder only supports right and bottom cropping. Hence, the input buffer pointer should always point to the first valid pixel to be encoded. This is not a restriction: a sub-window that needs left and top cropping can always be rearranged to make the cropping part of the right crop, passing the first valid pixel as the input pointer.
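As a concrete illustration of the recommended padding, here is a minimal sketch (the helper name and layout are ours, not part of the codec API) that replicates the last valid column of a luma plane into the pad region when the pitch has been rounded up to a multiple of 16:

```c
#include <string.h>

/* Hypothetical helper: replicate the last valid column of each row so the
 * pixels between inputWidth and the multiple-of-16 pitch hold defined values
 * instead of uninitialized memory (avoids quality loss at border MBs). */
static void pad_right_edge(unsigned char *plane, int inputWidth,
                           int paddedPitch, int height)
{
    for (int row = 0; row < height; row++) {
        unsigned char *line = plane + (long)row * paddedPitch;
        memset(line + inputWidth, line[inputWidth - 1],
               (size_t)(paddedPitch - inputWidth));
    }
}
```

The same idea applies to bottom padding: repeat the last valid row down to the padded height.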

IVIDENC1_DynamicParams

Variables described below control the width and height to be coded in the bitstream.

  inputHeight (XDAS_Int32, input): Height of the input frame in pixels. The input height can be changed before the start of encoding, within the limit of the maximum height set in the creation phase. inputHeight must be a multiple of two. The minimum height supported is 96; for encQuality = 2, the minimum height is 128. Below that, the encoder returns an error. Irrespective of interlaced or progressive content, the input height should be given as the frame height.

      Note:

      Progressive: When the input height is a non-multiple of 16, the encoder expects the application to pad the input frame to the nearest multiple of 16 at the bottom of the frame. In this case, the application should set the input height to the actual height but should provide the padded input YUV data buffer to the encoder. The encoder then sets the difference between the actual height and the padded height as crop information in the bitstream.

      Interlaced: When the input height is a non-multiple of 32, the encoder expects the application to pad the input frame to the nearest multiple of 32 at the bottom of the frame. In this case, the application should set the input height to the actual height but should provide the padded input YUV data buffer to the encoder. The encoder then sets the difference between the actual height and the padded height as crop information in the bitstream.
  inputWidth (XDAS_Int32, input): Width of the input frame in pixels. The input width can be changed before the start of encoding, within the limit of the maximum width set in the creation phase. inputWidth must be a multiple of two. The minimum width supported by the encoder is 128; for encQuality = 2, the minimum width supported is 320, and for any width less than 320 the encoder returns an error.

      Note: When the input width is a non-multiple of 16, the encoder expects the application to pad the input frame to the nearest multiple of 16 to the right of the frame. In this case, the application should set inputWidth to the actual width but should provide the padded input YUV data buffer to the encoder. The encoder then sets the difference between the actual width and the padded width as crop information in the bitstream.
  captureWidth (XDAS_Int32, input): Enables the application to provide input buffers with a different line width (pitch) alignment than the input width.

      For progressive content, if the parameter is set to:
      • 0: the encoded input width is used as the pitch.
      • >= encoded input width: the capture width is used as the pitch.
      For interlaced content, captureWidth should be equal to the pitch/stride value needed to move to the next row of pixels in the same field.
      Default value: 0


Notes:

  • When inputHeight/inputWidth is a non-multiple of 16, the encoder expects the application to pad the input frame to the nearest multiple of 16 at the bottom/right of the frame. In this case, the application sets inputHeight/inputWidth to the actual height/width; however, it should provide the padded input YUV data buffer to the encoder.
  • When inputWidth is a non-multiple of 16, the encoder expects the capture width to be the padded width (nearest multiple of 16). If the capture width is 0, the capture width is assumed to be the padded width. In all other cases, the capture width provided through the input parameter is used for input frame processing.
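The rounding rules in the notes above amount to a simple align-up computation; a small sketch (the helper name is ours, not from the API):

```c
/* Round a dimension up to the next multiple of `align`: 16 for width and
 * progressive height, 32 for interlaced height. The result is what the
 * application would place in frameWidth/frameHeight (and use as the padded
 * capture width) when the true dimension is not already aligned. */
static int align_up(int value, int align)
{
    return (value + align - 1) / align * align;
}
```

For example, a 1912x1080 interlaced input would use align_up(1912, 16) = 1920 as the padded width and align_up(1080, 32) = 1088 as the padded height.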

How to use interlaced encoding in the H.264 encoder?

For interlaced input, the scenario below needs to be taken care of. Note that it does not matter to the codec whether the input fields are stored in separate buffers or in the same buffer in an interleaved manner.


  1. IVIDENC1_Params->inputContentType = IVIDEO_INTERLACED
  2. IVIDENC1_DynamicParams->captureWidth: This field specifies the stride to move from one row to the next within the same field. If the fields are in separate buffers, its value should be the same as inputWidth (or set to 0; if set to 0, captureWidth is assumed to be the same as inputWidth). If the fields are in the same buffer in an interleaved manner, captureWidth = 2 x inputWidth.
  3. The process (encode) call of the encoder is at field level. To encode one frame (top and bottom field), two process calls are needed, one for each field. The process call for the top field should have the start pointer of the top field in IVIDEO1_BufDescIn *inBufs, whereas for the bottom field the start pointer of the bottom field should be passed in IVIDEO1_BufDescIn *inBufs. The application has to make arrangements to give the correct start pointer of the corresponding field.
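For the interleaved-buffer case, the per-field pointer and stride bookkeeping can be sketched as follows (the struct and function are illustrative, not part of the codec API):

```c
#include <stddef.h>

/* Illustrative bookkeeping for interleaved (field-in-frame) input: rows
 * alternate top/bottom field, so the bottom field starts one luma row into
 * the buffer and the within-field stride (captureWidth) is twice the row
 * width. For fields in separate buffers, captureWidth would be inputWidth. */
struct FieldView {
    unsigned char *start;  /* pointer to pass in IVIDEO1_BufDescIn *inBufs */
    int captureWidth;      /* stride to the next row of the same field     */
};

static struct FieldView field_view(unsigned char *frame, int inputWidth,
                                   int bottomField)
{
    struct FieldView v;
    v.start = frame + (bottomField ? (ptrdiff_t)inputWidth : 0);
    v.captureWidth = 2 * inputWidth;
    return v;
}
```

The application would call process() once with the top-field view and once with the bottom-field view to encode one frame.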

How to change resolution dynamically in the H.264 encoder?

The resolution can be changed in the following way:

  1. process() API with Res 1 <decided to change resolution>
  2. Invoke the control() API with XDM_RESET
  3. control() API - XDM_SETPARAMS with the new resolution, Res 2
  4. process() API with Res 2

Note

  • Res 1 and Res 2 should be within the limits of maxWidth and maxHeight specified at the time of create().
  • With codec version 2.00.00.07 and beyond, the control() API with XDM_RESET need not be called. Just calling the control() API with XDM_SETPARAMS for the new resolution changes the resolution. As expected, a change in resolution automatically forces an IDR frame with a new SPS and PPS.

How to change the rate control mode, frame rate, and bitrate dynamically in the H.264 encoder?

The rate control mode, frame rate, and bitrate can be changed in the following way:

  1. process() API <decided to change RC parameters after this frame>
  2. control() API - XDM_SETPARAMS with the new set of RC parameters
  3. process() API - the frame is coded with the new RC parameters

In normal mode, a change in any RC-related parameter automatically forces an IDR frame with a new SPS and PPS. If an IDR frame is not desired when the bitrate/framerate is changed, the user must set an appropriate value of enableVUIparams. Please see the user guide for details on "enableVUIparams".


How to force Intra frames in the H.264 encoder?

It can be done using the forceFrame parameter of dynamicParams.

Set the following:

  1. Set dynamicParams.forceFrame = IVIDEO_IDR_FRAME;
  2. Call VIDENC1_control() with XDM_SETPARAMS. This sets the force IDR frame parameter.
  3. Call VIDENC1_process(). This generates an IDR frame.
  4. Set dynamicParams.forceFrame = IVIDEO_NA_FRAME;
  5. Call VIDENC1_control() with XDM_SETPARAMS. This restores the original parameters for encoding and removes the force IDR frame setting.
  6. Call VIDENC1_process(). This resumes normal encoding.
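The six steps above form a set/process then reset/process pattern. The sketch below uses stand-in stubs that just record the call order; in a real application these would be the Codec Engine VIDENC1_control()/VIDENC1_process() calls, and the enum values are illustrative, not the real header values:

```c
#include <string.h>

enum { NA_FRAME = 0, IDR_FRAME = 1 };  /* stand-ins for IVIDEO_NA_FRAME and
                                          IVIDEO_IDR_FRAME                  */
static int  g_forceFrame = NA_FRAME;
static char g_log[16];

/* Stub for VIDENC1_control(handle, XDM_SETPARAMS, ...) */
static void set_params(int forceFrame)
{
    g_forceFrame = forceFrame;
    strcat(g_log, "S");
}

/* Stub for VIDENC1_process(); records whether the frame would be an IDR */
static void process_frame(void)
{
    strcat(g_log, g_forceFrame == IDR_FRAME ? "I" : "P");
}

static void force_idr_once(void)
{
    set_params(IDR_FRAME);   /* steps 1-2: forceFrame = IDR, XDM_SETPARAMS */
    process_frame();         /* step 3: this frame comes out as an IDR     */
    set_params(NA_FRAME);    /* steps 4-5: restore normal parameters       */
    process_frame();         /* step 6: normal encoding resumes            */
}
```

The same pattern applies to the generateHeader toggle in the next question.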

How to generate SPS and PPS headers in the H.264 encoder?

It can be done using the generateHeader parameter of dynamicParams.

Set the following:

  1. Set dynamicParams.generateHeader = XDM_GENERATE_HEADER;
  2. Call VIDENC1_control() with XDM_SETPARAMS. This sets header generation mode.
  3. Call VIDENC1_process(). This generates the SPS and PPS. No frame data is encoded.
  4. Set dynamicParams.generateHeader = XDM_ENCODE_AU;
  5. Call VIDENC1_control() with XDM_SETPARAMS. This restores the original parameters for encoding.
  6. Call VIDENC1_process(). This resumes normal encoding.

How to insert a user data SEI message in the H.264 bitstream?

A user data SEI message can be inserted in the H.264 bitstream using the extended parameters of IVIDENC1_InArgs and IVIDENC1_OutArgs. The parameter details are listed below.

InArgs

  insertUserData (Bool): 0 = do not insert user data; 1 = insert user data

  lengthUserData (UINT32): > 0 (bytes) when insertUserData = 1; = 0 when insertUserData = 0
      Error cases:
      • lengthUserData > 0 but insertUserData = 0: the codec assumes that no user data needs to be inserted
      • lengthUserData = 0 but insertUserData = 1: the codec assumes that no user data needs to be inserted

OutArgs

  offsetUserData (INT32): >= 0 (bytes), a valid offset when insertUserData = 1; = -1, set by the codec when insertUserData = 0 or there is no space for user data insertion

The offset (bytes) is with respect to the output buffer where the encoded frame is dumped after the process() call. The application should move to this offset and put user data of lengthUserData bytes.
  • User data is inserted using the "user_data_unregistered" SEI message.
  • For inserting user data information, the codec adds the start code and other relevant markers. But it does NOT actually insert user data in the stream. It only creates empty space in the bitstream and passes back the relevant pointer. The application can then fill the specific user data into the empty space provided in the bitstream.
  • The application should not include the user data start code or marker header size when sending the user data size to the codec. This means that the SEI NAL unit start code for H.264, the NAL header (1 byte), the SEI type (user data unregistered) and the payload size (encoded based on the length of user data provided by the application) should NOT be added to the length of the user data.
  • The pointer passed back by the codec corresponds to the start of syntax D.1.6 of the H.264 specification.
  • The codec assumes that the size of the user data provided includes the 16-byte "uuid_iso_iec_11578". This also means that any usable user data has to be longer than 16 bytes.
  • The codec adds dummy bytes 0xDD... (16 bytes) for the UUID and 0x0C for the user data payload byte.
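Putting the OutArgs contract together, the application-side fill step can be sketched as below (the helper is hypothetical; it only shows the offset handling, and lengthUserData is assumed to include the 16-byte UUID the codec pre-fills with dummy bytes):

```c
#include <string.h>

/* Hypothetical helper: after process(), copy the user data payload into the
 * empty space the codec reserved in the output bitstream buffer. Returns 0
 * on success, -1 when the codec reported no space (offsetUserData == -1). */
static int fill_user_data(unsigned char *outBuf, long offsetUserData,
                          const unsigned char *userData,
                          unsigned lengthUserData)
{
    if (offsetUserData < 0)
        return -1;                     /* insertUserData was 0 or no space */
    memcpy(outBuf + offsetUserData, userData, lengthUserData);
    return 0;
}
```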



How to insert a picture timing SEI message in the H.264 encoder?

The DM36x H.264 Encoder supports insertion of frame time-stamp through the Supplemental Enhancement Information (SEI) Picture Timing message. The time-stamp is useful for audio-synchronization and determining the exact timing for display of frames. The parameters coded in the SEI Picture Timing Message are also useful for testing HRD compliance.

The application should take proper care while setting the parameters for time-stamp and the actual time-stamp for each frame. Ideally, the time-stamp can be set based on the frame-rate. This simplifies the process of generating time-stamps. However, the application is free to use any method of time-stamp generation.

The following API variables affect the time-stamp generation:

  • dynamicParams->VUI_Buffer->timeScale
  • dynamicParams->VUI_Buffer->numUnitsInTicks
  • IH264VENC_InArgs->timeStamp

A time-stamp based on the frame-rate can be generated as follows.

Let f be the frame-rate of the sequence. Assuming a constant frame-rate sequence, set

TimeScale = k * f

NumUnitsInTicks = n

where k is an integer such that (k * f) and (k/n) are integers.

units_per_frame = k/n

For the first frame, set the TimeStamp parameter in the inArgs structure to 0. For subsequent frames, increment the TimeStamp by units_per_frame.

The four examples below apply this scheme to common frame rates:

  Example   f     k     TimeScale   NumUnitsInTicks   units_per_frame   TimeStamp
  1         30    2     60          1                 2                 0, 2, 4, 6, 8, ...
  2         25    2     50          2                 1                 0, 1, 2, 3, 4, ...
  3         15    1000  15000       1000              1                 0, 1, 2, 3, 4, ...
  4         0.5   200   100         100               2                 0, 2, 4, 6, 8, ...
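The scheme above reduces to plain integer arithmetic. A sketch follows (the struct and function names are ours; the fields map to dynamicParams->VUI_Buffer->timeScale/numUnitsInTicks and IH264VENC_InArgs->timeStamp):

```c
/* Frame-rate-based timestamp generation as described above:
 * TimeScale = k * f, NumUnitsInTicks = n, units_per_frame = k / n,
 * and the per-frame timestamp simply advances by units_per_frame. */
struct TsGen {
    unsigned timeScale;        /* -> VUI_Buffer->timeScale       */
    unsigned numUnitsInTicks;  /* -> VUI_Buffer->numUnitsInTicks */
    unsigned unitsPerFrame;    /* = k / n                        */
};

static unsigned frame_timestamp(const struct TsGen *g, unsigned frameIdx)
{
    return frameIdx * g->unitsPerFrame;  /* -> inArgs->timeStamp */
}
```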


The encoder sends the time-stamp difference between the current frame and the previous frame through the cpb_removal_delay parameter of the picture timing SEI message. The length of cpb_removal_delay is specified by cpb_removal_delay_length_minus1. The maximum value taken by cpb_removal_delay is (2^(cpb_removal_delay_length_minus1 + 1)) - 1. Once it reaches the maximum value, it resets and starts from zero. In this way the timestamp overflow problem is taken care of at the encoder even if the IH264VENC_InArgs->timeStamp provided by the application overflows.
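The wrap-around behaviour described above is plain modular arithmetic; a sketch (the function name is ours):

```c
/* The coded cpb_removal_delay is the timestamp delta modulo
 * 2^(cpb_removal_delay_length_minus1 + 1); it therefore peaks at
 * 2^(len+1) - 1 and then wraps to zero, which also makes application-side
 * timeStamp overflow harmless (unsigned subtraction wraps consistently). */
static unsigned long coded_cpb_removal_delay(unsigned long curTs,
                                             unsigned long prevTs,
                                             unsigned lengthMinus1)
{
    unsigned long mod = 1UL << (lengthMinus1 + 1);
    return (curTs - prevTs) & (mod - 1);
}
```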


How to insert a custom VUI message in the H.264 bitstream?

VUI information contains HRD parameters which can be used for HRD compliance. The codec adds the relevant VUI parameters when enableVUIparams is set to 1. If the application wants to insert its own VUI parameters, it can do so by setting enableVUIparams to 4. The application must then provide a pointer to a VUI parameter buffer through dynamicParams. This is explained in detail in the user guide. It is very likely that the application is interested in updating only a few parameters. In this case the application has to do the following:

  1. Set the VUI parameter buffer with the default values provided by codec.
  2. Update the respective fields with the desired value.

For instance, the application might want to send only the timing information and not update any other information. This can be done as below:

dynamicParams->VUI_Buffer = &VUIPARAMBUFFER;    // Sets default values for all VUI parameters;
                                                // VUIPARAMBUFFER is declared in ih264venc.h and
                                                // contains the default values of the VUI params
dynamicParams->VUI_Buffer->timeScale = 80;      // Updated to the new value
dynamicParams->VUI_Buffer->numUnitsInTicks = 2; // Updated to the new value

How to enable getting MV/SAD information in the encoder?

Please see the wiki topic


Is there a "cache setting" recommendation while running the encoder/decoder in the application?

It is highly recommended to follow the settings below:

  • The codec MemTab[] buffers, which are needed at the time of codec creation, should be allocated from a cached region. This improves the performance of the codec.
  • The I/O buffers provided at the time of the process call (inBufs and outBufs) must be non-cached. This is to maintain DMA/cache coherency.

How to use the DM36x H.264 encoder low latency callback APIs for achieving slice level capture and encode?

The steps below illustrate how to use the DM365 H.264 encoder low latency API for achieving slice level capture and encode synchronization. It is assumed that slice level capture is already implemented in the capture driver.


  • Step 1: Prerequisites

The parameter details of the low latency API are in the user guide; refer to section 4.5 (in conjunction with sections 4.2.1 and 4.2.2). A sample application is also part of the encoder sample test application; see h264encoderapp.c and look for #define LOW_LATENCY. Once a slice-based capture driver is implemented, the remaining work is to synchronize the encode and capture drivers. The idea is that encode should not run ahead of capture. This can be achieved in the following way.


  • Step 2: Configuring encoder

Assume a slice size of 2 macroblock (MB) rows. Set the encoder with the parameters below:

 IH264VENC_Params->sliceMode = 3
 IH264VENC_Params->outputDataMode = 0
 IH264VENC_Params->sliceFormat = 1 (assuming byte stream encoding)
 IH264VENC_DynamicParams->sliceSize = 2
 IH264VENC_InArgs-> numOutputDataUnits = 1

This enables the encoder to produce slices of 2 MB rows, and the low latency callback API is called after each slice is encoded for data exchange.


  • Step 3: Synchronization

If the encoder is run in the above mode, the application sees the callback function being invoked after each slice is encoded. The application can use this callback API for synchronization as well as data exchange. Once the next 2 MB rows of data have been put into DDR by the capture driver, the application can give the next output slice pointer to the codec and release the callback. This makes the encoder proceed with further encoding. Please note that the output slice buffer pointer of the encoded bitstream, rather than the input YUV, is used to control the encoder. The input YUV pointer is given at the start of the process call only, as in normal encoding.


