Transcoding Test and Demo

From Texas Instruments Wiki

Overview

This wiki is part of a "cloud HPC" series showing how to use c66x co-CPU™ cards in commodity servers to achieve real-time, high capacity processing and analytics of multiple concurrent streams of media, signals and other data.

The focus of this wiki is to demonstrate high capacity transcoding on c66x co-CPU cards using the mediaTest program. mediaTest is an application in the cloud HPC co-CPU software model (see diagram below), providing a media transcoding test environment. mediaTest can be used in two basic modes:

  • RTP mode -- pcap files are used for session real-time input/output
  • Diagnostic mode -- audio data or compressed data files are used for session input/output

The following sections describe mediaTest functionality, session configuration files, and API interface and example source code.

Other wikis in the cloud HPC series include:

Capabilities

  • Multiple concurrent session configuration using command-line specified session config files
  • Packet statistics printout
  • CPU usage, mem usage, and other diagnostic printout

RTP Mode Capabilities

  • Operate with live streams or pcap files (using the pcap player)
  • Packet buffering performed either by the host, using PCIe or SRIO interfaces to c66x CPUs, or by co-CPU card network I/O (i.e. onboard network I/O)
  • In all cases, packet streams contain IPv4 and IPv6 UDP/RTP packets, with session coherence maintained by hashing performed on c66x cores
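The core-selection step above can be pictured as a hash over the packet's addressing tuple, so that all packets belonging to one session consistently reach the same c66x core. The sketch below is illustrative only; the actual hashing scheme used on c66x cores is not documented in this wiki, and all function names here are hypothetical:

```c
#include <stdint.h>

/* Illustrative only: map a UDP/RTP flow to a core index by hashing its
   addressing tuple, so packets of one session always land on the same core */

static uint32_t flow_hash(uint32_t src_ip, uint32_t dst_ip, uint16_t src_port, uint16_t dst_port) {

   uint32_t h = 2166136261u;  /* FNV-1a style mixing */
   uint32_t k[3] = { src_ip, dst_ip, ((uint32_t)src_port << 16) | dst_port };

   for (int i = 0; i < 3; i++) {
      h ^= k[i];
      h *= 16777619u;
   }
   return h;
}

static int core_for_flow(uint32_t src_ip, uint32_t dst_ip, uint16_t src_port, uint16_t dst_port, int num_cores) {

   return (int)(flow_hash(src_ip, dst_ip, src_port, dst_port) % (uint32_t)num_cores);
}
```

Because the hash is a pure function of the flow's addresses and ports, no per-packet session lookup table is needed on the receive path, which is what allows distribution at wire speed.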

Diagnostic Mode Capabilities

  • Encoder only, decoder only, and combined modes
  • Performance benchmarking and other measurements, including CPU, heap, and stack usage
  • Audio file formats supported include .wav, .pcm, .tim, and various raw (no header) formats
  • Compressed data file formats supported include .cod

DirectCore® Summary

DirectCore® is the library API to which mediaTest interfaces (see software model diagram below). The DirectCore wiki has detailed information; here is a brief summary:

  • DirectCore libraries and drivers abstract all c66x cores as a "unified pool" of cores, allowing multiple users / VM instances to share c66x resources, including NICs on PCIe cards or ATCA blades. This applies regardless of the number of cards or blades installed. Both a physical "back end" driver and virtual "front end" drivers are used
  • APIs are fully concurrent between applications. The physical driver automatically maximizes PCIe bus bandwidth across multiple c66x CPUs
  • APIs are mostly synchronous; asynchronous mailbox APIs are also supported. In PCIe based platforms, APIs use inbound transactions, and shared memory between host and guest CPUs (for example with DPDK) uses outbound transactions
  • DirectCore allows true multiuser operation, without time-slicing or batch jobs. Multiple host and VM instances can allocate and utilize c66x resources concurrently. For example, multiple mediaTest instances can run concurrently

Software Model

Below is a diagram showing where DirectCore libs and drivers fit in the cloud HPC co-CPU software architecture.

 

HPC co-CPU software model diagram

 

Some notes about the above diagram:

  • Application complexity increases from left to right (command line, open source library APIs, user code APIs, heterogeneous programming)
  • All application types can run concurrently in host or VM instances (see below for VM configuration)
  • c66x CPUs can make direct DMA access to host memory, facilitating use of DPDK. Host CPU memory DMA capability can also be used to share data between c66x CPUs, for example in an application such as H.265 (HEVC) encoding, where tens of cores must work concurrently on the same data set
  • c66x CPUs are connected directly to the network. Received packets are filtered by UDP port and distributed to c66x cores at wire speed

Session Config Files

Below is an example session configuration file that sets up two (2) sessions:

# Session 0
 
session=0
term1.local_ip=10.0.1.211
term1.local_port=10240
term1.remote_ip=10.0.1.71
term1.remote_port=10240
term1.media_type=voice
term1.codec_type=EVS
term1.bitrate=13200
term1.ptime=20
term1.rtp_payload_type=127
term1.dtmf_type=NONE
term1.dtmf_payload_type=NONE
term1.evs_sample_rate=0
term1.evs_header_full=0
 
term2.local_ip=10.0.1.211
term2.local_port=10242 
term2.remote_ip=10.0.1.71
term2.remote_port=10242
term2.media_type=voice
term2.codec_type=G711_ULAW
term2.bitrate=64000
term2.ptime=20
term2.rtp_payload_type=0
term2.dtmf_type=NONE
term2.dtmf_payload_type=NONE
 
corelist=0x01
 
# Session 1
 
session=1
term1.local_ip=10.0.1.211
term1.local_port=10244
term1.remote_ip=10.0.1.71
term1.remote_port=10244
term1.media_type=voice
term1.codec_type=EVS
term1.bitrate=13200
term1.ptime=20
term1.rtp_payload_type=127
term1.dtmf_type=NONE
term1.dtmf_payload_type=NONE
term1.evs_sample_rate=0
term1.evs_header_full=0
 
term2.local_ip=10.0.1.211
term2.local_port=10246 
term2.remote_ip=10.0.1.71
term2.remote_port=10246
term2.media_type=voice
term2.codec_type=G711_ULAW
term2.bitrate=64000
term2.ptime=20
term2.rtp_payload_type=0
term2.dtmf_type=NONE
term2.dtmf_payload_type=NONE
 
corelist=0x01

Although the above session config file shows only two sessions as a brief wiki example, any number of sessions can be configured in the file.
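The config file format is a flat list of `name=value` lines, with `#` comments separating sessions. Below is a minimal sketch of how such lines can be read; it handles only a few term1 fields as an illustration, and is not the actual `parse_session_params()` implementation (which is not shown in this wiki):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Minimal sketch of reading "name=value" session config lines, as in the
   example file above. Hypothetical demo struct; only a few fields handled */

typedef struct {
   int session;
   char term1_codec_type[16];
   int term1_bitrate;
   int term1_ptime;
} demo_session_t;

/* returns 1 if the line carried a name=value pair, 0 for comments/blanks */
int demo_parse_line(const char* line, demo_session_t* s) {

   char name[64], value[64];

   if (line[0] == '#' || sscanf(line, "%63[^=]=%63s", name, value) != 2) return 0;

   if (!strcmp(name, "session")) s->session = atoi(value);
   else if (!strcmp(name, "term1.codec_type")) strncpy(s->term1_codec_type, value, sizeof(s->term1_codec_type) - 1);
   else if (!strcmp(name, "term1.bitrate")) s->term1_bitrate = atoi(value);
   else if (!strcmp(name, "term1.ptime")) s->term1_ptime = atoi(value);

   return 1;
}
```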

Session API Examples

Example C code for the session create and tear-down APIs is shown below, using mailbox APIs provided by DirectCore. Note that mailbox communication between host and guest CPUs is asynchronous, whereas most DirectCore APIs are synchronous.

 

/* CreateSessions() - set up sessions
   returns -1 on error, otherwise returns number of sessions created
*/ 
 
int CreateSessions(HCARD hCard, PMEDIAPARAMS mediaParams, stack_t* session_stack, QWORD nCoreList) {
 
uint32_t trans_id = 0xabab;
uint32_t size;
char* tx_buffer = (char*)calloc(TRANS_MAILBOX_MAX_PAYLOAD_SIZE, sizeof(char));
session_params_t params;
char default_config_file[] = "session_config/test_config";
char* config_file;
 
   memset(&params, 0, sizeof(params));
 
   FILE* cfg_fp = NULL;
 
   if (strlen(mediaParams->configFilename) == 0 || (access(mediaParams->configFilename, F_OK ) == -1)) {
 
      printf("Specified config file: %s does not exist, using default file\n", mediaParams->configFilename);
      config_file = default_config_file;
   }
   else config_file = mediaParams->configFilename;
 
   printf("Opening session config file: %s\n", config_file);
 
   cfg_fp = fopen(config_file, "r");

   if (cfg_fp == NULL) {

      printf("ERROR: failed to open session config file: %s\n", config_file);
      free(tx_buffer);
      return -1;
   }
 
   /* read test_config and send session data to corresponding c66x CPU cores */
 
   while (parse_session_params(cfg_fp, &params) != -1) {
 
      size = prepare_session_creation(tx_buffer, global_session_id, params);
 
      if (!DSWriteMailbox(hCard, (uint8_t*)tx_buffer, size, trans_id++, nCoreList)) {
 
         printf("ERROR: failed to send session create command for session %d on cores %llx\n", global_session_id, nCoreList);
         fclose(cfg_fp);
         free(tx_buffer);
         return -1;
      }
 
   /* add session to stack */
 
      stack_t* session_entry = (stack_t*)calloc(1, sizeof(stack_t));
      session_entry->id = global_session_id;
      session_entry->param = params;
      session_entry->next = session_stack;
      session_entry->nCoreList = nCoreList;
      session_stack = session_entry;
 
      printf("session creation command sent to cores %llx with session_id %d\n", session_stack->nCoreList, global_session_id);
      global_session_id++;
      memset(&params, 0, sizeof(params));
   }
 
   fclose(cfg_fp);
   free(tx_buffer);
   return global_session_id;
}
 
/* DeleteSessions() - tear down sessions
   returns -1 on error, otherwise returns 0 on success
*/
 
int DeleteSessions(HCARD hCard, stack_t* session_stack) {
 
uint32_t trans_id = 0xabab;
uint32_t size;
char* tx_buffer = (char*)calloc(TRANS_MAILBOX_MAX_PAYLOAD_SIZE, sizeof(char));
 
/* close all sessions & output files */
 
   while (session_stack != NULL) {
 
      size = prepare_session_deletion(tx_buffer, session_stack->id);
 
      if (!DSWriteMailbox(hCard, (uint8_t*)tx_buffer, size, trans_id++, session_stack->nCoreList)) {
 
         printf("ERROR: failed to send session delete command, error code = %d\n", DSGetAPIErrorStatus());
         free(tx_buffer);
         return -1;
      }
 
      printf("deleting session %d on cores %llx\n", session_stack->id, session_stack->nCoreList);
      stack_t* temp = session_stack->next;
      free(session_stack);
      session_stack = temp;
   }
 
   free(tx_buffer);
   return 0;
}

API Header File Excerpts

A few of the relevant struct definitions are given below. These structs are shared between x86 and c66x co-CPUs.

 

typedef struct {
 
  uint32_t term_id;
  uint32_t media_type : 8;  /* see media_type enums */
  uint32_t codec_type : 8;  /* use voice_codec_type or video_codec_type enums (if media_type is VOICE or VIDEO) */
  uint32_t vqe_processing_interval : 16;
 
  uint32_t bitrate;     /* bps */
 
  struct ip_addr remote_ip;
  struct ip_addr local_ip;
  uint32_t remote_port : 16;
  uint32_t local_port : 16;
 
  struct jitter_buffer_config jb_config;
 
  union {
     struct voice_attributes voice_attr;
     struct video_attributes video_attr;
  } attr;
 
} TERM_INFO;
 
typedef struct {
 
   uint32_t session_id;
   uint32_t HA_index;   /* ha_index = 0 (see comments in session.h about this element's value for cases of N+1, 1+1, or High Availability) */
 
   TERM_INFO term1;
   TERM_INFO term2;
 
} SESSION_DATA;
 
typedef struct {
 
   SESSION_DATA session_data;
   QWORD nCoreList;
 
} session_params_t;
 
typedef struct stack_t_ {

   int id;
   session_params_t param;
   QWORD nCoreList;
   struct stack_t_* next;

} stack_t;

 

Note that a number of struct definitions above are not shown, including voice and video attributes, jitter buffer configuration, echo can attributes, DTMF configuration, video streaming params, etc. To obtain a copy of all header files, please contact Signalogic.
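To connect the structs above with the session config file earlier on this page, the sketch below fills a term1-style struct with Session 0's values (EVS, 13200 bps, 10.0.1.211:10240 ↔ 10.0.1.71:10240). Since the full headers are not published here, it uses stand-in type and enum definitions (all `DEMO_` names are hypothetical), keeping only the fields shown in TERM_INFO above:

```c
#include <stdint.h>
#include <string.h>

/* Stand-ins for definitions not shown in this wiki (struct ip_addr and the
   codec enums); field values mirror term1 of Session 0 in the config file */

struct ip_addr { uint32_t u; };  /* IPv4 address as a host-order word */

enum { DEMO_MEDIA_TYPE_VOICE = 1 };                    /* hypothetical values */
enum { DEMO_CODEC_EVS = 1, DEMO_CODEC_G711_ULAW = 2 };

typedef struct {
   uint32_t term_id;
   uint32_t media_type : 8;
   uint32_t codec_type : 8;
   uint32_t bitrate;                /* bps */
   struct ip_addr remote_ip;
   struct ip_addr local_ip;
   uint32_t remote_port : 16;
   uint32_t local_port : 16;
} DEMO_TERM_INFO;

DEMO_TERM_INFO make_term1(void) {

   DEMO_TERM_INFO t;
   memset(&t, 0, sizeof(t));

   t.term_id = 1;
   t.media_type = DEMO_MEDIA_TYPE_VOICE;
   t.codec_type = DEMO_CODEC_EVS;
   t.bitrate = 13200;                                  /* term1.bitrate */
   t.local_ip.u  = (10u<<24) | (0u<<16) | (1u<<8) | 211u;  /* 10.0.1.211 */
   t.remote_ip.u = (10u<<24) | (0u<<16) | (1u<<8) | 71u;   /* 10.0.1.71 */
   t.local_port = 10240;
   t.remote_port = 10240;
   return t;
}
```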

mediaTest Screen Captures

Below is a screen capture showing mediaTest running EVS 8 kHz 7.2 kbps in diagnostic mode (encoder + decoder).

 

mediaTest diagnostic mode screen capture

 

Installing / Configuring VMs

Below is a screen capture showing VM configuration for c66x co-CPU™ cards, using the Ubuntu Virtual Machine Manager (VMM) user interface:

VMM dialog showing VM configuration for c66x co-CPU cards

c66x core allocation is transparent to the number of PCIe cards installed in the system; just like installing memory DIMMs of different sizes, c66x cards can be mixed and matched.
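The `corelist=0x01` entries in the session config files above, and the `nCoreList` argument passed to DSWriteMailbox(), are bitmasks with one bit per c66x core in the unified pool. Assuming that bit-per-core layout, a core list spanning any number of cards can be built as below (`build_corelist` is a hypothetical helper, not a DirectCore API):

```c
#include <stdint.h>

typedef unsigned long long QWORD;  /* as used for nCoreList in the code above */

/* Illustrative only: build a corelist bitmask (one bit per c66x core, as in
   the corelist=0x01 config entries) covering num_cores cores starting at
   first_core in the unified pool, regardless of which card supplies them */

QWORD build_corelist(int first_core, int num_cores) {

   QWORD mask = 0;
   for (int i = 0; i < num_cores; i++) mask |= 1ULL << (first_core + i);
   return mask;
}
```

For example, the first core of the pool is `0x01` (matching the config files above), and an 8-core CPU starting at core 8 would be `0xff00`; the mask is indifferent to card boundaries, which is what makes mixing cards of different core counts work like mixing DIMM sizes.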