Writing V4L2 Media Controller Applications on Dm36x Video Capture

From Texas Instruments Wiki


Media Controller Basics

Media Controller is a framework for developing V4L2-based drivers which, compared to the earlier plain V4L2 implementation, has a broader view of the device architecture. While plain V4L2 viewed the device as a simple DMA-based image grabber that moves input data into host memory, Media Controller takes into account that a typical video device may consist of multiple sub-devices (sensors, video decoders, video encoders) working in tandem, and that image grabbing and display are not a single DMA transfer but may involve smaller sub-blocks that perform processing such as resizing and format conversion. The document below explains these changes in design philosophy and elaborates on the nuts and bolts that make the Media Controller what it is today. It also covers the DM365-specific implementation details of the VPFE capture driver.

Media Device

A Media device is the umbrella device under which multiple sub-entities, called Media entities, can be accessed, modified and worked upon. The Media Device is exposed to the user in the form of a device file, which can be opened to enumerate, set and get the parameters of each of the media entities. For example, in the DM365 implementation the entire VPFE capture device, with its IPIPE, IPIPEIF, CCDC etc., is exposed as a Media Device: /dev/media0. If there were a display driver, it would be exposed as a Media device too.
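As a minimal sketch of how a media device node is used, it can be opened and queried for its driver and model strings with the standard MEDIA_IOC_DEVICE_INFO ioctl (the helper name below is ours, not part of the driver):

```c
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/media.h>

/* Open a media device node and print its identification strings.
 * Returns 0 on success, -1 if the node cannot be opened or queried. */
int media_print_info(const char *devnode)
{
	struct media_device_info info;
	int fd, ret;

	fd = open(devnode, O_RDWR);
	if (fd < 0)
		return -1;

	ret = ioctl(fd, MEDIA_IOC_DEVICE_INFO, &info);
	if (ret == 0)
		printf("driver: %s, model: %s, bus: %s\n",
		       info.driver, info.model, info.bus_info);

	close(fd);
	return ret ? -1 : 0;
}
```

On a DM365 board this would be called with "/dev/media0".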

Media Entity

A Media entity is a sub-block of a particular media device which usually performs a specific function. It can be thought of as a connectable but self-contained block which may have a register set of its own for setting parameters and can be programmed independently. This could be a sub-IP, or a helper device on the same board which offloads a particular function such as RAW capture, YUV capture or filtering. On DM365 all the sensors and video decoders are media entities, and the core itself has been modeled with the CCDC, Previewer, Resizer, H3A and AEW as entities. These can be enumerated in the standard V4L2 way using the Media device as the enumerating device. Each of these entities has one or more input and output pads, and is connectable to another entity through a 'link' between the pads.

Sub-devices

Conceptually similar to a Media Entity, a sub-device is viewed as a sub-block of a V4L2 video device which is independently configurable through its own set of file operations. The file operations are exposed through V4L2-like IOCTLs particular to sub-devices. Each sub-device is exposed to the user level through device files starting with “subdev-*”. User applications need to configure V4L2-related settings such as format, crop and size parameters through these device handles for each of the sub-devices to make them work in tandem. Structurally, there is almost a one-to-one correspondence between a Media Entity and a sub-device.

Subdev.png

Pads

“Pads” are the input and output connection points of a Media Entity. The number of pads an entity has is fixed in the driver, depending on the connections the entity can support. Typically, a device like a sensor or a video decoder has only an output pad since it only feeds video into the system, while a /dev/video node is modeled with an input pad since it is the end of the stream. Other entities like the Resizer and Previewer typically have an input and an output pad, and sometimes more depending on their capabilities.

Link

A link is a 'connection' between pads of different entities. These links can be set, retrieved and enumerated through the Media Device. For the driver to work properly, the application is responsible for setting up the links correctly so that the driver knows the source and destination of the video data.

Entity Graph

An entity graph is the complete setup of the different entities, pads, and links. For the software to work properly, the entity graph must be set up correctly; before streaming can start, the driver validates the graph and the settings on each sub-device to determine the intent of the application. For example, whether the video input is RAW BAYER or YUV is determined by how the entity graph is set up.

The DM365 supports the following fixed entity and pad configuration.


Entities.png
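The driver's graph validation can be pictured as a reachability check over the enabled links: streaming is only allowed if an enabled chain leads from the external source entity to the video node. A toy sketch of that idea (the struct and entity ids are illustrative, not driver types; an acyclic graph is assumed):

```c
/* One enabled link in the graph: source entity id -> sink entity id.
 * This mirrors, in spirit, what the driver's validation does before
 * streaming: find an enabled chain from source to video node. */
struct sketch_link {
	int source;
	int sink;
};

/* Return 1 if a chain of links leads from 'from' to 'to', else 0.
 * Assumes the link graph is acyclic, as media graphs are. */
int path_exists(const struct sketch_link *links, int n_links, int from, int to)
{
	int i;

	if (from == to)
		return 1;
	for (i = 0; i < n_links; i++)
		if (links[i].source == from &&
		    path_exists(links, n_links, links[i].sink, to))
			return 1;
	return 0;
}
```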

Modes of Operation

Continuous Mode Operation

Continuous mode refers to the configuration where the input data stream is regulated by a standard capture format like NTSC or 720p, and where data from an external source is stored in DDR. Here the input data is “streamed” to the DDR, so the exchange of buffers between driver and application happens on a continuous basis, regulated by the interrupts generated at every field or frame. The sink is invariably a video node, whereas the source is an external sensor, a decoder or an inbuilt ADC. The streaming mode supported here is the standard V4L2 mode of streaming.

Continuous.png

Single Shot Mode Operation

As opposed to continuous mode, “single-shot mode” refers to a setup where the data input comes from DDR rather than an external source, and the output goes to DDR again. Here both the source and the sink are video nodes. To facilitate this kind of interface, the input DDR source is mapped to a separate video device, which is used only in the single-shot modes. The input video node supports a source media device pad, and the output video node supports a sink media device pad.

The driver supports buffer exchanges on input and output using the standard V4L2 calls, with the exception that the present driver supports only a single buffer at a time. On DM365, the list of single-shot drivers is as follows:


  • Resizer
  • Previewer
  • Previewer + Resizer configuration


Singleshot.png

Steps for writing a Media Controller application



Start with an initial set of pre-defined strings and pad identification numbers for each entity.

/* Media entity names */
#define E_VIDEO_CCDC_OUT_NAME	"DAVINCI VIDEO CCDC output"
#define E_VIDEO_PRV_OUT_NAME	"DAVINCI VIDEO PRV output"
#define E_VIDEO_PRV_IN_NAME	"DAVINCI VIDEO PRV input"
#define E_VIDEO_RSZ_OUT_NAME	"DAVINCI VIDEO RSZ output"
#define E_VIDEO_RSZ_IN_NAME	"DAVINCI VIDEO RSZ input"
#define E_TVP514X_NAME		"tvp514x"
#define E_TVP7002_NAME		"tvp7002"
#define E_MT9P031_NAME		"mt9p031"
#define E_CCDC_NAME		"DAVINCI CCDC"
#define E_PRV_NAME		"DAVINCI PREVIEWER"
#define E_RSZ_NAME		"DAVINCI RESIZER"
#define E_AEW_NAME		"DAVINCI AEW"
#define E_AF_NAME		"DAVINCI AF"

/* pad id's as enumerated by media device*/
#define P_RSZ_SINK	0 /* sink pad of rsz */
#define P_RSZ_SOURCE	1 /* source pad of rsz */
#define P_PRV_SINK	0
#define P_PRV_SOURCE	1
#define P_RSZ_VID_OUT	0 /* only one pad for video node */
#define P_RSZ_VID_IN	0 /* only one pad for video node */
#define P_PRV_VID_IN	0
#define P_PRV_VID_OUT	0
#define P_TVP514X	0 /* only one pad for decoder */
#define P_TVP7002	0 /* only one pad for decoder */
#define P_MT9P031	0 /* only one pad for sensor */
#define P_CCDC_SINK	0 /* sink pad of ccdc */
#define P_CCDC_SOURCE	1 /* source pad which connects video node */
#define P_VIDEO		0 /* only one input pad for video node */
#define P_AEW		0
#define P_AF		0


Open the Media Device

	/* 3.open media device */
	media_fd = open("/dev/media0", O_RDWR);
	if (media_fd < 0) {
		printf("%s: Can't open media device %s\n", __func__, "/dev/media0");
		goto cleanup;
	}

Enumerate the media entities. It is a good idea to store the indices of the entities here so they can be addressed by index later.

     
       #define ENTITY_COUNT  15
       struct media_entity_desc entity[ENTITY_COUNT];
       ......
	/* 4.enumerate media-entities */
	printf("4.enumerating media entities\n");
	index = 0;
	do {
		memset(&entity[index], 0, sizeof(struct media_entity_desc));
		entity[index].id = index | MEDIA_ENT_ID_FLAG_NEXT;

		ret = ioctl(media_fd, MEDIA_IOC_ENUM_ENTITIES, &entity[index]);
		if (ret < 0) {
			if (errno == EINVAL)
				break;
		}else {
			if (!strcmp(entity[index].name, E_VIDEO_CCDC_OUT_NAME)) {
				E_VIDEO =  entity[index].id;
			}
			else if (!strcmp(entity[index].name, E_MT9P031_NAME)) {
				E_MT9P031 =  entity[index].id;
			}
			else if (!strcmp(entity[index].name, E_CCDC_NAME)) {
				E_CCDC =  entity[index].id;
			}
			printf("[%x]:%s\n", entity[index].id, entity[index].name);
		}

		index++;
	} while (ret == 0 && index < ENTITY_COUNT);
	entities_count = index;
	printf("total number of entities: %x\n", entities_count);

Enumerate all the links and pads. This step is optional and is shown for information only.

 
	/* 5.enumerate links for each entity */
	printf("5.enumerating links/pads for entities\n");

	for (index = 0; index < entities_count; index++) {
		struct media_pad_desc *pad;
		struct media_link_desc *lnk;

		links.entity = entity[index].id;
		/* allocate per-entity arrays; freed once printed */
		links.pads = malloc(sizeof(struct media_pad_desc) * entity[index].pads);
		links.links = malloc(sizeof(struct media_link_desc) * entity[index].links);

		ret = ioctl(media_fd, MEDIA_IOC_ENUM_LINKS, &links);
		if (ret < 0) {
			if (errno == EINVAL)
				break;
		} else {
			/* display pads info first */
			if (entity[index].pads)
				printf("pads for entity %x=", entity[index].id);

			for (i = 0, pad = links.pads; i < entity[index].pads; i++, pad++)
				printf("(%x, %s) ", pad->index,
				       (pad->flags & MEDIA_PAD_FL_INPUT) ? "INPUT" : "OUTPUT");

			printf("\n");

			/* display links now */
			for (i = 0, lnk = links.links; i < entity[index].links; i++, lnk++) {
				printf("[%x:%x]-------------->[%x:%x]",
				       lnk->source.entity, lnk->source.index,
				       lnk->sink.entity, lnk->sink.index);
				if (lnk->flags & MEDIA_LNK_FL_ENABLED)
					printf("\tACTIVE\n");
				else
					printf("\tINACTIVE\n");
			}

			printf("\n");
		}
		free(links.pads);
		free(links.links);
	}

Enable the appropriate links. This means connecting the pads in a way that fulfills the application's need for video capture. If RAW capture is needed, the MT9P031 link is enabled; if processed YUV is needed, the TVP514x link is enabled. Here we assume RAW BAYER and hence MT9P031.

 

	/* 6. enable 'mt9p031-->ccdc' link */
	printf("6. ENABLEing link [mt9p031]----------->[ccdc]\n");
	memset(&link, 0, sizeof(link));

	link.flags |=  MEDIA_LNK_FL_ENABLED;
	link.source.entity = E_MT9P031;
	link.source.index = P_MT9P031;
	link.source.flags = MEDIA_PAD_FL_OUTPUT;

	link.sink.entity = E_CCDC;
	link.sink.index = P_CCDC_SINK;
	link.sink.flags = MEDIA_PAD_FL_INPUT;

	ret = ioctl(media_fd, MEDIA_IOC_SETUP_LINK, &link);
	if(ret) {
		printf("failed to enable link between mt9p031 and ccdc\n");
		goto cleanup;
	} else
		printf("[mt9p031]----------->[ccdc]\tENABLED\n");
 
	/* 7. enable 'ccdc->memory' link */
	printf("7. ENABLEing link [ccdc]----------->[video_node]\n");
	memset(&link, 0, sizeof(link));

	link.flags |=  MEDIA_LNK_FL_ENABLED;
	link.source.entity = E_CCDC;
	link.source.index = P_CCDC_SOURCE;
	link.source.flags = MEDIA_PAD_FL_OUTPUT;

	link.sink.entity = E_VIDEO;
	link.sink.index = P_VIDEO;
	link.sink.flags = MEDIA_PAD_FL_INPUT;

	ret = ioctl(media_fd, MEDIA_IOC_SETUP_LINK, &link);
	if(ret) {
		printf("failed to enable link between ccdc and video node\n");
		goto cleanup;
	} else
		printf("[ccdc]----------->[video_node]\t ENABLED\n");

	printf("**********************************************\n");


Now that all the links are set up properly, it is time to open the capture device.

 
	/* 14.open capture device */
	if ((capt_fd = open("/dev/video0", O_RDWR | O_NONBLOCK, 0)) < 0) {
		printf("failed to open %s \n", "/dev/video0");
		goto cleanup;
	}

Enumerate the inputs supported by the capture device.

 
	/* 15.enumerate inputs supported by capture*/
	printf("15.enumerating INPUTS\n");
	bzero(&input, sizeof(struct v4l2_input));
	input.type = V4L2_INPUT_TYPE_CAMERA;
	input.index = 0;
	index = 0;
  	while (1) {

		ret = ioctl(capt_fd, VIDIOC_ENUMINPUT, &input);
		if(ret != 0)
			break;

		printf("[%x].%s\n", index, input.name);

		bzero(&input, sizeof(struct v4l2_input));
		index++;
		input.index = index;
  	}
  	

Set Camera as the input.

 
	/* 16.setting CAMERA as input */
	printf("16. setting CAMERA as input. . .\n");
	bzero(&input, sizeof(struct v4l2_input));
	input.type = V4L2_INPUT_TYPE_CAMERA;
	input.index = 0;
	if (-1 == ioctl (capt_fd, VIDIOC_S_INPUT, &input.index)) {
		printf("failed to set CAMERA with capture device\n");
		goto cleanup;
	} else
		printf("successfully set CAMERA input\n");


Set the FORMAT on the output pad of the MT9P031. Here we open the appropriate sub-device node to do this. In this example we know the sub-device number; however, it might not always be known.

 
	/* 8. set format on pad of mt9p031 */
	mt9p_fd = open("/dev/v4l-subdev0", O_RDWR);
	if(mt9p_fd == -1) {
		printf("failed to open %s\n", "/dev/v4l-subdev0");
		goto cleanup;
	}

	printf("8. setting format on pad of mt9p031 entity. . .\n");
	memset(&fmt, 0, sizeof(fmt));

	fmt.pad = P_MT9P031;
	fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
	fmt.format.code = CODE;
	fmt.format.width = width;
	fmt.format.height = height;
	fmt.format.field = V4L2_FIELD_NONE;

	ret = ioctl(mt9p_fd, VIDIOC_SUBDEV_S_FMT, &fmt);
	if(ret) {
		printf("failed to set format on pad %x\n", fmt.pad);
		goto cleanup;
	}
	else
		printf("successfully format is set on pad %x\n", fmt.pad);
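The sub-device node was hard-coded as /dev/v4l-subdev0 above. When the node number is not known in advance, one common approach is to take the major/minor device numbers reported for the entity by MEDIA_IOC_ENUM_ENTITIES and look the node up under sysfs; reading the uevent file in that directory yields a DEVNAME= line naming the /dev node. A sketch of the path construction (the helper is ours):

```c
#include <stdio.h>

/* Build the sysfs directory for a character device from its major:minor
 * numbers.  The "uevent" file in this directory contains a DEVNAME=
 * line naming the corresponding /dev node.
 * Returns 0 on success, -1 if the buffer is too small. */
int sysfs_chardev_path(char *buf, size_t len, unsigned int maj, unsigned int min)
{
	int n = snprintf(buf, len, "/sys/dev/char/%u:%u", maj, min);
	return (n < 0 || (size_t)n >= len) ? -1 : 0;
}
```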

Similarly, set the format on the sink pad of the CCDC (sink: input, source: output for a given entity).

	/* 9. set format on sink-pad of ccdc */
	ccdc_fd = open("/dev/v4l-subdev1", O_RDWR);
	if(ccdc_fd == -1) {
		printf("failed to open %s\n", "/dev/v4l-subdev1");
		goto cleanup;
	}
	/* set format on sink pad of ccdc */
	printf("9. setting format on sink-pad of ccdc entity. . .\n");
	memset(&fmt, 0, sizeof(fmt));

	fmt.pad = P_CCDC_SINK;
	fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
	fmt.format.code = CODE;
	fmt.format.width = width;
	fmt.format.height = height;
	fmt.format.field = V4L2_FIELD_NONE;

	ret = ioctl(ccdc_fd, VIDIOC_SUBDEV_S_FMT, &fmt);
	if(ret) {
		printf("failed to set format on pad %x\n", fmt.pad);
		goto cleanup;
	}
	else
		printf("successfully format is set on pad %x\n", fmt.pad);

Now set the format on the source pad of the CCDC, which is connected to the video node.

	/* 13. set format on source-pad of ccdc */
	printf("13. setting format on source-pad of ccdc entity. . . \n");
	memset(&fmt, 0, sizeof(fmt));

	fmt.pad = P_CCDC_SOURCE;
	fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE;
	fmt.format.code = CODE;
	fmt.format.width = width;
	fmt.format.height = height;
	fmt.format.colorspace = V4L2_COLORSPACE_SMPTE170M;
	fmt.format.field = V4L2_FIELD_NONE;

	ret = ioctl(ccdc_fd, VIDIOC_SUBDEV_S_FMT, &fmt);
	if(ret) {
		printf("failed to set format on pad %x\n", fmt.pad);
		goto cleanup;
	}
	else
		printf("successfully format is set on pad %x\n", fmt.pad);

Set up the CCDC configuration, which may include a host of internal parameters. This is done through a private sub-device IOCTL for the CCDC: VPFE_CMD_S_CCDC_RAW_PARAMS.

 
	/* 10. get ccdc raw params from ccdc*/
	printf("10. getting RAW params from ccdc\n");
	
	bzero(&raw_params, sizeof(raw_params));

	if (-1 == ioctl(ccdc_fd, VPFE_CMD_G_CCDC_RAW_PARAMS, &raw_params)) {
		printf("failed to get raw params, %p", &raw_params);
		goto cleanup;
	}

	/* 11. set raw params in ccdc */
	printf("11. setting raw params in ccdc\n");
	raw_params.compress.alg = CCDC_NO_COMPRESSION;
	raw_params.gain_offset.gain.r_ye = r;
	raw_params.gain_offset.gain.gr_cy = gr;
	raw_params.gain_offset.gain.gb_g = gb;
	raw_params.gain_offset.gain.b_mg = b;
	raw_params.gain_offset.gain_sdram_en = 1;
	raw_params.gain_offset.gain_ipipe_en = 1;
	raw_params.gain_offset.offset = 0;
	raw_params.gain_offset.offset_sdram_en = 1;

	/* To test linearization, set this to 1, and update the
	 * linearization table with correct data
	 */
	if (linearization_en) {
		raw_params.linearize.en = 1;
		raw_params.linearize.corr_shft = CCDC_1BIT_SHIFT;
		raw_params.linearize.scale_fact.integer = 0;
		raw_params.linearize.scale_fact.decimal = 10;

		for (i = 0; i < CCDC_LINEAR_TAB_SIZE; i++)
			raw_params.linearize.table[i] = i;
	} else {
		raw_params.linearize.en = 0;
	}

	/* csc */
	if (csc_en) {
		raw_params.df_csc.df_or_csc = 0;
		raw_params.df_csc.csc.en = 1;
		/* I am hardcoding this here. But this should
		 * really match with that of the capture standard
		 */
		raw_params.df_csc.start_pix = 1;
		raw_params.df_csc.num_pixels = 720;
		raw_params.df_csc.start_line = 1;
		raw_params.df_csc.num_lines = 480;
		/* These are unit test values. For real case, use
		 * correct values in this table
		 */
		raw_params.df_csc.csc.coeff[0] = csc_coef_val;
		raw_params.df_csc.csc.coeff[1].decimal = 1;
		raw_params.df_csc.csc.coeff[2].decimal = 2;
		raw_params.df_csc.csc.coeff[3].decimal = 3;
		raw_params.df_csc.csc.coeff[4].decimal = 4;
		raw_params.df_csc.csc.coeff[5].decimal = 5;
		raw_params.df_csc.csc.coeff[6].decimal = 6;
		raw_params.df_csc.csc.coeff[7].decimal = 7;
		raw_params.df_csc.csc.coeff[8].decimal = 8;
		raw_params.df_csc.csc.coeff[9].decimal = 9;
		raw_params.df_csc.csc.coeff[10].decimal = 10;
		raw_params.df_csc.csc.coeff[11].decimal = 11;
		raw_params.df_csc.csc.coeff[12].decimal = 12;
		raw_params.df_csc.csc.coeff[13].decimal = 13;
		raw_params.df_csc.csc.coeff[14].decimal = 14;
		raw_params.df_csc.csc.coeff[15].decimal = 15;

	} else {
		raw_params.df_csc.df_or_csc = 0;
		raw_params.df_csc.csc.en = 0;
	}

	/* vertical line defect correction */
	if (vldfc_en) {
		raw_params.dfc.en = 1;
		/* correction method */
		raw_params.dfc.corr_mode = CCDC_VDFC_HORZ_INTERPOL_IF_SAT;
		/* correct the whole line, not just the pixels above the defect */
		raw_params.dfc.corr_whole_line = 1;
		raw_params.dfc.def_level_shift = CCDC_VDFC_SHIFT_2;
		raw_params.dfc.def_sat_level = 20;
		raw_params.dfc.num_vdefects = 7;
		for (i = 0; i < raw_params.dfc.num_vdefects; i++) {
			raw_params.dfc.table[i].pos_vert = i;
			raw_params.dfc.table[i].pos_horz = i + 1;
			raw_params.dfc.table[i].level_at_pos = i + 5;
			raw_params.dfc.table[i].level_up_pixels = i + 6;
			raw_params.dfc.table[i].level_low_pixels = i + 7;
		}
		printf("DFC enabled\n");
	} else {
		raw_params.dfc.en = 0;
	}

	if (en_culling) {

		printf("Culling enabled\n");
		raw_params.culling.hcpat_odd  = 0xaa;
		raw_params.culling.hcpat_even = 0xaa;
		raw_params.culling.vcpat = 0x55;
		raw_params.culling.en_lpf = 1;
	} else {
		raw_params.culling.hcpat_odd  = 0xFF;
		raw_params.culling.hcpat_even = 0xFF;
		raw_params.culling.vcpat = 0xFF;
	}

	raw_params.col_pat_field0.olop = CCDC_GREEN_BLUE;
	raw_params.col_pat_field0.olep = CCDC_BLUE;
	raw_params.col_pat_field0.elop = CCDC_RED;
	raw_params.col_pat_field0.elep = CCDC_GREEN_RED;
	raw_params.col_pat_field1.olop = CCDC_GREEN_BLUE;
	raw_params.col_pat_field1.olep = CCDC_BLUE;
	raw_params.col_pat_field1.elop = CCDC_RED;
	raw_params.col_pat_field1.elep = CCDC_GREEN_RED;
	raw_params.data_size = CCDC_12_BITS;
	raw_params.data_shift = CCDC_NO_SHIFT;


	if (-1 == ioctl(ccdc_fd, VPFE_CMD_S_CCDC_RAW_PARAMS, &raw_params)) {
		printf("failed to set raw params, %p", &raw_params);
		return -1;
	} else
		printf("successfully set raw params in ccdc\n");


Set the format on the video node for capture. This is the format used to store the data into DDR.

 

	/* 17.setting format */
	printf("17. setting format V4L2_PIX_FMT_SBGGR16\n");
	CLEAR(v4l2_fmt);
	v4l2_fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	v4l2_fmt.fmt.pix.width = width;
	v4l2_fmt.fmt.pix.height = height;
	v4l2_fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_SBGGR16;
	v4l2_fmt.fmt.pix.field = V4L2_FIELD_NONE;

	if (-1 == ioctl(capt_fd, VIDIOC_S_FMT, &v4l2_fmt)) {
		printf("failed to set format on capture device \n");
		goto cleanup;
	} else
		printf("successfully set the format\n");

	/* 15.call G_FMT for knowing the pitch */
	if (-1 == ioctl(capt_fd, VIDIOC_G_FMT, &v4l2_fmt)) {
		printf("failed to get format from capture device \n");
		goto cleanup;
	} else {
		printf("capture_pitch: %x\n", v4l2_fmt.fmt.pix.bytesperline);
		capture_pitch = v4l2_fmt.fmt.pix.bytesperline;
	}


Request buffers. This is the standard V4L2 REQBUFS procedure.

 
	/* 18.make sure 3 buffers are supported for streaming */
	printf("18. Requesting for 3 buffers\n");
	CLEAR(req);
	req.count = 3;
	req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	req.memory = V4L2_MEMORY_USERPTR;

	if (-1 == ioctl(capt_fd, VIDIOC_REQBUFS, &req)) {
		printf("call to VIDIOC_REQBUFS failed\n");
		goto cleanup;
	}

	if (req.count != 3) {
		printf("3 buffers not supported by capture device");
		goto cleanup;
	} else
		printf("3 buffers are supported for streaming\n");

	

Pre-queue an initial set of buffers so that as soon as streaming starts we can run the DQ-Q cycle. A minimum of 3 buffers is needed for V4L2 capture to run effectively: one buffer remains with the driver for the current capture, one can be held by the application, and the other stays queued so a buffer is ready for the next frame capture. This has to do with the synchronous nature of the hardware operation, where the registers are shadowed and take effect at every VSYNC.

 

	/* 19.queue the buffers */
	printf("19. queing buffers\n");
	for (i = 0; i < 3; i++) {
		struct v4l2_buffer buf;
		CLEAR(buf);
		buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
		buf.memory = V4L2_MEMORY_USERPTR;
		buf.index = i;
		buf.length = buf_size;
		buf.m.userptr = (unsigned long)capture_buffers[i].user_addr;

		if (-1 == ioctl(capt_fd, VIDIOC_QBUF, &buf)) {
			printf("call to VIDIOC_QBUF failed\n");
			goto cleanup;
		}
	}

Start Streaming!!

 
	/* 20.start streaming */
	CLEAR(type);
	type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	if (-1 == ioctl(capt_fd, VIDIOC_STREAMON, &type)) {
		printf("failed to start streaming on capture device");
		goto cleanup;
	} else
		printf("streaming started successfully\n");

The DQ-Q cycle.

 
	while(frame_count != 5) {

		CLEAR(cap_buf);

		cap_buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
		cap_buf.memory = V4L2_MEMORY_USERPTR;
try_again:
		ret = ioctl(capt_fd, VIDIOC_DQBUF, &cap_buf);
		if (ret < 0) {
			if (errno == EAGAIN) {
				goto try_again;
			}
			printf("failed to DQ buffer from capture device\n");
			goto cleanup;
		}

		temp = cap_buf.m.userptr;
		source = (char *)temp;

		/* copy frame to a file */
		for(i=0 ; i < height; i++) {
			fwrite(source, 1 , width*2, file);
			source += capture_pitch;
		}

		/* Q the buffer for capture, again */
		ret = ioctl(capt_fd, VIDIOC_QBUF, &cap_buf);
		if (ret < 0) {
			printf("failed to Q buffer onto capture device\n");
			goto cleanup;
		}

		frame_count++;

	}

Once done, make sure to do a stream-off to stop the capture.

 

	/* 21. do stream off */
	CLEAR(type);
	type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	if (-1 == ioctl(capt_fd, VIDIOC_STREAMOFF, &type)) {
		printf("failed to stop streaming on capture device");
		goto cleanup;
	} else
		printf("streaming stopped successfully\n");

	

Start cleanup by relinquishing the links, so that a new or different set of links can be established by the next application.

 	
cleanup:
	/* 24. disable all the links which are active right now */
	for(index = 0; index < entities_count; index++) {
		struct media_link_desc *lnk;

		links.entity = entity[index].id;

		links.pads = malloc(sizeof(struct media_pad_desc) * entity[index].pads);
		links.links = malloc(sizeof(struct media_link_desc) * entity[index].links);

		ret = ioctl(media_fd, MEDIA_IOC_ENUM_LINKS, &links);
		if (ret < 0) {
			if (errno == EINVAL)
				break;
		} else {
			for (i = 0, lnk = links.links; i < entity[index].links; i++, lnk++) {
				if (lnk->flags & MEDIA_LNK_FL_ENABLED) {
					/* disable the link: memset leaves
					 * MEDIA_LNK_FL_ENABLED cleared in flags */
					memset(&link, 0, sizeof(link));

					link.source.entity = lnk->source.entity;
					link.source.index = lnk->source.index;
					link.source.flags = MEDIA_PAD_FL_OUTPUT;

					link.sink.entity = lnk->sink.entity;
					link.sink.index = lnk->sink.index;
					link.sink.flags = MEDIA_PAD_FL_INPUT;

					ret = ioctl(media_fd, MEDIA_IOC_SETUP_LINK, &link);
					if(ret)
						printf("failed to disable link\n");
				}
			}
		}
		free(links.pads);
		free(links.links);
	}

Close the file descriptors.

 
	/* 25.close all the file descriptors */
	printf("closing all the file descriptors. . .\n");
	if(capt_fd) {
		close(capt_fd);
		printf("closed capture device\n");
	}
	if(ccdc_fd) {
		close(ccdc_fd);
		printf("closed ccdc sub-device\n");
	}
	if(mt9p_fd) {
		close(mt9p_fd);
		printf("closed mt9p031 sub-device\n");
	}
	if(media_fd) {
		close(media_fd);
		printf("closed  media device\n");
	}
	if(file) {
		fclose(file);
		printf("closed the file \n");
	}
	return ret;
}