IPU3 IPA Architecture Design and Overview
=========================================

The IPU3 IPA is built as a modular and extensible framework. An upper
layer manages the interactions with the pipeline handler, while the
image processing algorithms are split into components that
compartmentalise the processing required by each processing block,
making use of the fixed-function accelerators provided by the ImgU ISP.

The core IPU3 class is responsible for initialising and constructing
the algorithm components, processing the controls set by application
requests, and managing events from the pipeline handler.

::

      ┌───────────────────────────────────────────┐
      │      IPU3 Pipeline Handler                │
      │   ┌────────┐    ┌────────┐    ┌────────┐  │
      │   │        │    │        │    │        │  │
      │   │ Sensor ├───►│  CIO2  ├───►│  ImgU  ├──►
      │   │        │    │        │    │        │  │
      │   └────────┘    └────────┘    └─▲────┬─┘  │    P: Parameter Buffer
      │                                 │P   │    │    S: Statistics Buffer
      │                                 │    │S   │
      └─┬───┬───┬──────┬────┬────┬────┬─┴────▼─┬──┘    1: init()
        │   │   │      │ ▲  │ ▲  │ ▲  │ ▲      │       2: configure()
        │1  │2  │3     │4│  │4│  │4│  │4│      │5      3: mapBuffers(), start()
        ▼   ▼   ▼      ▼ │  ▼ │  ▼ │  ▼ │      ▼       4: processEvent()
      ┌──────────────────┴────┴────┴────┴─────────┐    5: stop(), unmapBuffers()
      │ IPU3 IPA                                  │
      │                 ┌───────────────────────┐ │
      │ ┌───────────┐   │ Algorithms            │ │
      │ │IPAContext │   │          ┌─────────┐  │ │
      │ │ ┌───────┐ │   │          │ ...     │  │ │
      │ │ │       │ │   │        ┌─┴───────┐ │  │ │
      │ │ │  SC   │ │   │        │ Tonemap ├─┘  │ │
      │ │ │       │ ◄───►      ┌─┴───────┐ │    │ │
      │ │ ├───────┤ │   │      │ AWB     ├─┘    │ │
      │ │ │       │ │   │    ┌─┴───────┐ │      │ │
      │ │ │  FC   │ │   │    │ AGC     ├─┘      │ │
      │ │ │       │ │   │    │         │        │ │
      │ │ └───────┘ │   │    └─────────┘        │ │
      │ └───────────┘   └───────────────────────┘ │
      └───────────────────────────────────────────┘
        SC: IPASessionConfiguration
        FC: IPAFrameContext(s)

The IPA instance is constructed and initialised at the point a Camera is
created by the IPU3 pipeline handler. The initialisation call provides
details about which camera sensor is being used, and the controls that
it has available, along with their default values and ranges.
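
A minimal sketch of the initialisation entry point follows. The exact
signature is generated from the IPU3 IPA interface definition, and the
member names used here are illustrative only::

    int IPAIPU3::init(const IPASettings &settings,
                      const IPACameraSensorInfo &sensorInfo,
                      const ControlInfoMap &sensorControls,
                      ControlInfoMap *ipaControls)
    {
            /* settings, sensorInfo and ipaControls handling elided. */

            /* Record the sensor's exposure limits for the AGC algorithm. */
            const ControlInfo &exposure = sensorControls.at(V4L2_CID_EXPOSURE);
            minExposure_ = exposure.min().get<int32_t>();
            maxExposure_ = exposure.max().get<int32_t>();

            return 0;
    }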

Buffers
~~~~~~~

The IPA will have Parameter and Statistics buffers shared with it by
the IPU3 pipeline handler. These buffers will be passed to the IPA using
the ``mapBuffers()`` call before the ``start()`` operation occurs.

The IPA will map the buffers into CPU-accessible memory, associated with
a buffer ID, and further events for sending or receiving parameter and
statistics buffers will reference the ID to avoid expensive memory
mapping operations and the passing of file handles during streaming.

After the ``stop()`` operation occurs, these buffers will be unmapped
when requested by the pipeline handler using the ``unmapBuffers()`` call
and no further access to the buffers is permitted.
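
A minimal sketch of the mapping step, assuming libcamera's internal
``MappedFrameBuffer`` helper and a ``buffers_`` map keyed by the buffer
ID::

    void IPAIPU3::mapBuffers(const std::vector<IPABuffer> &buffers)
    {
            for (const IPABuffer &buffer : buffers) {
                    /*
                     * Map the planes into CPU-accessible memory and index
                     * the mapping by the IPA buffer ID.
                     */
                    const FrameBuffer fb(buffer.planes);
                    buffers_.emplace(buffer.id,
                                     MappedFrameBuffer(&fb,
                                                       MappedFrameBuffer::MapFlag::ReadWrite));
            }
    }

    void IPAIPU3::unmapBuffers(const std::vector<unsigned int> &ids)
    {
            for (unsigned int id : ids)
                    buffers_.erase(id);
    }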

Context
~~~~~~~

Algorithm calls will always have the ``IPAContext`` available to them.
This context comprises two parts:

-  IPA Session Configuration
-  IPA Frame Context

The session configuration structure ``IPASessionConfiguration``
represents the constant parameters determined during ``configure()``,
before streaming commences.

The IPA Frame Context provides the storage used by the algorithms for a
single frame operation.

The ``IPAFrameContext`` structure may be extended to an array, list, or
queue to store historical state for each frame, allowing algorithms to
obtain and reference results of calculations which are deeply pipelined.
This extension is only required when an algorithm needs to know the
context that was applied at the frame for which the statistics were
produced, rather than the previous or current frame.

Presently there is a single ``IPAFrameContext`` without historical data,
and the context is maintained and updated through successive processing
operations.
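
Expressed as data structures, the context can be pictured as the
aggregate below. This is a sketch of the layout rather than the exact
upstream definitions, and the fields shown are illustrative::

    struct IPASessionConfiguration {
            struct {
                    ipu3_uapi_grid_config bdsGrid;  /* Statistics grid */
            } grid;
            /* Constant after configure(). */
    };

    struct IPAFrameContext {
            struct {
                    uint32_t exposure;              /* Sensor exposure, in lines */
                    double gain;                    /* Sensor analogue gain */
            } agc;
            /* Updated as each frame is processed. */
    };

    struct IPAContext {
            IPASessionConfiguration configuration;
            IPAFrameContext frameContext;
    };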

Operating
~~~~~~~~~

The IPU3 IPA operates through three main interactions with the
algorithms when running:

-  configure()
-  processEvent(``EventFillParams``)
-  processEvent(``EventStatReady``)

The configuration phase allows the pipeline handler to inform the IPA of
the current stream configurations, which are then passed into each
algorithm to provide an opportunity to identify and track the state of
the hardware, such as the image size or ImgU pipeline configurations.
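
In practice the top-level handler iterates over the algorithms. A
minimal sketch, assuming an ``Algorithm`` base class exposing a
``configure()`` virtual::

    int IPAIPU3::configure(const IPAConfigInfo &configInfo)
    {
            /* Reset the session configuration for the new streams. */
            context_.configuration = {};

            for (auto const &algo : algorithms_) {
                    int ret = algo->configure(context_, configInfo);
                    if (ret)
                            return ret;
            }

            return 0;
    }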

Pre-frame preparation
~~~~~~~~~~~~~~~~~~~~~

When configured, the IPA is notified by the pipeline handler of the
Camera ``start()`` event, after which incoming requests will be queued
for processing, each requiring a parameter buffer (``ipu3_uapi_params``)
to be populated for the ImgU. This buffer is given to the IPA through
the ``EventFillParams`` event, and then passed directly to each
algorithm through the ``prepare()`` call, allowing the ISP configuration
to be updated for the needs of each component that the algorithm is
responsible for.
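
A minimal sketch of the event handling, with the plumbing simplified;
``params`` points into the mapped parameter buffer for the frame, and
the handler name is illustrative::

    void IPAIPU3::fillParams(unsigned int frame, uint32_t bufferId)
    {
            auto *params = reinterpret_cast<ipu3_uapi_params *>(
                    buffers_.at(bufferId).planes()[0].data());

            for (auto const &algo : algorithms_)
                    algo->prepare(context_, params);

            /* Hand the populated buffer back to the pipeline handler. */
            IPU3Action op;
            op.op = ActionParamFilled;
            queueFrameAction.emit(frame, op);
    }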

The algorithm should set the use flag (``ipu3_uapi_flags``) for any
structure that it modifies, and it should take care to ensure that any
structure set by a use flag is fully initialised to suitable values.
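
For example, an algorithm programming the white balance accelerator
might update the parameter buffer as follows. This is a hedged sketch
against the ``ipu3_uapi_params`` layout from ``linux/intel-ipu3.h``,
with illustrative threshold values::

    void Awb::prepare(IPAContext &context, ipu3_uapi_params *params)
    {
            /* Fully initialise the structure this algorithm owns... */
            params->acc_param.awb.config.rgbs_thr_gr = 8191;
            params->acc_param.awb.config.rgbs_thr_r = 8191;
            params->acc_param.awb.config.rgbs_thr_b = 8191;
            params->acc_param.awb.config.grid = context.configuration.grid.bdsGrid;

            /* ...and set the use flag so the ImgU applies it. */
            params->use.acc_awb = 1;
    }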

The parameter buffer is returned to the pipeline handler through the
``ActionParamFilled`` event, and from there queued to the ImgU along
with a raw frame captured with the CIO2.

Post-frame completion
~~~~~~~~~~~~~~~~~~~~~

When the capture of an image is completed, and successfully processed
through the ImgU, the generated statistics buffer
(``ipu3_uapi_stats_3a``) is given to the IPA through the
``EventStatReady`` event. This provides the IPA with an opportunity to
examine the results of the ISP and run the calculations required by each
algorithm on the new data. The algorithms may require context from the
operations of other algorithms, for example, the AWB might choose to use
a scene brightness determined by the AGC. It is important that the
algorithms are ordered to ensure that required results are determined
before they are needed.
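
A minimal sketch of the statistics handling, again with the plumbing
simplified and the handler name illustrative::

    void IPAIPU3::parseStatistics(unsigned int frame, uint32_t bufferId)
    {
            auto *stats = reinterpret_cast<const ipu3_uapi_stats_3a *>(
                    buffers_.at(bufferId).planes()[0].data());

            /* Algorithms run in list order, so results flow downstream. */
            for (auto const &algo : algorithms_)
                    algo->process(context_, stats);
    }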

The ordering of the algorithm processing is determined by their
placement in the ``IPU3::algorithms_`` ordered list.
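
The ordering is established once at construction time. A sketch,
assuming ``algorithms_`` is a vector of owned algorithm instances::

    IPAIPU3::IPAIPU3()
    {
            /* AGC runs first so AWB and tone mapping can use its results. */
            algorithms_.push_back(std::make_unique<algorithms::Agc>());
            algorithms_.push_back(std::make_unique<algorithms::Awb>());
            algorithms_.push_back(std::make_unique<algorithms::ToneMapping>());
    }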

Sensor Controls
~~~~~~~~~~~~~~~

The AutoExposure and AutoGain (AGC) algorithm differs slightly from the
others, as it operates directly on the sensor rather than through the
ImgU ISP. To support this, a dedicated ``ActionSetSensorControls``
action allows the IPA to request controls to be set on the camera
sensor through the pipeline handler.
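
A hedged sketch of how the AGC results might be sent back through this
action; the control list handling and signal emission shown here are
simplified::

    /* Illustrative only: request new exposure and gain on the sensor. */
    ControlList ctrls(sensorCtrls_);
    ctrls.set(V4L2_CID_EXPOSURE, static_cast<int32_t>(exposure_));
    ctrls.set(V4L2_CID_ANALOGUE_GAIN, static_cast<int32_t>(gain_));

    IPU3Action op;
    op.op = ActionSetSensorControls;
    op.controls = ctrls;
    queueFrameAction.emit(frame, op);
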
/* SPDX-License-Identifier: LGPL-2.1-or-later */
/*
 * Copyright (C) 2018, Google Inc.
 *
 * vimc.cpp - Pipeline handler for the vimc device
 */

#include <algorithm>
#include <iomanip>
#include <map>
#include <math.h>
#include <tuple>

#include <linux/media-bus-format.h>
#include <linux/version.h>

#include <libcamera/base/log.h>
#include <libcamera/base/utils.h>

#include <libcamera/camera.h>
#include <libcamera/control_ids.h>
#include <libcamera/controls.h>
#include <libcamera/formats.h>
#include <libcamera/request.h>
#include <libcamera/stream.h>

#include <libcamera/ipa/ipa_interface.h>
#include <libcamera/ipa/ipa_module_info.h>
#include <libcamera/ipa/vimc_ipa_interface.h>
#include <libcamera/ipa/vimc_ipa_proxy.h>

#include "libcamera/internal/camera_sensor.h"
#include "libcamera/internal/device_enumerator.h"
#include "libcamera/internal/ipa_manager.h"
#include "libcamera/internal/media_device.h"
#include "libcamera/internal/pipeline_handler.h"
#include "libcamera/internal/v4l2_subdevice.h"
#include "libcamera/internal/v4l2_videodevice.h"

namespace libcamera {

LOG_DEFINE_CATEGORY(VIMC)

class VimcCameraData : public CameraData
{
public:
	VimcCameraData(PipelineHandler *pipe, MediaDevice *media)
		: CameraData(pipe), media_(media)
	{
	}

	int init();
	int allocateMockIPABuffers();
	void bufferReady(FrameBuffer *buffer);
	void paramsFilled(unsigned int id);

	MediaDevice *media_;
	std::unique_ptr<CameraSensor> sensor_;
	std::unique_ptr<V4L2Subdevice> debayer_;
	std::unique_ptr<V4L2Subdevice> scaler_;
	std::unique_ptr<V4L2VideoDevice> video_;
	std::unique_ptr<V4L2VideoDevice> raw_;
	Stream stream_;

	std::unique_ptr<ipa::vimc::IPAProxyVimc> ipa_;
	std::vector<std::unique_ptr<FrameBuffer>> mockIPABufs_;
};

class VimcCameraConfiguration : public CameraConfiguration
{
public:
	VimcCameraConfiguration(VimcCameraData *data);

	Status validate() override;

private:
	VimcCameraData *data_;
};

class PipelineHandlerVimc : public PipelineHandler
{
public:
	PipelineHandlerVimc(CameraManager *manager);

	CameraConfiguration *generateConfiguration(Camera *camera,
		const StreamRoles &roles) override;
	int configure(Camera *camera, CameraConfiguration *config) override;

	int exportFrameBuffers(Camera *camera, Stream *stream,
			       std::vector<std::unique_ptr<FrameBuffer>> *buffers) override;

	int start(Camera *camera, const ControlList *controls) override;
	void stop(Camera *camera) override;

	int queueRequestDevice(Camera *camera, Request *request) override;

	bool match(DeviceEnumerator *enumerator) override;

private:
	int processControls(VimcCameraData *data, Request *request);

	VimcCameraData *cameraData(const Camera *camera)
	{
		return static_cast<VimcCameraData *>(
			PipelineHandler::cameraData(camera));
	}
};

namespace {

static const std::map<PixelFormat, uint32_t> pixelformats{
	{ formats::RGB888, MEDIA_BUS_FMT_BGR888_1X24 },
	{ formats::BGR888, MEDIA_BUS_FMT_RGB888_1X24 },
};

} /* namespace */

VimcCameraConfiguration::VimcCameraConfiguration(VimcCameraData *data)
	: CameraConfiguration(), data_(data)
{
}

CameraConfiguration::Status VimcCameraConfiguration::validate()
{
	Status status = Valid;

	if (config_.empty())
		return Invalid;

	if (transform != Transform::Identity) {
		transform = Transform::Identity;
		status = Adjusted;
	}

	/* Cap the number of entries to the available streams. */
	if (config_.size() > 1) {
		config_.resize(1);
		status = Adjusted;
	}

	StreamConfiguration &cfg = config_[0];

	/* Adjust the pixel format. */
	const std::vector<libcamera::PixelFormat> formats = cfg.formats().pixelformats();
	if (std::find(formats.begin(), formats.end(), cfg.pixelFormat) == formats.end()) {
		LOG(VIMC, Debug) << "Adjusting format to BGR888";
		cfg.pixelFormat = formats::BGR888;
		status = Adjusted;
	}

	/* Clamp the size based on the device limits. */
	const Size size = cfg.size;

	/*
	 * The scaler hardcodes a x3 scale-up ratio, and the sensor output size
	 * is aligned to two pixels in both directions. The output width and
	 * height thus have to be multiples of 6.
	 */
	cfg.size.width = std::max(48U, std::min(4096U, cfg.size.width));
	cfg.size.height = std::max(48U, std::min(2160U, cfg.size.height));
	cfg.size.width -= cfg.size.width % 6;
	cfg.size.height -= cfg.size.height % 6;

	if (cfg.size != size) {
		LOG(VIMC, Debug)
			<< "Adjusting size to " << cfg.size.toString();
		status = Adjusted;
	}

	cfg.bufferCount = 4;

	V4L2DeviceFormat format;
	format.fourcc = data_->video_->toV4L2PixelFormat(cfg.pixelFormat);
	format.size = cfg.size;

	int ret = data_->video_->tryFormat(&format);
	if (ret)
		return Invalid;

	cfg.stride = format.planes[0].bpl;
	cfg.frameSize = format.planes[0].size;

	return status;
}

PipelineHandlerVimc::PipelineHandlerVimc(CameraManager *manager)
	: PipelineHandler(manager)
{
}

CameraConfiguration *PipelineHandlerVimc::generateConfiguration(Camera *camera,
	const StreamRoles &roles)
{
	VimcCameraData *data = cameraData(camera);
	CameraConfiguration *config = new VimcCameraConfiguration(data);

	if (roles.empty())
		return config;

	std::map<PixelFormat, std::vector<SizeRange>> formats;

	for (const auto &pixelformat : pixelformats) {
		/*
		 * Kernels prior to v5.7 incorrectly report support for RGB888,
		 * but it isn't functional within the pipeline.
		 */
		if (data->media_->version() < KERNEL_VERSION(5, 7, 0)) {
			if (pixelformat.first != formats::BGR888) {
				LOG(VIMC, Info)
					<< "Skipping unsupported pixel format "
					<< pixelformat.first.toString();
				continue;
			}
		}

		/* The scaler hardcodes a x3 scale-up ratio. */
		std::vector<SizeRange> sizes{
			SizeRange{ { 48, 48 }, { 4096, 2160 } }
		};
		formats[pixelformat.first] = sizes;
	}

	StreamConfiguration cfg(formats);

	cfg.pixelFormat = formats::BGR888;
	cfg.size = { 1920, 1080 };
	cfg.bufferCount = 4;

	config->addConfiguration(cfg);

	config->validate();

	return config;
}

int PipelineHandlerVimc::configure(Camera *camera, CameraConfiguration *config)
{
	VimcCameraData *data = cameraData(camera);
	StreamConfiguration &cfg = config->at(0);
	int ret;

	/* The scaler hardcodes a x3 scale-up ratio. */
	V4L2SubdeviceFormat subformat = {};
	subformat.mbus_code = MEDIA_BUS_FMT_SGRBG8_1X8;
	subformat.size = { cfg.size.width / 3, cfg.size.height / 3 };

	ret = data->sensor_->setFormat(&subformat);
	if (ret)
		return ret;

	ret = data->debayer_->setFormat(0, &subformat);
	if (ret)
		return ret;

	subformat.mbus_code = pixelformats.find(cfg.pixelFormat)->second;
	ret = data->debayer_->setFormat(1, &subformat);
	if (ret)
		return ret;

	ret = data->scaler_->setFormat(0, &subformat);
	if (ret)
		return ret;

	if (data->media_->version() >= KERNEL_VERSION(5, 6, 0)) {
		Rectangle crop{ 0, 0, subformat.size };
		ret = data->scaler_->setSelection(0, V4L2_SEL_TGT_CROP, &crop);
		if (ret)
			return ret;
	}

	subformat.size = cfg.size;
	ret = data->scaler_->setFormat(1, &subformat);
	if (ret)
		return ret;

	V4L2DeviceFormat format;
	format.fourcc = data->video_->toV4L2PixelFormat(cfg.pixelFormat);
	format.size = cfg.size;

	ret = data->video_->setFormat(&format);
	if (ret)
		return ret;

	if (format.size != cfg.size ||
	    format.fourcc != data->video_->toV4L2PixelFormat(cfg.pixelFormat))
		return -EINVAL;

	/*
	 * Format has to be set on the raw capture video node, otherwise the
	 * vimc driver will fail pipeline validation.
	 */
	format.fourcc = V4L2PixelFormat(V4L2_PIX_FMT_SGRBG8);
	format.size = { cfg.size.width / 3, cfg.size.height / 3 };

	ret = data->raw_->setFormat(&format);
	if (ret)
		return ret;

	cfg.setStream(&data->stream_);

	if (data->ipa_) {
		/* Inform IPA of stream configuration and sensor controls. */
		std::map<unsigned int, IPAStream> streamConfig;
		streamConfig.emplace(std::piecewise_construct,
				     std::forward_as_tuple(0),
				     std::forward_as_tuple(cfg.pixelFormat, cfg.size));

		std::map<unsigned int, ControlInfoMap> entityControls;
		entityControls.emplace(0, data->sensor_->controls());

		IPACameraSensorInfo sensorInfo;
		data->sensor_->sensorInfo(&sensorInfo);

		data->ipa_->configure(sensorInfo, streamConfig, entityControls);
	}

	return 0;
}

int PipelineHandlerVimc::exportFrameBuffers(Camera *camera, Stream *stream,
					    std::vector<std::unique_ptr<FrameBuffer>> *buffers)
{
	VimcCameraData *data = cameraData(camera);
	unsigned int count = stream->configuration().bufferCount;

	return data->video_->exportBuffers(count, buffers);
}

int PipelineHandlerVimc::start(Camera *camera, [[maybe_unused]] const ControlList *controls)
{
	VimcCameraData *data = cameraData(camera);
	unsigned int count = data->stream_.configuration().bufferCount;

	int ret = data->video_->importBuffers(count);
	if (ret < 0)
		return ret;

	/* Map the mock IPA buffers to VIMC IPA to exercise IPC code paths. */
	std::vector<IPABuffer> ipaBuffers;
	for (auto [i, buffer] : utils::enumerate(data->mockIPABufs_)) {
		buffer->setCookie(i + 1);
		ipaBuffers.emplace_back(buffer->cookie(), buffer->planes());
	}
	data->ipa_->mapBuffers(ipaBuffers);

	ret = data->ipa_->start();
	if (ret) {
		data->video_->releaseBuffers();
		return ret;
	}

	ret = data->video_->streamOn();
	if (ret < 0) {
		data->ipa_->stop();
		data->video_->releaseBuffers();
		return ret;
	}

	return 0;
}

void PipelineHandlerVimc::stop(Camera *camera)
{
	VimcCameraData *data = cameraData(camera);
	data->video_->streamOff();

	std::vector<unsigned int> ids;
	for (const std::unique_ptr<FrameBuffer> &buffer : data->mockIPABufs_)
		ids.push_back(buffer->cookie());
	data->ipa_->unmapBuffers(ids);
	data->ipa_->stop();

	data->video_->releaseBuffers();
}

int PipelineHandlerVimc::processControls(VimcCameraData *data, Request *request)
{
	ControlList controls(data->sensor_->controls());

	for (auto it : request->controls()) {
		unsigned int id = it.first;
		unsigned int offset;
		uint32_t cid;

		if (id == controls::Brightness) {
			cid = V4L2_CID_BRIGHTNESS;
			offset = 128;
		} else if (id == controls::Contrast) {
			cid = V4L2_CID_CONTRAST;
			offset = 0;
		} else if (id == controls::Saturation) {
			cid = V4L2_CID_SATURATION;
			offset = 0;
		} else {
			continue;
		}

		int32_t value = lroundf(it.second.get<float>() * 128 + offset);
		controls.set(cid, std::clamp(value, 0, 255));
	}

	for (const auto &ctrl : controls)
		LOG(VIMC, Debug)
			<< "Setting control " << utils::hex(ctrl.first)
			<< " to " << ctrl.second.toString();

	int ret = data->sensor_->setControls(&controls);
	if (ret) {
		LOG(VIMC, Error) << "Failed to set controls: " << ret;
		return ret < 0 ? ret : -EINVAL;
	}

	return ret;
}

int PipelineHandlerVimc::queueRequestDevice(Camera *camera, Request *request)
{
	VimcCameraData *data = cameraData(camera);
	FrameBuffer *buffer = request->findBuffer(&data->stream_);
	if (!buffer) {
		LOG(VIMC, Error)
			<< "Attempt to queue request with invalid stream";

		return -ENOENT;
	}

	int ret = processControls(data, request);
	if (ret < 0)
		return ret;

	ret = data->video_->queueBuffer(buffer);
	if (ret < 0)
		return ret;

	data->ipa_->processControls(request->sequence(), request->controls());

	return 0;
}

bool PipelineHandlerVimc::match(DeviceEnumerator *enumerator)
{
	DeviceMatch dm("vimc");

	dm.add("Raw Capture 0");
	dm.add("Raw Capture 1");
	dm.add("RGB/YUV Capture");
	dm.add("Sensor A");
	dm.add("Sensor B");
	dm.add("Debayer A");
	dm.add("Debayer B");
	dm.add("RGB/YUV Input");
	dm.add("Scaler");

	MediaDevice *media = acquireMediaDevice(enumerator, dm);
	if (!media)
		return false;

	std::unique_ptr<VimcCameraData> data = std::make_unique<VimcCameraData>(this, media);

	/* Locate and open the capture video node. */
	if (data->init())
		return false;

	data->ipa_ = IPAManager::createIPA<ipa::vimc::IPAProxyVimc>(this, 0, 0);
	if (!data->ipa_) {
		LOG(VIMC, Error) << "no matching IPA found";
		return false;
	}

	data->ipa_->paramsFilled.connect(data.get(), &VimcCameraData::paramsFilled);

	std::string conf = data->ipa_->configurationFile("vimc.conf");
	data->ipa_->init(IPASettings{ conf, data->sensor_->model() });

	/* Create and register the camera. */
	std::set<Stream *> streams{ &data->stream_ };
	std::shared_ptr<Camera> camera =
		Camera::create(this, data->sensor_->id(), streams);
	registerCamera(std::move(camera), std::move(data));

	return true;
}

int VimcCameraData::init()
{
	int ret;

	ret = media_->disableLinks();
	if (ret < 0)
		return ret;

	MediaLink *link = media_->link("Debayer B", 1, "Scaler", 0);
	if (!link)
		return -ENODEV;

	ret = link->setEnabled(true);
	if (ret < 0)
		return ret;

	/* Create and open the camera sensor, debayer, scaler and video device. */
	sensor_ = std::make_unique<CameraSensor>(media_->getEntityByName("Sensor B"));
	ret = sensor_->init();
	if (ret)
		return ret;

	debayer_ = V4L2Subdevice::fromEntityName(media_, "Debayer B");
	if (debayer_->open())
		return -ENODEV;

	scaler_ = V4L2Subdevice::fromEntityName(media_, "Scaler");
	if (scaler_->open())
		return -ENODEV;

	video_ = V4L2VideoDevice::fromEntityName(media_, "RGB/YUV Capture");
	if (video_->open())
		return -ENODEV;

	video_->bufferReady.connect(this, &VimcCameraData::bufferReady);

	raw_ = V4L2VideoDevice::fromEntityName(media_, "Raw Capture 1");
	if (raw_->open())
		return -ENODEV;

	ret = allocateMockIPABuffers();
	if (ret < 0) {
		LOG(VIMC, Warning) << "Cannot allocate mock IPA buffers";
		return ret;
	}

	/* Initialise the supported controls. */
	const ControlInfoMap &controls = sensor_->controls();
	ControlInfoMap::Map ctrls;

	for (const auto &ctrl : controls) {
		const ControlId *id;
		ControlInfo info;

		switch (ctrl.first->id()) {
		case V4L2_CID_BRIGHTNESS:
			id = &controls::Brightness;
			info = ControlInfo{ { -1.0f }, { 1.0f }, { 0.0f } };
			break;
		case V4L2_CID_CONTRAST:
			id = &controls::Contrast;
			info = ControlInfo{ { 0.0f }, { 2.0f }, { 1.0f } };
			break;
		case V4L2_CID_SATURATION:
			id = &controls::Saturation;
			info = ControlInfo{ { 0.0f }, { 2.0f }, { 1.0f } };
			break;
		default:
			continue;
		}

		ctrls.emplace(id, info);
	}

	controlInfo_ = ControlInfoMap(std::move(ctrls), controls::controls);

	/* Initialize the camera properties. */
	properties_ = sensor_->properties();

	return 0;
}

void VimcCameraData::bufferReady(FrameBuffer *buffer)
{
	Request *request = buffer->request();

	/* If the buffer is cancelled force a complete of the whole request. */
	if (buffer->metadata().status == FrameMetadata::FrameCancelled) {
		for (auto it : request->buffers()) {
			FrameBuffer *b = it.second;
			b->cancel();
			pipe_->completeBuffer(request, b);
		}

		pipe_->completeRequest(request);
		return;
	}

	/* Record the sensor's timestamp in the request metadata. */
	request->metadata().set(controls::SensorTimestamp,
				buffer->metadata().timestamp);

	pipe_->completeBuffer(request, buffer);
	pipe_->completeRequest(request);

	ipa_->fillParams(request->sequence(), mockIPABufs_[0]->cookie());
}

int VimcCameraData::allocateMockIPABuffers()
{
	constexpr unsigned int kBufCount = 2;

	V4L2DeviceFormat format;
	format.fourcc = video_->toV4L2PixelFormat(formats::BGR888);
	format.size = Size(160, 120);

	int ret = video_->setFormat(&format);
	if (ret < 0)
		return ret;

	return video_->exportBuffers(kBufCount, &mockIPABufs_);
}

void VimcCameraData::paramsFilled([[maybe_unused]] unsigned int id)
{
}

REGISTER_PIPELINE_HANDLER(PipelineHandlerVimc)

} /* namespace libcamera */