path: root/src/v4l2/v4l2_compat_manager.cpp
2021-06-25  libcamera/base: Move extended base functionality  (Kieran Bingham)

Move the functionality for the following components to the new base support library:

- BoundMethod
- EventDispatcher
- EventDispatcherPoll
- Log
- Message
- Object
- Signal
- Semaphore
- Thread
- Timer

While it would be preferable to see these split to move one component per commit, these components are all interdependent upon each other, which leaves us with one big change performing the move for all of them.

Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>

2021-06-25  libcamera/base: Move utils to the base library  (Kieran Bingham)

Move the utils functionality to the libcamera/base library.

Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>

2021-05-18  v4l2: Replace manual loop counters with utils::enumerate()  (Laurent Pinchart)

Use the newly introduced utils::enumerate() to replace manual loop counters. A local variable is needed, as utils::enumerate() requires an lvalue reference.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>

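A minimal sketch of the pattern this entry describes (not the actual v4l2_compat_manager.cpp hunk; the container and its contents are made up for illustration). utils::enumerate() is declared in libcamera's in-tree utils header (under libcamera/base/ after the moves above) and yields index/element pairs:

	#include <iostream>
	#include <string>
	#include <vector>

	#include <libcamera/base/utils.h>

	int main()
	{
		std::vector<std::string> names{ "front camera", "back camera" };

		/* A named variable is required: enumerate() takes an lvalue reference. */
		for (auto [index, name] : libcamera::utils::enumerate(names))
			std::cout << index << ": " << name << std::endl;

		return 0;
	}
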
2020-06-25  v4l2: V4L2CameraProxy: Take V4L2CameraFile as argument for intercepted calls  (Paul Elder)

Prepare for using the V4L2CameraFile as a container for file-specific information in the V4L2 compatibility layer by making it a required argument for all V4L2CameraProxy calls that are directed from V4L2CompatManager, which are intercepted via LD_PRELOAD. Change V4L2CameraFile accordingly.

Also change V4L2CompatManager accordingly. Instead of keeping a map of file descriptors to V4L2CameraProxy instances, we keep a map of V4L2CameraFile instances to V4L2CameraProxy instances. When the proxy methods are called, feed the file as a parameter.

The dup function is also modified, in that it is removed from V4L2CameraProxy, and is handled completely in V4L2CompatManager, as a map from file descriptors to V4L2CameraFile instances.

Signed-off-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>

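A hedged sketch of the resulting ownership model (the class and member names are illustrative, not the actual code): dup()ed descriptors share one V4L2CameraFile, and every intercepted call passes the file to the proxy so per-open state lives in the file object.

	#include <map>
	#include <memory>

	class V4L2CameraFile;   /* per-open, per-fd state */
	class V4L2CameraProxy;  /* V4L2 <-> libcamera adaptation for one camera */

	class CompatManagerSketch
	{
	public:
		int ioctl(int fd, unsigned long request, void *arg);

	private:
		/* dup()ed descriptors map to the same V4L2CameraFile instance */
		std::map<int, std::shared_ptr<V4L2CameraFile>> files_;
		/* each file is served by the proxy of the camera it opened */
		std::map<std::shared_ptr<V4L2CameraFile>, std::shared_ptr<V4L2CameraProxy>> proxies_;
	};
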
2020-06-19  v4l2: v4l2_compat: Intercept open64, openat64, and mmap64  (Paul Elder)

Some applications (eg. Firefox, Google Chrome, Skype) use open64, openat64, and mmap64 instead of their non-64 versions that we currently intercept. Intercept these calls as well.

_LARGEFILE64_SOURCE needs to be set so that the 64-bit symbols are available and not synonymous to the non-64-bit versions on 64-bit systems.

Also, since we set _FILE_OFFSET_BITS to 32 to force the various open and mmap symbols that we export to not be the 64-bit versions, our dlsym to get the original open and mmap calls will not automatically be converted to their 64-bit versions. Since we intercept both 32-bit and 64-bit versions of open and mmap, we should be using the 64-bit version to service both. Fetch the 64-bit versions of openat and mmap directly.

musl defines the 64-bit symbols as macros that are equivalent to the non-64-bit symbols, so we put compile guards that check if the 64-bit symbols are defined.

Signed-off-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
Tested-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> # Compile with musl
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>

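A hedged sketch of the symbol lookup described above (illustrative names, not the literal v4l2_compat_manager.cpp code): fetch the 64-bit C library entry point directly, with a guard for musl where the *64 names are only macros and no separate symbol exists.

	#include <dlfcn.h>	/* dlsym(), RTLD_NEXT (requires _GNU_SOURCE) */
	#include <fcntl.h>	/* on musl, defines openat64 as a macro for openat */

	using openat_func_t = int (*)(int dirfd, const char *path, int oflag, ...);

	static openat_func_t fetchOriginalOpenat()
	{
	#ifdef openat64
		/* musl: openat64 is only a macro aliasing openat */
		return reinterpret_cast<openat_func_t>(dlsym(RTLD_NEXT, "openat"));
	#else
		/* glibc: fetch the 64-bit entry point to service both variants */
		return reinterpret_cast<openat_func_t>(dlsym(RTLD_NEXT, "openat64"));
	#endif
	}
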
2020-06-08  v4l2: v4l2_compat: Add eventfd signaling to support polling  (Paul Elder)

To support polling, we need to be able to signal when data is available to be read (POLLIN), as well as events (POLLPRI). Add the necessary calls to eventfd to allow signaling POLLIN. We signal POLLIN by writing to the eventfd, and clear it by reading from the eventfd, upon VIDIOC_DQBUF.

Note that eventfd does not support signaling POLLPRI, so we don't yet support V4L2 events.

Signed-off-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>

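A minimal, self-contained sketch of the eventfd mechanism (the eventfd flags here are an assumption, not necessarily what the compatibility layer uses): writing a non-zero value makes the descriptor readable (POLLIN), reading it clears the signal.

	#include <cstdint>
	#include <sys/eventfd.h>
	#include <unistd.h>

	int main()
	{
		int efd = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK);
		uint64_t value = 1;

		/* Signal POLLIN: a filled buffer is ready to be dequeued. */
		write(efd, &value, sizeof(value));

		/* Clear POLLIN again when the buffer is dequeued (VIDIOC_DQBUF). */
		read(efd, &value, sizeof(value));

		close(efd);
		return 0;
	}
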
2020-05-28  v4l2: Relicense V4L2 compatibility layer under LGPL  (Laurent Pinchart)

The V4L2 compatibility layer is licensed under the GPL. It is compiled as a binary separate from libcamera.so, and is loaded into the address space of processes through LD_PRELOAD to intercept calls to the C library. It is our understanding and intent that the GPL license doesn't propagate to the binaries whose calls are intercepted, considering those binaries are not derivative work of the V4L2 compatibility layer and are not designed to be linked to the V4L2 compatibility layer.

There is however a possibly grey area if binaries are packaged with a shell script wrapper that loads the V4L2 compatibility layer. This could lead to license-related issues if such packaging is performed by Linux distributions or system integrators.

To clarify the intent and lift the doubts, relicense the V4L2 compatibility layer under the LGPL. The V4L2 compatibility layer code itself still benefits from the license protection, while its usage with third-party binaries is clearly allowed as intended.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Jacopo Mondi <jacopo@jmondi.org>
Acked-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Acked-by: Paul Elder <paul.elder@ideasonboard.com>
Acked-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
Acked-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>

2020-05-16  libcamera: Move internal headers to include/libcamera/internal/  (Laurent Pinchart)

The libcamera internal headers are located in src/libcamera/include/. The directory is added to the compiler headers search path with a meson include_directories() directive, and internal headers are included with (e.g. for the internal semaphore.h header)

	#include "semaphore.h"

All was well, until libcxx decided to implement the C++20 synchronization library. The __threading_support header gained a #include <semaphore.h> to include the pthread's semaphore support. As include_directories() adds src/libcamera/include/ to the compiler search path with -I, the internal semaphore.h is included instead of the pthread version. Needless to say, the compiler isn't happy.

Three options have been considered to fix this issue:

- Use -iquote instead of -I. The -iquote option instructs gcc to only consider the header search path for headers included with the "" version. Meson unfortunately doesn't support this option.
- Rename the internal semaphore.h header. This was deemed to be the beginning of a long whack-a-mole game, where namespace clashes with system libraries would appear over time (possibly dependent on particular system configurations) and would need to be constantly fixed.
- Move the internal headers to another directory to create a unique namespace through path components. This causes lots of churn in all the existing source files throughout the whole project.

The first option would be best, but isn't available to us due to missing support in meson. Even if -iquote support was added, we would need to fix the problem before a new version of meson containing the required support would be released. The third option is thus the only practical solution available. Bite the bullet, and do it, moving headers to include/libcamera/internal/.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Jacopo Mondi <jacopo@jmondi.org>

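In practice the change looks like this for includers, using the semaphore.h example from the message:

	/* Before: found via -I src/libcamera/include/, clashes with pthread's <semaphore.h> */
	#include "semaphore.h"

	/* After: the extra path components make the internal header unambiguous */
	#include "libcamera/internal/semaphore.h"
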
2020-02-13  v4l2: Remove internal thread  (Laurent Pinchart)

Now that libcamera creates threads internally and doesn't rely on an application-provided event loop, remove the thread from the V4L2 compatibility layer. The split between the V4L2CameraProxy and V4L2Camera classes is still kept to separate the V4L2 adaptation from camera operation. This may be further refactored later.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>

2020-01-12  v4l2: camera: Handle memory mapping of buffers directly  (Niklas Söderlund)

In the upcoming FrameBuffer API the memory mapping of buffers will be left to the user of the FrameBuffer objects. Prepare the V4L2 compatibility layer to this upcoming change to ease conversion to the new API.

Signed-off-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>

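What mapping the buffers directly amounts to, as a hedged generic sketch (not the actual V4L2Camera code): the buffer user mmap()s each plane from its dmabuf file descriptor.

	#include <cstddef>
	#include <sys/mman.h>

	/*
	 * Map one buffer plane from its dmabuf file descriptor; the caller
	 * releases the mapping with munmap(mem, length) when done.
	 */
	void *mapPlane(int dmabufFd, size_t length)
	{
		void *mem = mmap(nullptr, length, PROT_READ | PROT_WRITE,
				 MAP_SHARED, dmabufFd, 0);
		return mem != MAP_FAILED ? mem : nullptr;
	}
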
2020-01-07  v4l2: compat_manager: Move file operations to new struct FileOperations  (Laurent Pinchart)

Create a new FileOperations structure to hold all the dynamically-loaded C library file operations. The file operations are now exposed publicly, to prepare for usage of mmap in the V4L2CameraProxy. A new template helper function is added to retrieve a symbol with dlsym() with proper casting to simplify the V4L2CompatManager constructor.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>

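The kind of helper the message describes, as a hedged sketch (the real name and signature may differ): cast the dlsym() result to the target function pointer type.

	#include <dlfcn.h>	/* dlsym(), RTLD_NEXT (requires _GNU_SOURCE) */

	template<typename T>
	static void get_symbol(T &func, const char *name)
	{
		func = reinterpret_cast<T>(dlsym(RTLD_NEXT, name));
	}

	/* illustrative usage: get_symbol(fops_.openat, "openat64"); */
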
2020-01-03  v4l2: v4l2_compat: Add V4L2 compatibility layer  (Paul Elder)

Add libcamera V4L2 compatibility layer. This initial implementation supports the minimal set of V4L2 operations, which allows getting, setting, and enumerating formats, and streaming frames from a video device. Some data about the wrapped V4L2 video device are hardcoded.

Add a build option named 'v4l2' and adjust the build system to selectively compile the V4L2 compatibility layer.

For now we match the V4L2 device node to a libcamera camera based on a devnum that a pipeline handler may optionally map to a libcamera camera.

Signed-off-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>

/* SPDX-License-Identifier: LGPL-2.1-or-later */
/*
 * Copyright (C) 2020, Linaro
 *
 * viewfinder_gl.cpp - OpenGL Viewfinder for rendering by OpenGL shader
 */

#include "viewfinder_gl.h"

#include <QByteArray>
#include <QFile>
#include <QImage>

#include <libcamera/formats.h>

#include "../cam/image.h"

static const QList<libcamera::PixelFormat> supportedFormats{
	/* YUV - packed (single plane) */
	libcamera::formats::UYVY,
	libcamera::formats::VYUY,
	libcamera::formats::YUYV,
	libcamera::formats::YVYU,
	/* YUV - semi planar (two planes) */
	libcamera::formats::NV12,
	libcamera::formats::NV21,
	libcamera::formats::NV16,
	libcamera::formats::NV61,
	libcamera::formats::NV24,
	libcamera::formats::NV42,
	/* YUV - fully planar (three planes) */
	libcamera::formats::YUV420,
	libcamera::formats::YVU420,
	/* RGB */
	libcamera::formats::ABGR8888,
	libcamera::formats::ARGB8888,
	libcamera::formats::BGRA8888,
	libcamera::formats::RGBA8888,
	libcamera::formats::BGR888,
	libcamera::formats::RGB888,
	/* Raw Bayer 8-bit */
	libcamera::formats::SBGGR8,
	libcamera::formats::SGBRG8,
	libcamera::formats::SGRBG8,
	libcamera::formats::SRGGB8,
	/* Raw Bayer 10-bit packed */
	libcamera::formats::SBGGR10_CSI2P,
	libcamera::formats::SGBRG10_CSI2P,
	libcamera::formats::SGRBG10_CSI2P,
	libcamera::formats::SRGGB10_CSI2P,
	/* Raw Bayer 12-bit packed */
	libcamera::formats::SBGGR12_CSI2P,
	libcamera::formats::SGBRG12_CSI2P,
	libcamera::formats::SGRBG12_CSI2P,
	libcamera::formats::SRGGB12_CSI2P,
};

ViewFinderGL::ViewFinderGL(QWidget *parent)
	: QOpenGLWidget(parent), buffer_(nullptr), image_(nullptr),
	  vertexBuffer_(QOpenGLBuffer::VertexBuffer)
{
}

ViewFinderGL::~ViewFinderGL()
{
	removeShader();
}

const QList<libcamera::PixelFormat> &ViewFinderGL::nativeFormats() const
{
	return supportedFormats;
}

int ViewFinderGL::setFormat(const libcamera::PixelFormat &format,
			    const QSize &size)
{
	if (format != format_) {
		/*
		 * If the fragment already exists, remove it and create a new
		 * one for the new format.
		 */
		if (shaderProgram_.isLinked()) {
			shaderProgram_.release();
			shaderProgram_.removeShader(fragmentShader_.get());
			fragmentShader_.reset();
		}

		if (!selectFormat(format))
			return -1;

		format_ = format;
	}

	size_ = size;

	updateGeometry();
	return 0;
}

void ViewFinderGL::stop()
{
	if (buffer_) {
		renderComplete(buffer_);
		buffer_ = nullptr;
		image_ = nullptr;
	}
}

QImage ViewFinderGL::getCurrentImage()
{
	QMutexLocker locker(&mutex_);

	return grabFramebuffer();
}

void ViewFinderGL::render(libcamera::FrameBuffer *buffer, Image *image)
{
	if (buffer_)
		renderComplete(buffer_);

	image_ = image;
	/*
	 * \todo Get the stride from the buffer instead of computing it naively
	 */
	stride_ = buffer->metadata().planes()[0].bytesused / size_.height();
	update();
	buffer_ = buffer;
}

bool ViewFinderGL::selectFormat(const libcamera::PixelFormat &format)
{
	bool ret = true;

	/* Set min/mag filters to GL_LINEAR by default. */
	textureMinMagFilters_ = GL_LINEAR;

	/* Use identity.vert as the default vertex shader. */
	vertexShaderFile_ = ":identity.vert";

	fragmentShaderDefines_.clear();
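	/*
	 * The switch below selects the shaders for each format and fills in
	 * the parameters they need: horzSubSample_ and vertSubSample_ are the
	 * chroma subsampling factors used in doRender() to size the U/V
	 * texture planes, and firstRed_ is the position of the first red
	 * sample in the Bayer pattern, passed to the debayering shaders.
	 */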

	switch (format) {
	case libcamera::formats::NV12:
		horzSubSample_ = 2;
		vertSubSample_ = 2;
		fragmentShaderDefines_.append("#define YUV_PATTERN_UV");
		fragmentShaderFile_ = ":YUV_2_planes.frag";
		break;
	case libcamera::formats::NV21:
		horzSubSample_ = 2;
		vertSubSample_ = 2;
		fragmentShaderDefines_.append("#define YUV_PATTERN_VU");
		fragmentShaderFile_ = ":YUV_2_planes.frag";
		break;
	case libcamera::formats::NV16:
		horzSubSample_ = 2;
		vertSubSample_ = 1;
		fragmentShaderDefines_.append("#define YUV_PATTERN_UV");
		fragmentShaderFile_ = ":YUV_2_planes.frag";
		break;
	case libcamera::formats::NV61:
		horzSubSample_ = 2;
		vertSubSample_ = 1;
		fragmentShaderDefines_.append("#define YUV_PATTERN_VU");
		fragmentShaderFile_ = ":YUV_2_planes.frag";
		break;
	case libcamera::formats::NV24:
		horzSubSample_ = 1;
		vertSubSample_ = 1;
		fragmentShaderDefines_.append("#define YUV_PATTERN_UV");
		fragmentShaderFile_ = ":YUV_2_planes.frag";
		break;
	case libcamera::formats::NV42:
		horzSubSample_ = 1;
		vertSubSample_ = 1;
		fragmentShaderDefines_.append("#define YUV_PATTERN_VU");
		fragmentShaderFile_ = ":YUV_2_planes.frag";
		break;
	case libcamera::formats::YUV420:
		horzSubSample_ = 2;
		vertSubSample_ = 2;
		fragmentShaderFile_ = ":YUV_3_planes.frag";
		break;
	case libcamera::formats::YVU420:
		horzSubSample_ = 2;
		vertSubSample_ = 2;
		fragmentShaderFile_ = ":YUV_3_planes.frag";
		break;
	case libcamera::formats::UYVY:
		fragmentShaderDefines_.append("#define YUV_PATTERN_UYVY");
		fragmentShaderFile_ = ":YUV_packed.frag";
		break;
	case libcamera::formats::VYUY:
		fragmentShaderDefines_.append("#define YUV_PATTERN_VYUY");
		fragmentShaderFile_ = ":YUV_packed.frag";
		break;
	case libcamera::formats::YUYV:
		fragmentShaderDefines_.append("#define YUV_PATTERN_YUYV");
		fragmentShaderFile_ = ":YUV_packed.frag";
		break;
	case libcamera::formats::YVYU:
		fragmentShaderDefines_.append("#define YUV_PATTERN_YVYU");
		fragmentShaderFile_ = ":YUV_packed.frag";
		break;
	case libcamera::formats::ABGR8888:
		fragmentShaderDefines_.append("#define RGB_PATTERN rgb");
		fragmentShaderFile_ = ":RGB.frag";
		break;
	case libcamera::formats::ARGB8888:
		fragmentShaderDefines_.append("#define RGB_PATTERN bgr");
		fragmentShaderFile_ = ":RGB.frag";
		break;
	case libcamera::formats::BGRA8888:
		fragmentShaderDefines_.append("#define RGB_PATTERN gba");
		fragmentShaderFile_ = ":RGB.frag";
		break;
	case libcamera::formats::RGBA8888:
		fragmentShaderDefines_.append("#define RGB_PATTERN abg");
		fragmentShaderFile_ = ":RGB.frag";
		break;
	case libcamera::formats::BGR888:
		fragmentShaderDefines_.append("#define RGB_PATTERN rgb");
		fragmentShaderFile_ = ":RGB.frag";
		break;
	case libcamera::formats::RGB888:
		fragmentShaderDefines_.append("#define RGB_PATTERN bgr");
		fragmentShaderFile_ = ":RGB.frag";
		break;
	case libcamera::formats::SBGGR8:
		firstRed_.setX(1.0);
		firstRed_.setY(1.0);
		vertexShaderFile_ = ":bayer_8.vert";
		fragmentShaderFile_ = ":bayer_8.frag";
		textureMinMagFilters_ = GL_NEAREST;
		break;
	case libcamera::formats::SGBRG8:
		firstRed_.setX(0.0);
		firstRed_.setY(1.0);
		vertexShaderFile_ = ":bayer_8.vert";
		fragmentShaderFile_ = ":bayer_8.frag";
		textureMinMagFilters_ = GL_NEAREST;
		break;
	case libcamera::formats::SGRBG8:
		firstRed_.setX(1.0);
		firstRed_.setY(0.0);
		vertexShaderFile_ = ":bayer_8.vert";
		fragmentShaderFile_ = ":bayer_8.frag";
		textureMinMagFilters_ = GL_NEAREST;
		break;
	case libcamera::formats::SRGGB8:
		firstRed_.setX(0.0);
		firstRed_.setY(0.0);
		vertexShaderFile_ = ":bayer_8.vert";
		fragmentShaderFile_ = ":bayer_8.frag";
		textureMinMagFilters_ = GL_NEAREST;
		break;
	case libcamera::formats::SBGGR10_CSI2P:
		firstRed_.setX(1.0);
		firstRed_.setY(1.0);
		fragmentShaderDefines_.append("#define RAW10P");
		fragmentShaderFile_ = ":bayer_1x_packed.frag";
		textureMinMagFilters_ = GL_NEAREST;
		break;
	case libcamera::formats::SGBRG10_CSI2P:
		firstRed_.setX(0.0);
		firstRed_.setY(1.0);
		fragmentShaderDefines_.append("#define RAW10P");
		fragmentShaderFile_ = ":bayer_1x_packed.frag";
		textureMinMagFilters_ = GL_NEAREST;
		break;
	case libcamera::formats::SGRBG10_CSI2P:
		firstRed_.setX(1.0);
		firstRed_.setY(0.0);
		fragmentShaderDefines_.append("#define RAW10P");
		fragmentShaderFile_ = ":bayer_1x_packed.frag";
		textureMinMagFilters_ = GL_NEAREST;
		break;
	case libcamera::formats::SRGGB10_CSI2P:
		firstRed_.setX(0.0);
		firstRed_.setY(0.0);
		fragmentShaderDefines_.append("#define RAW10P");
		fragmentShaderFile_ = ":bayer_1x_packed.frag";
		textureMinMagFilters_ = GL_NEAREST;
		break;
	case libcamera::formats::SBGGR12_CSI2P:
		firstRed_.setX(1.0);
		firstRed_.setY(1.0);
		fragmentShaderDefines_.append("#define RAW12P");
		fragmentShaderFile_ = ":bayer_1x_packed.frag";
		textureMinMagFilters_ = GL_NEAREST;
		break;
	case libcamera::formats::SGBRG12_CSI2P:
		firstRed_.setX(0.0);
		firstRed_.setY(1.0);
		fragmentShaderDefines_.append("#define RAW12P");
		fragmentShaderFile_ = ":bayer_1x_packed.frag";
		textureMinMagFilters_ = GL_NEAREST;
		break;
	case libcamera::formats::SGRBG12_CSI2P:
		firstRed_.setX(1.0);
		firstRed_.setY(0.0);
		fragmentShaderDefines_.append("#define RAW12P");
		fragmentShaderFile_ = ":bayer_1x_packed.frag";
		textureMinMagFilters_ = GL_NEAREST;
		break;
	case libcamera::formats::SRGGB12_CSI2P:
		firstRed_.setX(0.0);
		firstRed_.setY(0.0);
		fragmentShaderDefines_.append("#define RAW12P");
		fragmentShaderFile_ = ":bayer_1x_packed.frag";
		textureMinMagFilters_ = GL_NEAREST;
		break;
	default:
		ret = false;
		qWarning() << "[ViewFinderGL]:"
			   << "format not supported.";
		break;
	}

	return ret;
}

bool ViewFinderGL::createVertexShader()
{
	/* Create Vertex Shader */
	vertexShader_ = std::make_unique<QOpenGLShader>(QOpenGLShader::Vertex, this);

	/* Compile the vertex shader */
	if (!vertexShader_->compileSourceFile(vertexShaderFile_)) {
		qWarning() << "[ViewFinderGL]:" << vertexShader_->log();
		return false;
	}

	shaderProgram_.addShader(vertexShader_.get());
	return true;
}

bool ViewFinderGL::createFragmentShader()
{
	int attributeVertex;
	int attributeTexture;

	/*
	 * Create the fragment shader, compile it, and add it to the shader
	 * program. The #define macros stored in fragmentShaderDefines_, if
	 * any, are prepended to the source code.
	 */
	fragmentShader_ = std::make_unique<QOpenGLShader>(QOpenGLShader::Fragment, this);

	QFile file(fragmentShaderFile_);
	if (!file.open(QIODevice::ReadOnly | QIODevice::Text)) {
		qWarning() << "Shader" << fragmentShaderFile_ << "not found";
		return false;
	}

	QString defines = fragmentShaderDefines_.join('\n') + "\n";
	QByteArray src = file.readAll();
	src.prepend(defines.toUtf8());

	if (!fragmentShader_->compileSourceCode(src)) {
		qWarning() << "[ViewFinderGL]:" << fragmentShader_->log();
		return false;
	}

	shaderProgram_.addShader(fragmentShader_.get());

	/* Link shader pipeline */
	if (!shaderProgram_.link()) {
		qWarning() << "[ViewFinderGL]:" << shaderProgram_.log();
		close();
	}

	/* Bind shader pipeline for use */
	if (!shaderProgram_.bind()) {
		qWarning() << "[ViewFinderGL]:" << shaderProgram_.log();
		close();
	}

	attributeVertex = shaderProgram_.attributeLocation("vertexIn");
	attributeTexture = shaderProgram_.attributeLocation("textureIn");
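	/*
	 * vertexBuffer_ (filled in initializeGL()) stores the four vertex
	 * coordinates first and the four texture coordinates after them, so
	 * the vertex attribute reads from offset 0 and the texture attribute
	 * from offset 8 * sizeof(GLfloat), two GL_FLOAT components each.
	 */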

	shaderProgram_.enableAttributeArray(attributeVertex);
	shaderProgram_.setAttributeBuffer(attributeVertex,
					  GL_FLOAT,
					  0,
					  2,
					  2 * sizeof(GLfloat));

	shaderProgram_.enableAttributeArray(attributeTexture);
	shaderProgram_.setAttributeBuffer(attributeTexture,
					  GL_FLOAT,
					  8 * sizeof(GLfloat),
					  2,
					  2 * sizeof(GLfloat));

	textureUniformY_ = shaderProgram_.uniformLocation("tex_y");
	textureUniformU_ = shaderProgram_.uniformLocation("tex_u");
	textureUniformV_ = shaderProgram_.uniformLocation("tex_v");
	textureUniformStep_ = shaderProgram_.uniformLocation("tex_step");
	textureUniformSize_ = shaderProgram_.uniformLocation("tex_size");
	textureUniformBayerFirstRed_ = shaderProgram_.uniformLocation("tex_bayer_first_red");

	/* Create the textures. */
	for (std::unique_ptr<QOpenGLTexture> &texture : textures_) {
		if (texture)
			continue;

		texture = std::make_unique<QOpenGLTexture>(QOpenGLTexture::Target2D);
		texture->create();
	}

	return true;
}

void ViewFinderGL::configureTexture(QOpenGLTexture &texture)
{
	glBindTexture(GL_TEXTURE_2D, texture.textureId());
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER,
			textureMinMagFilters_);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
			textureMinMagFilters_);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
}

void ViewFinderGL::removeShader()
{
	if (shaderProgram_.isLinked()) {
		shaderProgram_.release();
		shaderProgram_.removeAllShaders();
	}
}

void ViewFinderGL::initializeGL()
{
	initializeOpenGLFunctions();
	glEnable(GL_TEXTURE_2D);
	glDisable(GL_DEPTH_TEST);

	static const GLfloat coordinates[2][4][2]{
		{
			/* Vertex coordinates */
			{ -1.0f, -1.0f },
			{ -1.0f, +1.0f },
			{ +1.0f, +1.0f },
			{ +1.0f, -1.0f },
		},
		{
			/* Texture coordinates */
			{ 0.0f, 1.0f },
			{ 0.0f, 0.0f },
			{ 1.0f, 0.0f },
			{ 1.0f, 1.0f },
		},
	};

	vertexBuffer_.create();
	vertexBuffer_.bind();
	vertexBuffer_.allocate(coordinates, sizeof(coordinates));

	/* Create Vertex Shader */
	if (!createVertexShader())
		qWarning() << "[ViewFinderGL]: create vertex shader failed.";

	glClearColor(1.0f, 1.0f, 1.0f, 0.0f);
}

void ViewFinderGL::doRender()
{
	switch (format_) {
	case libcamera::formats::NV12:
	case libcamera::formats::NV21:
	case libcamera::formats::NV16:
	case libcamera::formats::NV61:
	case libcamera::formats::NV24:
	case libcamera::formats::NV42:
		/* Activate texture Y */
		glActiveTexture(GL_TEXTURE0);
		configureTexture(*textures_[0]);
		glTexImage2D(GL_TEXTURE_2D,
			     0,
			     GL_LUMINANCE,
			     size_.width(),
			     size_.height(),
			     0,
			     GL_LUMINANCE,
			     GL_UNSIGNED_BYTE,
			     image_->data(0).data());
		shaderProgram_.setUniformValue(textureUniformY_, 0);

		/* Activate texture UV/VU */
		glActiveTexture(GL_TEXTURE1);
		configureTexture(*textures_[1]);
		glTexImage2D(GL_TEXTURE_2D,
			     0,
			     GL_LUMINANCE_ALPHA,
			     size_.width() / horzSubSample_,
			     size_.height() / vertSubSample_,
			     0,
			     GL_LUMINANCE_ALPHA,
			     GL_UNSIGNED_BYTE,
			     image_->data(1).data());
		shaderProgram_.setUniformValue(textureUniformU_, 1);
		break;

	case libcamera::formats::YUV420:
		/* Activate texture Y */
		glActiveTexture(GL_TEXTURE0);
		configureTexture(*textures_[0]);
		glTexImage2D(GL_TEXTURE_2D,
			     0,
			     GL_LUMINANCE,
			     size_.width(),
			     size_.height(),
			     0,
			     GL_LUMINANCE,
			     GL_UNSIGNED_BYTE,
			     image_->data(0).data());
		shaderProgram_.setUniformValue(textureUniformY_, 0);

		/* Activate texture U */
		glActiveTexture(GL_TEXTURE1);
		configureTexture(*textures_[1]);
		glTexImage2D(GL_TEXTURE_2D,
			     0,
			     GL_LUMINANCE,
			     size_.width() / horzSubSample_,
			     size_.height() / vertSubSample_,
			     0,
			     GL_LUMINANCE,
			     GL_UNSIGNED_BYTE,
			     image_->data(1).data());
		shaderProgram_.setUniformValue(textureUniformU_, 1);

		/* Activate texture V */
		glActiveTexture(GL_TEXTURE2);
		configureTexture(*textures_[2]);
		glTexImage2D(GL_TEXTURE_2D,
			     0,
			     GL_LUMINANCE,
			     size_.width() / horzSubSample_,
			     size_.height() / vertSubSample_,
			     0,
			     GL_LUMINANCE,
			     GL_UNSIGNED_BYTE,
			     image_->data(2).data());
		shaderProgram_.setUniformValue(textureUniformV_, 2);
		break;

	case libcamera::formats::YVU420:
		/* Activate texture Y */
		glActiveTexture(GL_TEXTURE0);
		configureTexture(*textures_[0]);
		glTexImage2D(GL_TEXTURE_2D,
			     0,
			     GL_LUMINANCE,
			     size_.width(),
			     size_.height(),
			     0,
			     GL_LUMINANCE,
			     GL_UNSIGNED_BYTE,
			     image_->data(0).data());
		shaderProgram_.setUniformValue(textureUniformY_, 0);

		/* Activate texture V */
		glActiveTexture(GL_TEXTURE2);
		configureTexture(*textures_[2]);
		glTexImage2D(GL_TEXTURE_2D,
			     0,
			     GL_LUMINANCE,
			     size_.width() / horzSubSample_,
			     size_.height() / vertSubSample_,
			     0,
			     GL_LUMINANCE,
			     GL_UNSIGNED_BYTE,
			     image_->data(1).data());
		shaderProgram_.setUniformValue(textureUniformV_, 2);

		/* Activate texture U */
		glActiveTexture(GL_TEXTURE1);
		configureTexture(*textures_[1]);
		glTexImage2D(GL_TEXTURE_2D,
			     0,
			     GL_LUMINANCE,
			     size_.width() / horzSubSample_,
			     size_.height() / vertSubSample_,
			     0,
			     GL_LUMINANCE,
			     GL_UNSIGNED_BYTE,
			     image_->data(2).data());
		shaderProgram_.setUniformValue(textureUniformU_, 1);
		break;

	case libcamera::formats::UYVY:
	case libcamera::formats::VYUY:
	case libcamera::formats::YUYV:
	case libcamera::formats::YVYU:
		/*
		 * Packed YUV formats are stored in a RGBA texture to match the
		 * OpenGL texel size with the 4 bytes repeating pattern in YUV.
		 * The texture width is thus half of the image_ width.
		 */
		glActiveTexture(GL_TEXTURE0);
		configureTexture(*textures_[0]);
		glTexImage2D(GL_TEXTURE_2D,
			     0,
			     GL_RGBA,
			     size_.width() / 2,
			     size_.height(),
			     0,
			     GL_RGBA,
			     GL_UNSIGNED_BYTE,
			     image_->data(0).data());
		shaderProgram_.setUniformValue(textureUniformY_, 0);

		/*
		 * The shader needs the step between two texture pixels in the
		 * horizontal direction, expressed in texture coordinate units
		 * ([0, 1]). There are exactly width - 1 steps between the
		 * leftmost and rightmost texels.
		 */
		shaderProgram_.setUniformValue(textureUniformStep_,
					       1.0f / (size_.width() / 2 - 1),
					       1.0f /* not used */);
		break;

	case libcamera::formats::ABGR8888:
	case libcamera::formats::ARGB8888:
	case libcamera::formats::BGRA8888:
	case libcamera::formats::RGBA8888:
		glActiveTexture(GL_TEXTURE0);
		configureTexture(*textures_[0]);
		glTexImage2D(GL_TEXTURE_2D,
			     0,
			     GL_RGBA,
			     size_.width(),
			     size_.height(),
			     0,
			     GL_RGBA,
			     GL_UNSIGNED_BYTE,
			     image_->data(0).data());
		shaderProgram_.setUniformValue(textureUniformY_, 0);
		break;

	case libcamera::formats::BGR888:
	case libcamera::formats::RGB888:
		glActiveTexture(GL_TEXTURE0);
		configureTexture(*textures_[0]);
		glTexImage2D(GL_TEXTURE_2D,
			     0,
			     GL_RGB,
			     size_.width(),
			     size_.height(),
			     0,
			     GL_RGB,
			     GL_UNSIGNED_BYTE,
			     image_->data(0).data());
		shaderProgram_.setUniformValue(textureUniformY_, 0);
		break;

	case libcamera::formats::SBGGR8:
	case libcamera::formats::SGBRG8:
	case libcamera::formats::SGRBG8:
	case libcamera::formats::SRGGB8:
	case libcamera::formats::SBGGR10_CSI2P:
	case libcamera::formats::SGBRG10_CSI2P:
	case libcamera::formats::SGRBG10_CSI2P:
	case libcamera::formats::SRGGB10_CSI2P:
	case libcamera::formats::SBGGR12_CSI2P:
	case libcamera::formats::SGBRG12_CSI2P:
	case libcamera::formats::SGRBG12_CSI2P:
	case libcamera::formats::SRGGB12_CSI2P:
		/*
		 * Raw Bayer 8-bit, and packed raw Bayer 10-bit/12-bit formats
		 * are stored in a GL_LUMINANCE texture. The texture width is
		 * equal to the stride.
		 */
		glActiveTexture(GL_TEXTURE0);
		configureTexture(*textures_[0]);
		glTexImage2D(GL_TEXTURE_2D,
			     0,
			     GL_LUMINANCE,
			     stride_,
			     size_.height(),
			     0,
			     GL_LUMINANCE,
			     GL_UNSIGNED_BYTE,
			     image_->data(0).data());
		shaderProgram_.setUniformValue(textureUniformY_, 0);
		shaderProgram_.setUniformValue(textureUniformBayerFirstRed_,
					       firstRed_);
		shaderProgram_.setUniformValue(textureUniformSize_,
					       size_.width(), /* in pixels */
					       size_.height());
		shaderProgram_.setUniformValue(textureUniformStep_,
					       1.0f / (stride_ - 1),
					       1.0f / (size_.height() - 1));
		break;

	default:
		break;
	}
}

void ViewFinderGL::paintGL()
{
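	/*
	 * The fragment shader depends on the format selected in setFormat(),
	 * so it is (re)created lazily on the first paint after a format
	 * change.
	 */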
	if (!fragmentShader_)
		if (!createFragmentShader()) {
			qWarning() << "[ViewFinderGL]:"
				   << "create fragment shader failed.";
		}

	if (image_) {
		glClearColor(0.0, 0.0, 0.0, 1.0);
		glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

		doRender();
		glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
	}
}

void ViewFinderGL::resizeGL(int w, int h)
{
	glViewport(0, 0, w, h);
}

QSize ViewFinderGL::sizeHint() const
{
	return size_.isValid() ? size_ : QSize(640, 480);
}