*************
Documentation
*************

Feature Requirements
====================

Device enumeration
------------------

The library shall support enumerating all camera devices available in the
system, including both fixed cameras and hotpluggable cameras. It shall
support cameras plugged and unplugged after the initialization of the
library, and shall offer a mechanism to notify applications of camera plug
and unplug.

The following types of cameras shall be supported:

* Internal cameras designed for point-and-shoot still image and video capture
  usage, either controlled directly by the CPU, or exposed through an
  internal USB bus as a UVC device.

* External UVC cameras designed for video conferencing usage.

Other types of cameras, including analog cameras, depth cameras, thermal
cameras, and external digital picture or movie cameras, are out of scope for
this project.

A hardware device that includes independent camera sensors, such as front and
back sensors in a phone, shall be considered as multiple camera devices for
the purpose of this library.

Independent Camera Devices
--------------------------

When multiple cameras are present in the system and are able to operate
independently from each other, the library shall expose them as multiple
camera devices and support parallel operation without any additional usage
restriction apart from the limitations inherent to the hardware (such as
memory bandwidth, CPU usage or number of CSI-2 receivers for instance).

Independent processes shall be able to use independent camera devices without
interfering with each other. A single camera device shall be usable by a
single process at a time.

Multiple streams support
------------------------

The library shall support multiple video streams running in parallel for each
camera device, within the limits imposed by the system.

Per frame controls
------------------

The library shall support controlling capture parameters for each stream on a
per-frame basis, on a best effort basis based on the capabilities of the
hardware and underlying software stack (including kernel drivers and
firmware). It shall apply capture parameters to the frame they target, and
report the value of the parameters that have effectively been used for each
captured frame.

When a camera device supports multiple streams, the library shall allow both
control of each stream independently, and control of multiple streams
together. Streams that are controlled together shall be synchronized. No
synchronization is required for streams controlled independently.

Capability Enumeration
----------------------

The library shall expose capabilities of each camera device in a way that
allows applications to discover those capabilities dynamically. Applications
shall be allowed to cache capabilities for as long as they are using the
library. If capabilities can change at runtime, the library shall offer a
mechanism to notify applications of such changes. Applications shall not
cache capabilities in long term storage between runs.

Capabilities shall be discovered dynamically at runtime from the device when
possible, and may come, in part or in full, from platform configuration data.
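The exact public API is out of scope for this chapter, but as a rough
illustration of the enumeration and capability discovery requirements, the
following sketch lists the available cameras and their supported controls at
runtime. It is an assumption for illustration only: the class and function
names (``CameraManager``, ``Camera::controls()``, ``ControlInfoMap``) are
borrowed from the public API libcamera exposes today and may differ between
versions::

        #include <iostream>

        #include <libcamera/libcamera.h>

        int main()
        {
                libcamera::CameraManager manager;
                manager.start();

                /* Enumerate all cameras currently available in the system. */
                for (const auto &camera : manager.cameras()) {
                        std::cout << "Camera: " << camera->id() << std::endl;

                        /* Discover the supported controls and their limits dynamically. */
                        for (const auto &[id, info] : camera->controls())
                                std::cout << "  " << id->name() << ": "
                                          << info.toString() << std::endl;
                }

                manager.stop();
                return 0;
        }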
Device Profiles
---------------

The library may define different camera device profiles, each with a minimum
set of required capabilities. Applications may use those profiles to quickly
determine the level of features exposed by a device without parsing the full
list of capabilities.

Camera devices may implement additional capabilities on top of the minimum
required set for the profile they expose.

3A and Image Enhancement Algorithms
-----------------------------------

The camera devices shall implement auto exposure, auto gain and auto white
balance. Camera devices that include a focus lens shall implement auto focus.
Additional image enhancement algorithms, such as noise reduction or video
stabilization, may be implemented.

All algorithms may be implemented in hardware or firmware outside of the
library, or in software in the library. They shall all be controllable by
applications.

The library shall be architected to isolate the 3A and image enhancement
algorithms in a component with a documented API, respectively called the 3A
component and the 3A API. The 3A API shall be stable, and shall allow both
open-source and closed-source implementations of the 3A component.

The library may include statically-linked open-source 3A components, and
shall support dynamically-linked open-source and closed-source 3A components.

Closed-source 3A Component Sandboxing
-------------------------------------

For security purposes, it may be desired to run closed-source 3A components
in a separate process. The 3A API would in such a case be transported over
IPC. The 3A API shall make it possible to use any IPC mechanism that supports
passing file descriptors.

The library may implement an IPC mechanism, and shall support third-party
platform-specific IPC mechanisms through the implementation of a
platform-specific 3A API wrapper. No modification to the library shall be
needed to use such third-party IPC mechanisms.

The 3A component shall not directly access any device node on the system.
Such accesses shall instead be performed through the 3A API. The library
shall validate all accesses and restrict them to what is absolutely required
by 3A components.

V4L2 Compatibility Layer
------------------------

The project shall support traditional V4L2 applications through an additional
libcamera wrapper library. The wrapper library shall trap all accesses to
camera devices through LD_PRELOAD, and route them through libcamera to
emulate a high-level V4L2 camera device. It shall expose camera device
features on a best-effort basis, and aim for the level of features
traditionally available from a UVC camera designed for video conferencing.
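To make the scope of this compatibility layer concrete, the sketch below
shows the kind of calls a traditional V4L2 application issues, and which the
wrapper library would intercept and translate into libcamera operations when
preloaded. The device node path is illustrative only::

        #include <fcntl.h>
        #include <sys/ioctl.h>
        #include <unistd.h>

        #include <cstring>
        #include <iostream>

        #include <linux/videodev2.h>

        int main()
        {
                /* A traditional V4L2 application opens the video device node directly... */
                int fd = open("/dev/video0", O_RDWR);
                if (fd < 0)
                        return 1;

                /* ...and talks to it through the V4L2 ioctl interface. */
                struct v4l2_capability cap;
                std::memset(&cap, 0, sizeof(cap));
                if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0 &&
                    (cap.capabilities & V4L2_CAP_VIDEO_CAPTURE))
                        std::cout << "Capture device: " << cap.card << std::endl;

                /*
                 * With the wrapper library preloaded through LD_PRELOAD, calls
                 * like open() and ioctl() are routed through libcamera instead
                 * of reaching a kernel video device directly.
                 */
                close(fd);
                return 0;
        }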
Android Camera HAL v3 Compatibility
-----------------------------------

The library API shall expose all the features required to implement an
Android Camera HAL v3 on top of libcamera. Some features of the HAL may be
omitted as long as they can be implemented separately in the HAL, such as
JPEG encoding or YUV reprocessing.


Camera Stack
============

[Diagram: the camera stack. Applications (native V4L2 applications, framework
applications such as gstreamer, native libcamera applications and the Android
camera framework) sit on top of the libcamera adaptation layer (V4L2
compatibility layer, camera frameworks, optional language bindings and the
Android camera HAL), which sits on top of the libcamera library. Below the
userspace/kernel boundary, libcamera interacts with the media device, video
device and V4L2 subdev kernel interfaces.]

The camera stack comprises four software layers. From bottom to top:

* The kernel drivers control the camera hardware and expose a low-level
  interface to userspace through the Linux kernel V4L2 family of APIs
  (Media Controller API, V4L2 Video Device API and V4L2 Subdev API).

* The libcamera framework is the core part of the stack. It handles all
  control of the camera devices in its core component, libcamera, and exposes
  a native C++ API to upper layers. Optional language bindings allow
  interfacing to libcamera from other programming languages.

  Those components live in the same source code repository and all together
  constitute the libcamera framework.

* The libcamera adaptation is an umbrella term designating the components
  that interface to libcamera in other frameworks. Notable examples are a
  V4L2 compatibility layer, a gstreamer libcamera element, and an Android
  camera HAL implementation based on libcamera.

  Those components can live in the libcamera project source code in separate
  repositories, or move to their respective project's repository (for
  instance the gstreamer libcamera element).
* The applications and upper level frameworks are based on the libcamera
  framework or libcamera adaptation, and are outside of the scope of the
  libcamera project.


libcamera Architecture
======================

[Diagram: libcamera architecture. The libcamera public API exposes the Camera
Devices Manager and Camera Device objects. Each Camera Device combines a
device-agnostic part with device-specific components: the Image Processing
Algorithms, which run sandboxed, and the Pipeline Handler. These build on a
set of helpers and support classes such as MC & V4L2 support, a buffers
allocator, sandboxing IPC, a plugins manager and a pipeline runner.]

While offering a unified API towards upper layers, and presenting itself as a
single library, libcamera isn't monolithic. It exposes multiple components
through its public API, is built around a set of separate helpers internally,
uses device-specific components and can load dynamic plugins.

Camera Devices Manager
  The Camera Devices Manager provides a view of available cameras in the
  system. It performs cold enumeration and runtime camera management, and
  supports a hotplug notification mechanism in its public API.

  To avoid the cost associated with cold enumeration of all devices at
  application start, and to arbitrate concurrent access to camera devices,
  the Camera Devices Manager could later be split to a separate service,
  possibly with integration in platform-specific device management.

Camera Device
  The Camera Device represents a camera device to upper layers. It exposes
  full control of the device through the public API, and is thus the highest
  level object exposed by libcamera.

  Camera Device instances are created by the Camera Devices Manager. An
  optional method to create new instances could be exposed through the public
  API to speed up initialization when the upper layer knows how to directly
  address camera devices present in the system.
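To illustrate how upper layers are expected to drive a Camera Device, here is
a minimal, non-normative sketch of a capture sequence with a per-frame
control. The names used (``Camera``, ``Request``, ``ControlList``,
``StreamRole``) are borrowed from the public API libcamera ended up with;
this document only states the requirements, so treat the exact calls as an
assumption, and note that buffer allocation and error handling are omitted::

        #include <memory>

        #include <libcamera/libcamera.h>

        using namespace libcamera;

        static void captureOneFrame(CameraManager &manager)
        {
                std::shared_ptr<Camera> camera = manager.cameras().front();
                camera->acquire();

                /* Generate and apply a configuration for a single viewfinder stream. */
                std::unique_ptr<CameraConfiguration> config =
                        camera->generateConfiguration({ StreamRole::Viewfinder });
                config->validate();
                camera->configure(config.get());

                /* Per-frame controls travel with the request they apply to. */
                std::unique_ptr<Request> request = camera->createRequest();
                request->controls().set(controls::Brightness, 0.5f);

                camera->start();
                camera->queueRequest(request.get());

                /*
                 * When the request completes, request->metadata() reports the
                 * control values that were actually applied to that frame.
                 */
        }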
Pipeline Handler
  The Pipeline Handler manages complex pipelines exposed by the kernel
  drivers through the Media Controller and V4L2 APIs. It abstracts pipeline
  handling to hide device-specific details from the rest of the library, and
  implements both pipeline configuration based on stream configuration, and
  pipeline runtime execution and scheduling when needed by the device.

  This component is device-specific and is part of the libcamera code base.
  As such it is covered by the same free software license as the rest of
  libcamera and needs to be contributed upstream by device vendors.

  The Pipeline Handler lives in the same process as the rest of the library,
  and has access to all helpers and kernel camera-related devices.

Image Processing Algorithms
  Together with the hardware image processing and hardware statistics
  collection, the Image Processing Algorithms implement 3A (Auto-Exposure,
  Auto-White Balance and Auto-Focus) and other algorithms. They run on the
  CPU and interact with the kernel camera devices to control hardware image
  processing based on the parameters supplied by upper layers, closing the
  control loop of the ISP.

  This component is device-specific and is loaded as an external plugin. It
  can be part of the libcamera code base, in which case it is covered by the
  same license, or provided externally as an open-source or closed-source
  component.

  The component is sandboxed and can only interact with libcamera through
  internal APIs specifically marked as such. In particular it will have no
  direct access to kernel camera devices, and all its accesses to image and
  metadata will be mediated by dmabuf instances explicitly passed to the
  component. The component must be prepared to run in a process separate from
  the main libcamera process, and to have a very restricted view of the
  system, including no access to networking APIs and limited access to file
  systems.

  The sandboxing mechanism isn't defined by libcamera. One example
  implementation will be provided as part of the project, and platform
  vendors will be able to provide their own sandboxing mechanism as a plugin.

  libcamera should provide a basic implementation of Image Processing
  Algorithms, to serve as a reference for the internal API. Device vendors
  are expected to provide a full-fledged implementation compatible with their
  Pipeline Handler. One goal of the libcamera project is to create an
  environment in which the community will be able to compete with the
  closed-source vendor binaries and develop a high quality open source
  implementation.

Helpers and Support Classes
  While Pipeline Handlers are device-specific, implementations are expected
  to share code due to usage of identical APIs towards the kernel camera
  drivers and the Image Processing Algorithms. This includes without
  limitation handling of the MC and V4L2 APIs, buffer management through
  dmabuf, and pipeline discovery, configuration and scheduling. Such code
  will be factored out to helpers when applicable.

  Other parts of libcamera will also benefit from factoring code out to
  self-contained support classes, even if such code is present only once in
  the code base, in order to keep the source code clean and easy to read.
  This should be the case for instance for plugin management.

V4L2 Compatibility Layer
------------------------

V4L2 compatibility is achieved through a shared library that traps all
accesses to camera devices and routes them to libcamera to emulate high-level
V4L2 camera devices. It is injected in a process address space through
LD_PRELOAD and is completely transparent for applications.

The compatibility layer exposes camera device features on a best-effort
basis, and aims for the level of features traditionally available from a UVC
camera designed for video conferencing.

Android Camera HAL
------------------

Camera support for Android is achieved through a generic Android camera HAL
implementation on top of libcamera. The HAL will implement internally
features required by Android and missing from libcamera, such as JPEG
encoding support.
The Android camera HAL implementation will initially target the LIMITED
hardware level, with support for the FULL level then being gradually
implemented.

/* SPDX-License-Identifier: BSD-2-Clause */
/*
* Copyright (C) 2019, Raspberry Pi Ltd
*
* alsc.cpp - ALSC (auto lens shading correction) control algorithm
*/
#include <math.h>
#include <numeric>
#include <libcamera/base/log.h>
#include <libcamera/base/span.h>
#include "../awb_status.h"
#include "alsc.h"
/* Raspberry Pi ALSC (Auto Lens Shading Correction) algorithm. */
using namespace RPiController;
using namespace libcamera;
LOG_DEFINE_CATEGORY(RPiAlsc)
#define NAME "rpi.alsc"
static const int X = AlscCellsX;
static const int Y = AlscCellsY;
static const int XY = X * Y;
static const double InsufficientData = -1.0;
Alsc::Alsc(Controller *controller)
: Algorithm(controller)
{
asyncAbort_ = asyncStart_ = asyncStarted_ = asyncFinished_ = false;
asyncThread_ = std::thread(std::bind(&Alsc::asyncFunc, this));
}
Alsc::~Alsc()
{
{
std::lock_guard<std::mutex> lock(mutex_);
asyncAbort_ = true;
}
asyncSignal_.notify_one();
asyncThread_.join();
}
char const *Alsc::name() const
{
return NAME;
}
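/*
 * Generate a default luminance correction table from the "corner_strength"
 * and "asymmetry" tuning parameters. The shape reproduces the cos^4
 * vignetting law, as noted below.
 */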
static void generateLut(double *lut, boost::property_tree::ptree const &params)
{
double cstrength = params.get<double>("corner_strength", 2.0);
if (cstrength <= 1.0)
LOG(RPiAlsc, Fatal) << "Alsc: corner_strength must be > 1.0";
double asymmetry = params.get<double>("asymmetry", 1.0);
if (asymmetry < 0)
LOG(RPiAlsc, Fatal) << "Alsc: asymmetry must be >= 0";
double f1 = cstrength - 1, f2 = 1 + sqrt(cstrength);
double R2 = X * Y / 4 * (1 + asymmetry * asymmetry);
int num = 0;
for (int y = 0; y < Y; y++) {
for (int x = 0; x < X; x++) {
double dy = y - Y / 2 + 0.5,
dx = (x - X / 2 + 0.5) * asymmetry;
double r2 = (dx * dx + dy * dy) / R2;
lut[num++] =
(f1 * r2 + f2) * (f1 * r2 + f2) /
(f2 * f2); /* this reproduces the cos^4 rule */
}
}
}
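/* Read an explicit lens shading table of exactly X * Y values from the tuning data. */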
static void readLut(double *lut, boost::property_tree::ptree const &params)
{
int num = 0;
const int maxNum = XY;
for (auto &p : params) {
if (num == maxNum)
LOG(RPiAlsc, Fatal) << "Alsc: too many entries in LSC table";
lut[num++] = p.second.get_value<double>();
}
if (num < maxNum)
LOG(RPiAlsc, Fatal) << "Alsc: too few entries in LSC table";
}
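/*
 * Read a list of colour temperature calibrations. Entries must appear in
 * increasing "ct" order and each must provide a table of exactly X * Y values.
 */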
static void readCalibrations(std::vector<AlscCalibration> &calibrations,
boost::property_tree::ptree const &params,
std::string const &name)
{
if (params.get_child_optional(name)) {
double lastCt = 0;
for (auto &p : params.get_child(name)) {
double ct = p.second.get<double>("ct");
if (ct <= lastCt)
LOG(RPiAlsc, Fatal)
<< "Alsc: entries in " << name << " must be in increasing ct order";
AlscCalibration calibration;
calibration.ct = lastCt = ct;
boost::property_tree::ptree const &table =
p.second.get_child("table");
int num = 0;
for (auto it = table.begin(); it != table.end(); it++) {
if (num == XY)
LOG(RPiAlsc, Fatal)
<< "Alsc: too many values for ct " << ct << " in " << name;
calibration.table[num++] =
it->second.get_value<double>();
}
if (num != XY)
LOG(RPiAlsc, Fatal)
<< "Alsc: too few values for ct " << ct << " in " << name;
calibrations.push_back(calibration);
LOG(RPiAlsc, Debug)
<< "Read " << name << " calibration for ct " << ct;
}
}
}
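/*
 * Parse the ALSC tuning parameters: general algorithm settings, the luminance
 * table (either generated from corner_strength/asymmetry or read from
 * luminance_lut) and the per colour temperature Cr/Cb calibrations.
 */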
void Alsc::read(boost::property_tree::ptree const &params)
{
config_.framePeriod = params.get<uint16_t>("frame_period", 12);
config_.startupFrames = params.get<uint16_t>("startup_frames", 10);
config_.speed = params.get<double>("speed", 0.05);
double sigma = params.get<double>("sigma", 0.01);
config_.sigmaCr = params.get<double>("sigma_Cr", sigma);
config_.sigmaCb = params.get<double>("sigma_Cb", sigma);
config_.minCount = params.get<double>("min_count", 10.0);
config_.minG = params.get<uint16_t>("min_G", 50);
config_.omega = params.get<double>("omega", 1.3);
config_.nIter = params.get<uint32_t>("n_iter", X + Y);
config_.luminanceStrength =
params.get<double>("luminance_strength", 1.0);
for (int i = 0; i < XY; i++)
config_.luminanceLut[i] = 1.0;
if (params.get_child_optional("corner_strength"))
generateLut(config_.luminanceLut, params);
else if (params.get_child_optional("luminance_lut"))
readLut(config_.luminanceLut,
params.get_child("luminance_lut"));
else
LOG(RPiAlsc, Warning)
<< "no luminance table - assume unity everywhere";
readCalibrations(config_.calibrationsCr, params, "calibrations_Cr");
readCalibrations(config_.calibrationsCb, params, "calibrations_Cb");
config_.defaultCt = params.get<double>("default_ct", 4500.0);
config_.threshold = params.get<double>("threshold", 1e-3);
config_.lambdaBound = params.get<double>("lambda_bound", 0.05);
}
static double getCt(Metadata *metadata, double defaultCt);
static void getCalTable(double ct, std::vector<AlscCalibration> const &calibrations,
double calTable[XY]);
static void resampleCalTable(double const calTableIn[XY], CameraMode const &cameraMode,
double calTableOut[XY]);
static void compensateLambdasForCal(double const calTable[XY], double const oldLambdas[XY],
double newLambdas[XY]);
static void addLuminanceToTables(double results[3][Y][X], double const lambdaR[XY], double lambdaG,
double const lambdaB[XY], double const luminanceLut[XY],
double luminanceStrength);
void Alsc::initialise()
{
frameCount2_ = frameCount_ = framePhase_ = 0;
firstTime_ = true;
ct_ = config_.defaultCt;
/* The lambdas are initialised in the SwitchMode. */
}
void Alsc::waitForAysncThread()
{
if (asyncStarted_) {
asyncStarted_ = false;
std::unique_lock<std::mutex> lock(mutex_);
syncSignal_.wait(lock, [&] {
return asyncFinished_;
});
asyncFinished_ = false;
}
}
static bool compareModes(CameraMode const &cm0, CameraMode const &cm1)
{
/*
* Return true if the modes crop from the sensor significantly differently,
* or if the user transform has changed.
*/
if (cm0.transform != cm1.transform)
return true;
int leftDiff = abs(cm0.cropX - cm1.cropX);
int topDiff = abs(cm0.cropY - cm1.cropY);
int rightDiff = fabs(cm0.cropX + cm0.scaleX * cm0.width -
cm1.cropX - cm1.scaleX * cm1.width);
int bottomDiff = fabs(cm0.cropY + cm0.scaleY * cm0.height -
cm1.cropY - cm1.scaleY * cm1.height);
/*
* These thresholds are a rather arbitrary amount chosen to trigger
* when carrying on with the previously calculated tables might be
* worse than regenerating them (but without the adaptive algorithm).
*/
int thresholdX = cm0.sensorWidth >> 4;
int thresholdY = cm0.sensorHeight >> 4;
return leftDiff > thresholdX || rightDiff > thresholdX ||
topDiff > thresholdY || bottomDiff > thresholdY;
}
void Alsc::switchMode(CameraMode const &cameraMode,
[[maybe_unused]] Metadata *metadata)
{
/*
* We're going to start over with the tables if there's any "significant"
* change.
*/
bool resetTables = firstTime_ || compareModes(cameraMode_, cameraMode);
/* Believe the colour temperature from the AWB, if there is one. */
ct_ = getCt(metadata, ct_);
/* Ensure the other thread isn't running while we do this. */
waitForAysncThread();
cameraMode_ = cameraMode;
/*
* We must resample the luminance table like we do the others, but it's
* fixed so we can simply do it up front here.
*/
resampleCalTable(config_.luminanceLut, cameraMode_, luminanceTable_);
if (resetTables) {
/*
* Upon every "table reset", arrange for something sensible to be
* generated. Construct the tables for the previous recorded colour
* temperature. In order to start over from scratch we initialise
* the lambdas, but the rest of this code then echoes the code in
* doAlsc, without the adaptive algorithm.
*/
for (int i = 0; i < XY; i++)
lambdaR_[i] = lambdaB_[i] = 1.0;
double calTableR[XY], calTableB[XY], calTableTmp[XY];
getCalTable(ct_, config_.calibrationsCr, calTableTmp);
resampleCalTable(calTableTmp, cameraMode_, calTableR);
getCalTable(ct_, config_.calibrationsCb, calTableTmp);
resampleCalTable(calTableTmp, cameraMode_, calTableB);
compensateLambdasForCal(calTableR, lambdaR_, asyncLambdaR_);
compensateLambdasForCal(calTableB, lambdaB_, asyncLambdaB_);
addLuminanceToTables(syncResults_, asyncLambdaR_, 1.0, asyncLambdaB_,
luminanceTable_, config_.luminanceStrength);
memcpy(prevSyncResults_, syncResults_, sizeof(prevSyncResults_));
framePhase_ = config_.framePeriod; /* run the algo again asap */
firstTime_ = false;
}
}
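/* Copy the tables computed by the asynchronous thread into the synchronous results. */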
void Alsc::fetchAsyncResults()
{
LOG(RPiAlsc, Debug) << "Fetch ALSC results";
asyncFinished_ = false;
asyncStarted_ = false;
memcpy(syncResults_, asyncResults_, sizeof(syncResults_));
}
double getCt(Metadata *metadata, double defaultCt)
{
AwbStatus awbStatus;
awbStatus.temperatureK = defaultCt; /* in case nothing found */
if (metadata->get("awb.status", awbStatus) != 0)
LOG(RPiAlsc, Debug) << "no AWB results found, using "
<< awbStatus.temperatureK;
else
LOG(RPiAlsc, Debug) << "AWB results found, using "
<< awbStatus.temperatureK;
return awbStatus.temperatureK;
}
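/*
 * Copy the AWB zone statistics, dividing out the ALSC gains recorded in the
 * status so as to recover the unshaded (uncorrected) values.
 */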
static void copyStats(bcm2835_isp_stats_region regions[XY], StatisticsPtr &stats,
AlscStatus const &status)
{
bcm2835_isp_stats_region *inputRegions = stats->awb_stats;
double *rTable = (double *)status.r;
double *gTable = (double *)status.g;
double *bTable = (double *)status.b;
for (int i = 0; i < XY; i++) {
regions[i].r_sum = inputRegions[i].r_sum / rTable[i];
regions[i].g_sum = inputRegions[i].g_sum / gTable[i];
regions[i].b_sum = inputRegions[i].b_sum / bTable[i];
regions[i].counted = inputRegions[i].counted;
/* (don't care about the uncounted value) */
}
}
void Alsc::restartAsync(StatisticsPtr &stats, Metadata *imageMetadata)
{
LOG(RPiAlsc, Debug) << "Starting ALSC calculation";
/*
* Get the current colour temperature. It's all we need from the
* metadata. Default to the last CT value (which could be the default).
*/
ct_ = getCt(imageMetadata, ct_);
/*