The frame context is used to store data used for processing that frame.
It is later used to either act as input for other algorithms or to fill
the metadata. For the colour temperature this is not needed, as the
metadata shall not contain the value that was active when the image was
processed, but the value that was calculated based on the statistics for
that image. There is no functional change.
Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Daniel Scally <dan.scally@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
When the color gains are set manually it is possible to specify a
gain that wraps around the hardware limits. It would also be possible to
further tune the floating point limits, but that is an error-prone
approach. The limits are therefore imposed on the integer values, just
before they are written to the hardware. This noticeably reduces some
oscillations in the AWB regulation.
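As an illustration of the idea, a minimal sketch of clamping the
fixed-point gain right before the register write; the names, fixed-point
format and limits below are assumptions, not the actual rkisp1 code:

    #include <algorithm>
    #include <cmath>
    #include <cstdint>

    constexpr int32_t kGainRegMin = 0;     /* assumed hardware minimum */
    constexpr int32_t kGainRegMax = 0x3ff; /* assumed hardware maximum */

    /* Convert a floating point colour gain to the fixed-point register
     * value and clamp it to the hardware range at the last moment. */
    uint16_t gainToRegister(double gain, unsigned int fractionalBits)
    {
            int32_t raw = static_cast<int32_t>(std::lround(gain * (1 << fractionalBits)));
            return static_cast<uint16_t>(std::clamp(raw, kGainRegMin, kGainRegMax));
    }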
Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Daniel Scally <dan.scally@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Add the black level value for the OV5675 camera sensor.
According to the datasheet, the default value is 0x10 with a 10-bit width.
However, the Linux kernel driver initializes the black level target value
to 0x40. Set the value to the same as in the kernel driver, but scaled
to 16 bits.
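For reference, the scaling is a left shift by the bit-depth difference
(a sketch of the arithmetic, not the helper code itself):

    /* 0x40 on the sensor's 10-bit scale becomes 0x1000 on the 16-bit
     * scale used by libcamera: 0x40 << (16 - 10) = 0x1000 (4096). */
    constexpr unsigned int kernelBlackLevel10Bit = 0x40;
    constexpr unsigned int blackLevel16Bit = kernelBlackLevel10Bit << (16 - 10);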
Signed-off-by: Daniel Semkowicz <dse@thaumatec.com>
Reviewed-by: Quentin Schulz <quentin.schulz@cherry.de>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Open source Qt 5 has effectively been end-of-life since the release
of Qt 6, and Qt 6 now has current LTS releases.
This change ports qcam to Qt 6.2 and drops some of the baggage related
to Qt 5 that is no longer applicable.
Signed-off-by: Neal Gompa <neal@gompa.dev>
Reviewed-by: Eric Curtin <ecurtin@redhat.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The j721e-csi2rx driver pipeline uses no converters, so enable support
for the software ISP plugin. This is handy for boards with the AM62 SoC
(like the BeaglePlay) that have no hardware ISP.
Tested with IMX519 on SK-AM62 running a kernel built with dmabuf heap
support.
Signed-off-by: Jai Luthra <j-luthra@ti.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Tested-by: Milan Zamazal <mzamazal@redhat.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
V4L2VideoDevice uses the caps to determine which kind of buffers to
use with the video device in two different cases:
1. V4L2VideoDevice::open()
2. V4L2VideoDevice::[get|try|set]Format()
The order in which the caps are checked is different between
these two cases. This is a problem for /dev/video# nodes which support
both video-capture and metadata buffers. open() sets bufferType_ to
V4L2_BUF_TYPE_VIDEO_CAPTURE[_MPLANE] in this case, whereas
[get|try|set]Format() will call [get|set]FormatMeta(), which does not
work with V4L2_BUF_TYPE_VIDEO_CAPTURE[_MPLANE] buffers.
Switch [get|try|set]Format() to use bufferType_ to determine what sort
of buffers they should be operating on, leaving the V4L2VideoDevice
code with only a single place where the decision is made about what
sort of buffers to use for a specific /dev/video# node.
This will also make it possible to modify open() in the future to take
a bufferType argument, to allow overriding the default bufferType it
selects for /dev/video# nodes which are capable of supporting more than
one buffer type.
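A minimal sketch of the single decision point, using the standard V4L2
buffer type constants; the helper name is hypothetical and the real
V4L2VideoDevice logic is more involved:

    #include <linux/videodev2.h>

    /* Hypothetical helper: once bufferType_ is fixed at open() time, the
     * format calls can branch on it instead of re-checking the caps. */
    static bool isMetaBufferType(enum v4l2_buf_type bufferType)
    {
            return bufferType == V4L2_BUF_TYPE_META_CAPTURE ||
                   bufferType == V4L2_BUF_TYPE_META_OUTPUT;
    }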
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Keep the image aspect ratio when displaying in the viewfinder.
When the window is adjusted to a size that differs in aspect ratio to
the image, keep the image centered in the main window.
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
We have all these neat tuning files. Unfortunately we forgot to install
many of them.
Signed-off-by: Robert Mader <robert.mader@collabora.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The context parameter of the BlackLevelCorrection::init() function is
now used. Drop the [[maybe_unused]] attribute.
Fixes: 50c28e135100 ("ipa: rkisp1: blc: Query black levels from camera sensor helper")
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
Enable the simple pipeline handler with software ISP for the IPU6 now
that the IPU6 CSI2 receiver (aka the isys driver) has landed in
media_staging/master.
Signed-off-by: Dennis Bonke <admin@dennisbonke.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Move the black levels from the tuning files that contained a BLC block
into the camera sensor helpers.
ov4689.yaml had 66@12bit while the datasheet states 64@12bit. Use the
value from the datasheet (scaled to 16 bits).
ov5640.yaml had 256@12bit while the datasheet states 16@10bit. Looking
at the commit message, the 256 most likely stems from the imx219 tuning
file, and 16@10bit is the same as the 64@12bit from the ov4689. The
datasheet value seems more likely and is therefore used.
Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The black levels for imx219 and imx258 are now contained in the camera
sensor helpers. Remove them from the tuning file for the imx219. Add a
BLC entry to the imx258 tuning file.
Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Add sensor black levels to the metadata of the rkisp1 pipeline.
Additionally enable raw support for this algorithm and add it to
uncalibrated.yaml, so that black levels get reported when capturing
tuning images. This is a bit of a hack, because no actual black level
correction takes place in raw mode, but it is the easiest way to get
the black level reported for raw streams.
Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
As the camera sensor helper now has the ability to provide the black
level, use it. Black levels can still be overridden by the tuning
file, but the direction is to remove them from the tuning files and move
them into the sensor helpers.
Additionally, interpret all values on a 16-bit scale. The conversion to
the scale required by the hardware is done in process(). This ensures
all the values inside libcamera are on the same scale, and is in
preparation for the i.MX8MP, where black levels are based on a 20-bit
scale. Note that this breaks existing tuning files. The tuning files
distributed with libcamera will be fixed in a later patch.
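As a sketch of the conversion done in process(), a black level stored on
a 16-bit scale can be rescaled to the hardware bit depth by shifting;
the function below is illustrative, with the hardware depth passed in as
a parameter:

    #include <cstdint>

    /* Rescale a black level from libcamera's 16-bit scale to the scale
     * expected by the hardware (e.g. 20 bits on the i.MX8MP). */
    uint32_t blackLevelToHardware(uint32_t blackLevel16, unsigned int hwBits)
    {
            if (hwBits >= 16)
                    return blackLevel16 << (hwBits - 16);
            return blackLevel16 >> (16 - hwBits);
    }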
Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
To be able to query the black levels, the black level correction
algorithm needs access to the camera sensor helper. Allow this by moving
the camHelper_ member from IPARkISP1 into IPAContext.
Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
For a proper tuning process we need to know the sensor black levels. In
most cases these are fixed and not reported by the kernel driver. Store
them inside the sensor helpers for later retrieval by the algorithms.
Add black level values corresponding to the data pedestal for three
initial sensors, as documented in their datasheets. More should be
added, eventually filling the gaps for all supported sensors.
Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The converter interface uses an unsigned int output stream index to map
to the output frame buffers. This makes it cumbersome to implement new
converters, because one has to keep additional bookkeeping around
to track the streams with their correct indexes.
The v4l2_converter_m2m and the simple pipeline handler are adapted to
use the new interface. This work roped in the software ISP as well,
which also seems to use indexes (although it doesn't implement the
converter interface) because of the common conversionQueue_ queue used
for converter_ and swIsp_.
The logPrefix is no longer able to generate an index from a stream, and
is updated to be more expressive by reporting the stream configuration
instead, for example reporting "1920x1080-MJPEG" in place of
"stream0".
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Tested-by: Andrei Konovalov <andrey.konovalov.ynk@gmail.com> # sm8250 RB5
|
|
Rename the private Stream class from V4L2M2MConverter::Stream to
V4L2M2MConverter::V4L2M2MStream. This is done to improve the readability
of the code when we drop the handling of streams by index in a
subsequent patch.
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
Currently the soft ISP produces a single output stream. Hence, drop
the unnecessary check for stream indexes.
Another reason to drop the check is that the stream indexes are keys of
the outputs std::map<> and are therefore already unique, so checking
for unique stream indexes is redundant.
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
The streams sanity check tries to determine if all the stream indexes
passed in the outputs std::map<> are unique. However, since the data
container is a std::map<>, all its keys (stream indexes in this case)
are already unique.
Instead, rectify the sanity check to ensure that all the framebuffers
passed in the outputs std::map<> are unique to each index, so that no
two stream indexes share the same framebuffer. Update the comment to
reflect the change.
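A sketch of such a check, assuming a simplified map from stream index to
frame buffer pointer (the real types live in the software ISP code):

    #include <map>
    #include <set>

    struct FrameBuffer; /* stand-in for libcamera::FrameBuffer */

    bool framebuffersAreUnique(const std::map<unsigned int, FrameBuffer *> &outputs)
    {
            std::set<FrameBuffer *> seen;
            for (const auto &output : outputs) {
                    /* insert() reports in .second whether the buffer was
                     * already associated with another index. */
                    if (!seen.insert(output.second).second)
                            return false;
            }
            return true;
    }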
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
The 16-bit padded raw 10 and raw 12 formats are stored in memory in
little endian order, regardless of the machine's endianness. Read pixel
data as uint8_t values and hardcode bit shifting to little endian to fix
scanline packing.
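The fix boils down to assembling each padded sample from two bytes with
an explicit little-endian shift, along these lines (illustrative, not
the exact DNGWriter code):

    #include <cstdint>

    /* Read one 16-bit padded raw sample stored in little-endian order,
     * regardless of the host machine's endianness. */
    static uint16_t readSampleLE(const uint8_t *src)
    {
            return static_cast<uint16_t>(src[0] | (src[1] << 8));
    }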
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Stefan Klug <stefan.klug@ideasonboard.com>
|
|
The 16-bit padded raw 10 and raw 12 formats are stored in memory in
little endian order, regardless of the machine's endianness. Swap the
16-bit values on big-endian machines when reading pixels from memory to
generate thumbnails.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Stefan Klug <stefan.klug@ideasonboard.com>
|
|
Add support for RAW10 and RAW12 to the dng_writer. This is needed on
the i.MX8MP to produce tuning images. Both formats were tested on a
Debix SoM with an IMX335.
Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Daniel Scally <dan.scally@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Add a thumbnail function for raw formats that are 16-bit aligned.
This is needed for the upcoming RAW10 and RAW12 implementation.
Use the new function for RAW16, as thumbScanlineRaw_CSI2P produces
incorrect results for that format (it averages over adjacent bytes,
which only works for the CSI-2 packed formats).
Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The old names led to confusion. Rename them to better express the intent.
Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Daniel Scally <dan.scally@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Add support for RAW16 formats to the DNGWriter helpers so that we can
produce DNG files from the mali-c55.
Signed-off-by: Daniel Scally <dan.scally@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Stefan Klug <stefan.klug@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The gcc used in my current buildroot (Version 12.3) errors out with
-Wmaybe-uninitialized. Fix that.
Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
|
|
In libtiff version 4.5.1 and later the CFA* tags are missing. This got
fixed in https://gitlab.com/libtiff/libtiff/-/commit/49856998c3d82e65444b47bb4fb11b7830a0c2be
Unfortunately the fix has not been released yet, but the faulty libtiff
is shipped in the current buildroot. As a local fix is pretty easy and
has no side effects, let's work around that.
Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Multiple local functions are defined in the global namespace without the
static keyword. This compiles fine for now, but will cause missing
declaration warnings when we enable -Wmissing-declarations. To prepare
for that, move the functions into an anonymous namespace.
While at it, for consistency, include an existing static function in the
namespace and drop the static keyword.
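The pattern, in general form (the function names here are placeholders,
not the ones touched by this patch):

    namespace {

    /* Previously a plain global function; inside the anonymous namespace
     * it gets internal linkage and no separate declaration is needed. */
    int parseValue(const char *s)
    {
            return s ? s[0] : 0;
    }

    /* Previously 'static int defaultValue()'; the static keyword is
     * dropped as it is redundant inside the anonymous namespace. */
    int defaultValue()
    {
            return 42;
    }

    } /* namespace */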
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
_FORTIFY_SOURCE redirects the open*() calls to __open*_2() functions.
The libcamera V4L2 adaptation layer intercepts those functions to
support applications compiled with _FORTIFY_SOURCE. When _FORTIFY_SOURCE
is not enabled, the C library headers will not provide declarations for
the fortified functions, which will cause missing declaration warnings
when we enable them.
Fix this by disabling the -Wmissing-declarations warnings selectively
for the _FORTIFY_SOURCE functions. To avoid sprinkling pragmas around,
move the relevant function definitions next to each other.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The close() and ioctl() functions are declared in the unistd.h and
sys/ioctl.h headers. Include them to provide the declarations.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The init_py_*() functions are called by the top-level entry point of the
libcamera Python module to initialize different parts of the bindings.
They are declared in py_main.cpp where they are called, and defined in
separate compilation units. This results in functions being defined
without a corresponding declaration, and will generate warnings when we
enable -Wmissing-declarations.
Fix this by moving the function declarations from py_main.cpp to
py_main.h, and including py_main.h in the various compilation units that
need it.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Multiple local functions are defined in the global namespace without the
static keyword. This compiles fine for now, but will cause missing
declaration warnings when we enable -Wmissing-declarations. To prepare
for that, move the functions into an anonymous namespace.
While at it, for consistency, include an existing static function in the
namespace and drop the static keyword.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
|
|
This commit moves the check that determines whether the mode argument of
`open*()` exists into a separate function.
With that, the check is fixed because previously it failed to account
for the fact that `O_TMPFILE` is not a power of two.
Furthermore, add `assert()`s in the fortified variants that ensure that
no mode is required by the specified flags.
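The pitfall is that `O_TMPFILE` is not a single bit: it is defined to
include `O_DIRECTORY`, so a plain `flags & O_TMPFILE` test can match
flag sets that don't actually request a temporary file. A sketch of such
a check, with a hypothetical helper name:

    #include <fcntl.h>

    /* Hypothetical helper: do these open() flags require a mode argument? */
    static bool openNeedsMode(int flags)
    {
            /* Compare against the full O_TMPFILE mask rather than testing
             * a single bit, since O_TMPFILE is a combination of bits. */
            return (flags & O_CREAT) || ((flags & O_TMPFILE) == O_TMPFILE);
    }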
Signed-off-by: Barnabás Pőcze <pobrn@protonmail.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
To avoid confusion, have `__open64_2()` and `__openat64_2()` delegate to
`open64()` and `openat64()`, respectively, instead of `open()` and
`openat()`.
This does not change the behaviour because
`V4L2CompatManager::instance()->openat()` calls `openat64()` internally,
and that adds the `O_LARGEFILE` flag unconditionally.
Fixes: 1023107b6405 ("v4l2: v4l2_compat: Intercept open64, openat64, and mmap64")
Signed-off-by: Barnabás Pőcze <pobrn@protonmail.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The matrixValidateYaml() function is declared in the libcamera::ipa::
namespace, but defined in the libcamera:: namespace. This causes a
dynamic linking error at runtime. Fix it by moving the function
definition.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
The YamlObject::get<T>() function template has a specialization for
double but not for float. When used in an IPA module, the issue is
only caught at module load time, when dynamic symbols are resolved,
causing errors such as
Failed to open IPA module shared object: /usr/lib/libcamera/ipa_rkisp1.so: undefined symbol: _ZNK9libcamera10YamlObject6GetterIfE3getERK_
Fix it by adding a float specialization. The alternative would be to use
double only in IPA modules, but the lack of enforcement at compile time
makes this dangerous.
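A simplified stand-in for the problem (not the actual YamlObject code):
the member template is declared in the header, each supported type gets
an explicit specialization in the library, and a missing float
specialization only shows up when the module is loaded:

    #include <optional>
    #include <string>

    struct YamlValue {
            std::string raw;

            /* Declared for all T; only explicitly specialized types are
             * actually defined in the library. */
            template<typename T>
            std::optional<T> get() const;
    };

    template<>
    std::optional<double> YamlValue::get<double>() const
    {
            return std::stod(raw);
    }

    /* Without this specialization, code calling get<float>() compiles but
     * leaves an undefined symbol that surfaces at dlopen() time. */
    template<>
    std::optional<float> YamlValue::get<float>() const
    {
            return std::stof(raw);
    }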
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Stefan Klug <stefan.klug@ideasonboard.com>
|
|
The frame context agc.update variable is used to indicate if the ISP
histogram metering parameters need to be updated. Rename it to
updateMetering to make its usage more explicit.
Suggested-by: Paul Elder <paul.elder@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
In order to be more compatible with modern hardware and APIs. This
notably allows GL implementations to directly import the buffers more
often and seems to be required for Wayland.
Furthermore, as we already enforce an 8-byte stride, these formats work
better for clients that don't support padding, such as libwebrtc at the
time of writing.
Tested devices:
- Librem5
- PinePhone
- Thinkpad X13s
Signed-off-by: Robert Mader <robert.mader@collabora.com>
Tested-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Milan Zamazal <mzamazal@redhat.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
All users of the Pwl::readYaml() function have been removed. The
function is not used, and is deprecated in favour of YamlObject::get().
Drop it.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Now that deserializing a Pwl object from YAML data is possible using the
YamlObject::get() function, replace all usage of Pwl::readYaml() to
prepare for its removal.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Tested-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> # On Raspberry Pi 4
|
|
The AGC algorithm implements the AeEnable control at runtime. Move the
declaration of the control from the IPA module to the algorithm.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The sensor's maximum shutter speed is clamped by the maximum frame
duration specified in requests. If the requested maximum frame duration
is lower than the sensor's minimum shutter speed, the Agc::process()
function will pass a minimum value higher than the maximum to the
setLimits() function, resulting in an assertion failure. Fix it by
clamping the value to both the lower and the upper bounds.
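In other words, something along the lines of the following (variable
names are illustrative):

    #include <algorithm>
    #include <cstdint>

    /* Durations in microseconds, for illustration only. */
    uint64_t clampMaxShutter(uint64_t maxFrameDuration,
                             uint64_t sensorMinShutter, uint64_t sensorMaxShutter)
    {
            /* Clamping to both bounds guarantees min <= value <= max even
             * when the requested frame duration is below the sensor's
             * minimum shutter speed. */
            return std::clamp(maxFrameDuration, sensorMinShutter, sensorMaxShutter);
    }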
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The AGC active state and frame context both contain a variable named
maxShutterSpeed. The variable is used to limit the maximum shutter speed
when computing the exposure time and gains, but stores the maximum frame
duration, not clamped by the sensor's maximum shutter speed. Rename it
to maxFrameDuration.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
The effective exposure value for each frame is split into shutter time,
analog gain and digital gain based on the AGC constraint mode and
exposure mode. The algorithm uses the modes from the active state, which
tracks the latest queued request, instead of the frame context, which
tracks the value of the controls requested for that frame. Fix it by
using the correct modes.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
The condition

    if (std::pow(std::floor(root), 2) < factor)
            predivider = static_cast<uint8_t>(std::ceil(root));
    else
            predivider = static_cast<uint8_t>(std::floor(root));

can only be false when the factor's root is an integer. In that case,
std::ceil(root) and std::floor(root) will be equal. The computation can
thus be simplified by always rounding up.
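With that observation, the assignment collapses to:

    predivider = static_cast<uint8_t>(std::ceil(root));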
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The ISP histogram parameters depend on the AE metering mode, but not on
the other AE algorithm controls. The exposure mode, constraint mode and
frame duration limits influence the behaviour of the algorithm, but not
the histogram computation parameters. Update the histogram parameters
only when the AE metering mode changes.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
The Agc::computeHistogramPredivider() function doesn't need to modify
its size parameter. Make it const.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
The IPAFrameContext AGC documentation is lagging behind the
implementation and is missing many variables. Document them.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
The IPAActiveState AGC documentation is lagging behind the
implementation and is missing many variables. Document them.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
|