|
Plumb the dw100 dewarper as a V4L2M2M converter in the rkisp1 pipeline
handler. If the dewarper is found, it is instantiated and buffers are
exported from it instead of from the RkISP1Path. Internal buffers are
allocated for the RkISP1Path in the case where the dewarper is going to
be used.
The rkisp1 pipeline handler now supports the scaler crop control through
the converter. Register the ScalerCrop control for the cameras created
by the rkisp1 pipeline handler.
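A hedged sketch of the registration this enables (the member names, the
bounds query and the ControlInfo arguments are assumptions made for
illustration, not the actual implementation):

    /* Register ScalerCrop from the dewarper's input crop bounds. */
    if (dewarper_) {
        auto [minCrop, maxCrop] = dewarper_->inputCropBounds(&mainPathStream);
        ctrls.emplace(&controls::ScalerCrop,
                      ControlInfo(minCrop, maxCrop, maxCrop));
    }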
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
Currently the rkisp1 pipeline handler only registers controls that are
related to the IPA. This patch prepares the rkisp1 pipeline handler to
register camera controls that are not related to the IPA.
Hence, introduce an additional ControlInfoMap for IPA controls. These
controls will be merged with the controls in the pipeline handler
(introduced subsequently) as part of updateControls(), and together
they will be registered during the registration of the camera.
This is similar to how the IPU3 pipeline handler handles its controls.
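A minimal sketch of the merge described above, assuming the map is
rebuilt in updateControls() (member names are illustrative):

    void PipelineHandlerRkISP1::updateControls(RkISP1CameraData *data)
    {
        ControlInfoMap::Map controls;

        /* Merge the IPA controls with the pipeline handler controls
         * (the latter are added by a subsequent patch). */
        controls.insert(data->ipaControls_.begin(), data->ipaControls_.end());

        data->controlInfo_ = ControlInfoMap(std::move(controls),
                                            controls::controls);
    }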
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
If the converter has cropping capability on its input, the interface
should support it by providing appropriate virtual functions. Provide
Feature::InputCrop in the Feature enumeration for this purpose.
Provide virtual setInputCrop() and inputCropBounds() interfaces so that
a converter can implement its own cropping functionality.
The V4L2M2MConverter implements these interfaces of the Converter
interface. Not all V4L2M2M converters have cropping ability on their
input, hence it needs to be discovered at construction time.
If the capability to crop is identified successfully, the cropping
bounds are determined at configure() time.
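A hedged usage sketch from a pipeline handler's point of view (the
exact signatures and the returned bounds type are assumptions based on
the description above):

    if (converter_->features() & Converter::Feature::InputCrop) {
        /* Bounds were discovered by the converter at configure() time. */
        auto [minCrop, maxCrop] = converter_->inputCropBounds(&stream);

        /* Clamp the requested crop to the bounds before applying it. */
        Rectangle crop = requestedCrop.boundedTo(maxCrop);
        converter_->setInputCrop(&stream, &crop);
    }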
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Stefan Klug <stefan.klug@ideasonboard.com>
|
|
This patch extends the converter interface with feature flags, which
enable each converter to expose the set of features it supports.
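A minimal sketch of such a flags-based interface (the InputCrop value
is added by a separate patch in this series; the exact layout shown
here is illustrative):

    class Converter
    {
    public:
        enum class Feature {
            None = 0,
            InputCrop = (1 << 0),
        };

        using Features = Flags<Feature>;

        Features features() const { return features_; }

    protected:
        Features features_;
    };

    LIBCAMERA_FLAGS_ENABLE_OPERATORS(Converter::Feature)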
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
OV7251 is a mono VGA global shutter sensor that has a mainline
driver and works with libcamera.
Add the supporting files for it. The tuning is copied from OV9281.
Signed-off-by: Dave Stevenson <dave.stevenson@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Add Python bindings for querying the vendor information from a ControlId.
While at it, update __repr__ so that it also prints the vendor.
Example usage:
>>> cid
libcamera.ControlId(20, libcamera.Saturation, ControlType.Float)
>>> cid.vendor
'libcamera'
Signed-off-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Now that the vendor of the control can be queried, print it in
--list-controls.
Example output:
$ cam -c 1 --list-controls
Using camera platform/vimc.0 Sensor B as cam0
Control: libcamera::Brightness: [-1.000000..1.000000]
Control: libcamera::Contrast: [0.000000..2.000000]
Control: libcamera::Saturation: [0.000000..2.000000]
Signed-off-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Add vendor/namespace information to ControlId, so that the vendor can
be queried from it. This is expected to be used by applications, either
simply to display the vendor or to group controls by vendor in a UI.
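A hedged C++ usage sketch (the accessor name vendor() is inferred from
the Python binding added later in this series):

    #include <iostream>

    #include <libcamera/control_ids.h>
    #include <libcamera/controls.h>

    using namespace libcamera;

    int main()
    {
        /* Print a control's vendor and name, e.g. "libcamera::Brightness". */
        std::cout << controls::Brightness.vendor() << "::"
                  << controls::Brightness.name() << std::endl;
        return 0;
    }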
Signed-off-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The delay values should be spelled 'delays', not 'dealys'. Correct the
spelling accordingly.
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
This patch allows obtaining a black level from a tuning file in addition
to the camera sensor helper. If both of them define a black level, the
one from the tuning file takes precedence.
The use cases are:
- A user wants to use a different black level, for whatever reason.
- There is a sensor without known gains but with a known black level.
Because a camera sensor helper cannot be defined without specifying
gains, the only way to specify the black level is using the tuning
file. Software ISP uses its fallback gain handling in such a case.
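A minimal sketch of how the tuning-file value could be read, assuming
the software ISP's black level algorithm gains an init() handler and a
'blackLevel' tuning key (names and types are assumptions):

    int BlackLevel::init([[maybe_unused]] IPAContext &context,
                         const YamlObject &tuningData)
    {
        /* A value from the tuning file takes precedence over the
         * CameraSensorHelper-provided one. */
        std::optional<int32_t> blackLevel =
            tuningData["blackLevel"].get<int32_t>();
        if (blackLevel.has_value())
            tuningLevel_ = blackLevel.value();

        return 0;
    }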
Signed-off-by: Milan Zamazal <mzamazal@redhat.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The black level in the software ISP is unconditionally guessed from the
obtained frames. CameraSensorHelper now optionally provides the black
level from camera specifications. Let's use that value if available.
If the black level is not available from the given CameraSensorHelper
instance, it's still determined on the fly.
Signed-off-by: Milan Zamazal <mzamazal@redhat.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Tested-by: Robert Mader <robert.mader@collabora.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Like the hardware pipelines do. Not clearing frameContexts can
otherwise trigger asserts like "Frame context for ... has been
overwritten by ..." when switching between cameras using the swISP,
e.g. on phones.
Clearing the configuration and active state will become more important
with upcoming changes such as getting the black level from the camera
helper.
Fixes: 04d171e6b299 ("libcamera: software_isp: Call Algorithm::queueRequest")
Signed-off-by: Robert Mader <robert.mader@collabora.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Milan Zamazal <mzamazal@redhat.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
In the Python bindings, ControlTypePoint is not handled in the
corresponding conversion functions. Add it.
While at it, sort the listings in the same order as the enum in
controls.h.
Fixes: 200d535ca85f ("libcamera: controls: Add ControlTypePoint")
Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Sometimes the ISP produces statistics with only a subset of the
statistic types being valid. It doesn't happen normally, but was
observed in the wild. Check for the RKISP1_CIF_ISP_STAT_AWB bit to
prevent using invalid or outdated data. As it doesn't happen regularly,
add an error message so we get notified when it happens.
For simpler code structure, the ColourTemperature metadata entry gets
written unconditionally and overwritten later if needed.
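A sketch of the guard described above (the statistics layout follows
the rkisp1 kernel UAPI; the surrounding code and log category are
illustrative):

    if (!(stats->meas_type & RKISP1_CIF_ISP_STAT_AWB)) {
        LOG(RkISP1Awb, Error) << "AWB data is missing in statistics";
        return;
    }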
Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Sometimes the ISP produces statistics with only a subset of the
statistic types being valid. It doesn't happen normally, but was
observed in the wild. Check for the RKISP1_CIF_ISP_STAT_AUTOEXP bit to
prevent using invalid or outdated data. As it doesn't happen regularly,
add an error message so we get notified when it happens.
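Analogously to the AWB case, a sketch of the check (log category and
message are illustrative):

    if (!(stats->meas_type & RKISP1_CIF_ISP_STAT_AUTOEXP)) {
        LOG(RkISP1Agc, Error) << "AE data is missing in statistics";
        return;
    }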
Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Fix typo in comment block formatting.
Signed-off-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Alphabetical order of forward declarations should be maintained;
hence, 'class V4L2Subdevice' should come after 'class
SensorConfiguration'. Fix it.
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
|
|
The 'found' flag was mistakenly understood to mean that a compatible
sensor format had been found when a sensor configuration is passed in.
However, 'found' relates to the stream configuration's pixel format,
i.e. whether or not it is supported by the RkISP1Path video node. It
does not relate to the sensor format, hence the check:
if (sensorConfig && !found)
doesn't make sense.
Replace the above check with:
if (sensorConfig && !rawFormat.isValid())
to ensure a sensor format compatible with the sensor configuration has
been set in rawFormat.
Fixes: 047d647452c4 ("libcamera: rkisp1: Integrate SensorConfiguration support")
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
|
|
User-provided sensor configuration is never meant to be altered,
hence pass SensorConfiguration by `const` reference in
RkISP1Path::validate().
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
|
|
Commit 761545407c76 ("pipeline: rkisp1: Filter out sensor sizes not
supported by the pipeline") introduced a mechanism to determine maximum
supported sensor resolution and filter out resolutions that cannot be
supported by the ISP.
However, it missed updating the raw stream configuration path, which
should have clamped the raw stream configuration size to the maximum
supported sensor resolution.
This patch fixes the above issue, which can be confirmed with an IMX283
on i.MX8MP:
From:
($) cam -c1 -srole=raw,width=5472,height=3072
INFO Camera camera.cpp:1197 configuring streams: (0) 5472x3648-SRGGB12
ERROR RkISP1 rkisp1_path.cpp:425 Unable to configure capture in 5472x3648-SRGGB12
Failed to configure camera
Failed to start camera session
To:
($) cam -c1 -srole=raw,width=5472,height=3072
INFO Camera camera.cpp:1197 configuring streams: (0) 4096x3072-SRGGB12
cam0: Capture until user interrupts by SIGINT
536.082380 (0.00 fps) cam0-stream0 seq: 000000 bytesused: 25165824
536.182378 (10.00 fps) cam0-stream0 seq: 000001 bytesused: 25165824
536.282375 (10.00 fps) cam0-stream0 seq: 000002 bytesused: 25165824
...
Fixes: 761545407c76 ("pipeline: rkisp1: Filter out sensor sizes not supported by the pipeline")
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Stefan Klug <stefan.klug@ideasonboard.com>
|
|
Integrate the RkISP1 pipeline handler to support sensor configuration
provided by applications through CameraConfiguration::sensorConfig.
The SensorConfiguration must be validated on both RkISP1Path instances
(mainPath and selfPath), so the parameters of RkISP1Path::validate()
have been updated to include sensorConfig.
When a sensor configuration is supplied, the camera configuration will
be marked as invalid if:
- the sensor configuration itself is invalid (SensorConfiguration::isValid()),
- the bit depth is not supported by the RkISP1 pipeline,
- the sensor configuration output size is larger than the maximum sensor
  size supported by the RkISP1 pipeline, or
- no matching output size is supplied by the sensor.
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Stefan Klug <stefan.klug@ideasonboard.com>
Tested-by: Stefan Klug <stefan.klug@ideasonboard.com>
|
|
In the case of an AWB search failure, the current algorithm logic will
return a point on the CT curve closest to where the search finished.
This can be quite undesirable. Instead, add some bias params to the AWB
algorithm which will direct the search to a set CT value in the case
where statistics become unreliable, causing the search to fail.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
A default CT of 4500K is used in a couple of places. Add a constexpr
value for the default CT value and use it instead.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The documentation of CameraConfiguration::validate() has one
misspelled "CameraConfiguration". Fix it.
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
A minor wording improvement suggested during refactoring review.
Signed-off-by: Milan Zamazal <mzamazal@redhat.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
In many cases a static string literal is used as the key. Thus
having the argument type be `const std::string &` is suboptimal,
since an `std::string` object needs to be constructed before
the call.
C++17 introduced `std::string_view`, with which the call can be done
with less overhead, as a `std::string_view` is non-owning and may be
passed entirely in registers.
So make `YamlObject::{contains,operator[]}` take the string keys
as `std::string_view`.
Unfortunately, that is not sufficient yet, because `std::map::find()`
takes a reference to `const key_type`, which would be `const std::string &`
in the case of `YamlParser`. However, with a transparent comparator
such as `std::less<>`, `std::map::find()` is able to accept any
object as the argument, and it forwards it to the comparator.
So make `YamlParser::dictionary_` use `std::less<>` as the comparator
to enable the use of `std::map::find()` with any type of argument.
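A self-contained sketch of the mechanism, independent of the YamlObject
internals:

    #include <map>
    #include <string>
    #include <string_view>

    /* std::less<> is a transparent comparator, so find() accepts any type
     * comparable with the key (here std::string_view) without constructing
     * a temporary std::string. */
    static std::map<std::string, int, std::less<>> dictionary;

    bool contains(std::string_view key)
    {
        return dictionary.find(key) != dictionary.end();
    }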
Signed-off-by: Barnabás Pőcze <pobrn@protonmail.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Allow Android HAL adapter to pass the face detection metadata control to
the pipeline and also send face detection metadata to the camera client
if the pipeline generates it.
Signed-off-by: Yudhistira Erlandinata <yerlandinata@chromium.org>
Co-developed-by: Harvey Yang <chenghaoyang@chromium.org>
Signed-off-by: Harvey Yang <chenghaoyang@chromium.org>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Harvey Yang <chenghaoyang@chromium.org>
|
|
Add FaceDetectMode, FaceDetectFaceRectangles, FaceDetectFaceScores,
and FaceDetectFaceLandmark. Also add ControlTypePoint for supporting
FaceDetectFaceLandmark.
Signed-off-by: Yudhistira Erlandinata <yerlandinata@chromium.org>
Co-developed-by: Becker Hsieh <beckerh@chromium.org>
Co-developed-by: Harvey Yang <chenghaoyang@chromium.org>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Harvey Yang <chenghaoyang@chromium.org>
|
|
Add a control_type<> specialization for libcamera::Point to allow
storing data of that type in a ControlValue instance.
The new control type will be used by controls introduced in the
next patches.
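A hedged usage sketch of what the specialization enables (assumed
usage, shown for illustration):

    #include <libcamera/controls.h>
    #include <libcamera/geometry.h>

    using namespace libcamera;

    /* Store a Point in a ControlValue and read it back. */
    ControlValue value(Point(320, 240));
    Point p = value.get<Point>();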
Signed-off-by: Yudhistira Erlandinata <yerlandinata@chromium.org>
Co-developed-by: Becker Hsieh <beckerh@chromium.org>
Co-developed-by: Harvey Yang <chenghaoyang@chromium.org>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Harvey Yang <chenghaoyang@chromium.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Add a constructor to the Rectangle class that accepts two points.
The constructed Rectangle spans all the space between the two given
points.
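An illustrative example of the new constructor:

    #include <libcamera/geometry.h>

    using namespace libcamera;

    Point a(100, 50);
    Point b(300, 250);

    /* Spans all the space between the two points: (100, 50)/200x200. */
    Rectangle rect(a, b);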
Signed-off-by: Yudhistira Erlandinata <yerlandinata@chromium.org>
Co-developed-by: Harvey Yang <chenghaoyang@chromium.org>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Harvey Yang <chenghaoyang@chromium.org>
|
|
The libcamera::Rectangle class allows defining rectangles regardless of
the orientation of the reference system in which a rectangle is used.
This implies that, depending on the reference system in use, the
rectangle's top-left corner, as defined by libcamera, doesn't correspond
to the visual top-left position.
   ^
   |
   |      -------------------
   |      ^                 | h
   |      |                 |
  y|      o---->-------------
   |            w
   ------------------------------->
 (0,0)                            x

 (0,0)                            x
   ------------------------------->
   |            w
  y|      o---->-------------
   |      |                 | h
   |      v                 |
   |      -------------------
   |
   V
Clarify that a Rectangle's top-left corner corresponds to the point
with the smaller x and y coordinates and that the horizontal and
vertical dimensions are obtained by positive increments along the
corresponding axes.
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Harvey Yang <chenghaoyang@chromium.org>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
|
|
As described in the coding style document, libcamera favours <cmath>
over <math.h>. Replace the last few occurrences of the latter with the
former and adapt the code accordingly.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
|
|
As described in the coding style document, libcamera favours <cmath>
over <math.h>. Replace the last few occurrences of the latter with the
former in the Raspberry Pi IPA and adapt the code accordingly. In some
cases, the <math.h> include is simply dropped as it isn't needed.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
|
|
As explained in the coding style document, usage of std::lround() is
preferred over lroundf() as it picks the correct function based on
the argument type. Replace calls to lroundf() with std::lround()
throughout libcamera.
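For illustration:

    #include <cmath>

    double gain = 2.37;

    /* std::lround() has overloads for float, double and long double, so
     * the correct one is selected from the argument type, while lroundf()
     * would silently convert the double argument to float first. */
    long rounded = std::lround(gain);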
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
|
|
As explained in the coding style document, usage of std::abs() is
preferred over abs() or fabs() as it picks the correct function based on
the argument type. Replace calls to abs() and fabs() with std::abs() in
the Raspberry Pi algorithms.
This fixes a reported warning from clang:
../src/ipa/rpi/controller/rpi/awb.cpp:508:6: error: using integer absolute value function 'abs' when argument is of floating point type [-Werror,-Wabsolute-value]
if (abs(denominator) > eps) {
^
../src/ipa/rpi/controller/rpi/awb.cpp:508:6: note: use function 'std::abs' instead
if (abs(denominator) > eps) {
^~~
std::abs
Reported-by: Maarten Lankhorst <dev@lankhorst.se>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
When constructing a ControlValue from an enum value, an explicit cast to
int32_t is needed as we use int32_t as the underlying type for all
enumerated controls. This makes users of ControlValue more complex. To
simplify them, specialize the control_type template for enum types, to
support construction of ControlValue directly without a cast.
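A usage sketch of what this enables (AwbAuto is an existing enumerated
control value; the 'before' line shows the cast the change removes):

    /* Before: an explicit cast to the underlying type was required. */
    ControlValue before(static_cast<int32_t>(controls::AwbAuto));

    /* After: ControlValue can be constructed directly from the enum. */
    ControlValue after(controls::AwbAuto);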
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The V4L2VideoDevice class implements setSelection() but not
getSelection(). The latter is useful for instance to query crop bounds.
Implement the function.
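A hedged usage sketch, mirroring the existing setSelection() signature
(the exact prototype and the log category are assumptions):

    Rectangle bounds;
    int ret = video_->getSelection(V4L2_SEL_TGT_CROP_BOUNDS, &bounds);
    if (ret < 0)
        LOG(MyPipeline, Error) << "Failed to query crop bounds";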
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
|
|
When DNG support is missing, the cam application ignores the .dng suffix
of the file pattern and writes raw binary data instead, without
notifying the user. This leads to confusion. Fix it by printing an error
message.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Milan Zamazal <mzamazal@redhat.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
|
|
Support for DNG capture is conditional on the availability of libtiff,
which is indicated by the HAVE_TIFF macro set by meson. The dng_writer.h
header then defines HAVE_DNG, which is used in a couple of places to
conditionally compile DNG-related code. Most of the other locations
where conditional compilation is required use HAVE_TIFF.
Using both HAVE_TIFF and HAVE_DNG is confusing. HAVE_DNG would be a
better name, but as the macro is defined in dng_writer.h, it would
require all files that need to test for DNG support to include that
header. Failure to include it (directly or indirectly) would result in
the code covered by the macro being silently disabled.
To avoid the confusion, standardize on using HAVE_TIFF everywhere and
drop HAVE_DNG.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Milan Zamazal <mzamazal@redhat.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
|
|
It is possible that the maximum sensor size (returned by
CameraSensor::resolution()) is not supported by the pipeline. In such
cases, a filter function is required to filter out camera sensor
resolutions that cannot be supported by the pipeline.
Introduce the filter function filterSensorResolution() in RkISP1Path
class and use it for validate() and generateConfiguration(). The
maximum sensor resolution is picked from that filtered set of
resolutions instead of CameraSensor::resolution().
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Tested-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
|
|
The minResolution_ and maxResolution_ limits are dynamically queried
in populateFormats() from the RkISP1Path video node. Therefore,
initializing these limits with the resizer limits in the constructor is
unnecessary.
This change allows us to remove the hard-coded max/min resolution limits
of the resizer from RkISP1Path, simplifying its constructor further.
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
Which is not only what many other pipeline handlers use, but also a
good lower limit when dealing with DRM and similar APIs. Even Mesa's EGL
and Vulkan WSI implementations use it, for the reason outlined in mesa
commit 992a2dbba80aba35efe83202e1013bd6143f0dba:
> When the compositor is directly scanning out from the application's buffer it
> may end up holding on to three buffers. These are the one that is is currently
> scanning out from, one that has been given to DRM as the next buffer to flip
> to, and one that has been attached and will be given to DRM as soon as the
> previous flip completes. When we attach a fourth buffer to the compositor it
> should replace that third buffer so we should get a release event immediately
> after that. This patch therefore also changes the number of buffer slots to 4
> so that we can accomodate that situation.
Given the popularity of this buffer count, the bump should be unlikely
to cause problems. At the same time it may help with performance or
even work around glitches.
The previous number was introduced in commit
a8964c28c80fb520ee3c7b10143371081d41405a without mentioning a specific
reason that would argue against the change at hand.
Signed-off-by: Robert Mader <robert.mader@collabora.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The black level is likely to get updated, if ever, only after exposure
or gain changes. Don't compute its possible updates if exposure and
gain are unchanged.
It's probably not worth trying to implement something more
sophisticated. Better to spend the effort on supporting tuning files.
Signed-off-by: Milan Zamazal <mzamazal@redhat.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
This is the last step to fully convert software ISP to Algorithm-based
processing.
The newly introduced frameContext.sensor parameters are set, and the
updated code moved, before calling Algorithm::process() to have the
values up-to-date in stats processing.
Resolves software ISP TODO #10.
Signed-off-by: Milan Zamazal <mzamazal@redhat.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Use the standard libcamera mechanism to report the "current" controls
rather than delaying updates by counting from the last update.
A problem is that with software ISP we cannot be sure about the sensor
delay. The original implementation simply skips exposure updates for 2
frames, which should be enough in most cases. After this change, we
assume the delay being exactly 2 frames, which may or may not be correct
and may work with outdated values if the delay is shorter.
According to Kieran, the same parts are also wrong in the
IPU3/RKISP1/Mali pipelines and only RPi has this correct. We need to
fix this, by correctly specifying the gains in the libipa camera sensor
helpers. The sooner the better, because this change could introduce a
risk of increased oscillations.
This patch also prepares moving exposure+gain to an algorithm module
where the original delay mechanism would be a (possibly unnecessary)
complication.
Resolves software ISP TODO #11 + #12.
Signed-off-by: Milan Zamazal <mzamazal@redhat.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
It's more natural to represent color gains as floating point numbers
rather than using a particular pixel-related representation.
double is used rather than float because it's a more common floating
point type in libcamera algorithms. Otherwise there is no obvious
reason to select one over the other here.
The constructed color tables still use integer representation for
efficiency.
Black level still uses pixel (integer) values, for consistency with
other libcamera parts.
Signed-off-by: Milan Zamazal <mzamazal@redhat.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
After black level handling has been moved to an algorithm module, white
balance and the construction of color tables can be moved to algorithm
modules too.
This time, the moved code is split between the stats processing and
parameter construction methods. It is also split into two algorithm
modules:
- White balance computation.
- Gamma table computation and color lookup table construction. While
  this applies the color gains computed by the white balance algorithm,
  it is not directly related to white balance, and we may want to
  modify the color lookup tables in the future according to parameters
  other than just gamma and white balance gains.
Gamma table computation and color lookup table construction could be
split into separate algorithms too, but there is no pressing need for
that now, so they are kept together for simplicity.
This is the only part of the software ISP algorithms that sets the
parameters, so emitting setIspParams can be moved to the prepare()
method.
A more natural representation of the gains (and black level) would be
floating point numbers. This is not done here in order to minimize
changes in code movements. It will be addressed in a followup patch.
Signed-off-by: Milan Zamazal <mzamazal@redhat.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Acked-by: Umang Jain <umang.jain@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The black level determination, already present as a separate class, can
be moved to the prepared Algorithm processing structure. It is the
first of the current software ISP algorithms that is called in the stats
processing sequence, which means it is also the first one that should be
moved to the new structure. Stats processing starts with calling
Algorithm-based processing so the call order of the algorithms is
retained.
Movement of this algorithm is relatively straightforward because all it
does is process stats.
The comment about getting black level from the tuning files is dropped.
The black level will be taken from CameraSensorHelper if available and
that will be implemented in one of the followup patches.
Black level is now recomputed on each stats processing. In a future
patch, after DelayedControls are used, this will be changed to recompute
the black level only after exposure/gain changes.
The black level depends on the sensor used, so the computed value can
be reused in followup capture sessions with the same sensor. Thus it is
sufficient to (re)set the initial value in BlackLevel::init().
Signed-off-by: Milan Zamazal <mzamazal@redhat.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Daniel Scally <dan.scally@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
This patch adds an Algorithm::process() call for the defined
algorithms. This is preparation only, since there are currently no
Algorithm-based algorithms defined.
As the software ISP currently doesn't produce any metadata, a dummy and
unused metadata instance is created to satisfy the Algorithm::process()
API. This should be changed in the future.
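A sketch of the call added by this change (the libipa Algorithm
interface is assumed; the metadata object is the dummy one mentioned
above, and member names are illustrative):

    ControlList metadata(controls::controls);

    for (auto const &algo : algorithms())
        algo->process(context_, frame, frameContext, &stats, metadata);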
Signed-off-by: Milan Zamazal <mzamazal@redhat.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Daniel Scally <dan.scally@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
This patch adds an Algorithm::prepare() call for the defined
algorithms. This is preparation only, since there are currently no
Algorithm-based algorithms defined.
Signed-off-by: Milan Zamazal <mzamazal@redhat.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Daniel Scally <dan.scally@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|