|
This commit implements renegotiation of the camera configuration and
source pad caps. A renegotiation can happen when a downstream element
decides to change caps or the pipeline is dynamically changed.
To handle a renegotiation the GST_FLOW_NOT_NEGOTIATED return value has
to be handled in GstLibcameraSrcState::processRequest(). Otherwise the
default would be to print an error and stop streaming.
To achieve this in a clean way, the if statement is altered into a switch
statement which now also has a case for GST_FLOW_NOT_NEGOTIATED. In the
case of GST_FLOW_NOT_NEGOTIATED every source pad is checked for the
reconfiguration flag with gst_pad_needs_reconfigure() which does not
clear this flag. If at least one pad requested a reconfiguration the
function returns without an error and the renegotiation will happen
later in the running task. If no pad requested a reconfiguration then
the function will return with an error.
In gst_libcamera_src_task_run() the source pads are checked for the
reconfigure flag by calling gst_pad_check_reconfigure() and if one pad
returns true and the caps are no longer sufficient, then the
negotiation is triggered. It is fine to trigger the negotiation after
only a single pad returns true for gst_pad_check_reconfigure() because
the reconfigure flags are cleared in the gst_libcamera_src_negotiate()
function.
If any pad requested a reconfiguration the following will happen:
1. The camera is stopped because changing the configuration may not
happen while running.
2. The completedRequests queue will be cleared by calling
GstLibcameraSrcState::clearRequests() because the completed buffers
have the wrong configuration.
3. The new caps are negotiated by calling gst_libcamera_src_negotiate().
When the negotiation fails streaming will stop.
4. The camera is started again.
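A hedged sketch of this reconfiguration path in the running task (the
GstLibcameraSrcState member names and the helper function are assumptions,
not the exact libcamera code):

  static void reconfigure_if_needed(GstLibcameraSrc *self,
                                    GstLibcameraSrcState *state)
  {
          bool reconfigure = false;

          /* gst_pad_check_reconfigure() clears the pad's reconfigure flag. */
          for (GstPad *srcpad : state->srcpads_)
                  if (gst_pad_check_reconfigure(srcpad))
                          reconfigure = true;

          if (!reconfigure)
                  return;

          state->cam_->stop();                      /* 1. stop the camera */
          state->clearRequests();                   /* 2. drop stale buffers */
          if (!gst_libcamera_src_negotiate(self)) { /* 3. negotiate new caps */
                  GST_ELEMENT_FLOW_ERROR(self, GST_FLOW_NOT_NEGOTIATED);
                  gst_task_stop(self->task);
                  return;
          }
          state->cam_->start();                     /* 4. restart the camera */
  }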
Signed-off-by: Jaslo Ziska <jaslo@ziska.de>
Reviewed-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
Tested-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Add a clearRequests() function to GstLibcameraSrcState which clears the
GstLibcameraSrcState::completedRequests_ queue.
Use this new function in gst_libcamera_src_task_leave() instead of doing
it manually.
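A minimal sketch of the helper (locking is elided; the real code presumably
guards completedRequests_ with the state's existing lock):

  void GstLibcameraSrcState::clearRequests()
  {
          completedRequests_ = {};
  }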
Signed-off-by: Jaslo Ziska <jaslo@ziska.de>
Reviewed-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
Tested-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Move the code which negotiates all the source pad caps into a separate
function called gst_libcamera_src_negotiate(). This function returns
false when the negotiation fails and true otherwise.
Use this function instead of doing the negotiation manually in
gst_libcamera_src_task_enter() and remove the now redundant error
handling code accordingly.
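A usage sketch of the refactored entry path (the error handling shown here
is an assumption):

  /* In gst_libcamera_src_task_enter(), once the source pads are set up. */
  if (!gst_libcamera_src_negotiate(self)) {
          GST_ELEMENT_ERROR(self, CORE, NEGOTIATION,
                            ("Failed to negotiate caps"), (nullptr));
          gst_task_stop(self->task);
          return;
  }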
Signed-off-by: Jaslo Ziska <jaslo@ziska.de>
Reviewed-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
Tested-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
We make a few small improvements to the code:
* The arrayToSet method no longer overwrites the end of the array when
the input table contains too many values. A supplied table must now
contain exactly the expected number of elements (see the sketch after
this list).
* The arrayToSet and setStrength member functions are turned into
static functions. (There may be a different public setStrength
member function in future.)
* When no tables at all are given, the configuration is flagged as
being disabled, so that we can avoid copying tables full of zeroes
around. As a consequence, the pipeline handler too will disable this
hardware block rather than run it needlessly. (Note that the tuning
tool will put in a completely empty "rpi.cac" block if no CAC tuning
images are supplied, benefiting from this behaviour.)
* The initialise member function is removed as it does nothing.
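A bounds-checking sketch of arrayToSet, as referenced in the first item
above (the YamlObject-based signature is an assumption):

  static int arrayToSet(const libcamera::YamlObject &params,
                        std::vector<double> &inputArray)
  {
          size_t num = 0;

          for (const auto &p : params.asList()) {
                  /* Refuse tables with more entries than the array holds. */
                  if (num == inputArray.size())
                          return -1;
                  inputArray[num++] = p.get<double>(0);
          }

          /* Require exactly the expected number of elements. */
          return num == inputArray.size() ? 0 : -1;
  }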
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The recent change where time-filtering is done before sorting out the
digital gain means that the target exposure without digital gain is no
longer set, breaking the 'AeLocked' calculation.
We can use the regular (full) target exposure instead.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Fixes: 84b6327789fc ("ipa: rpi: agc: Filter exposures before dealing with digital gain")
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Acked-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Label draft controls and properties through the "draft" vendor tag
and deprecate the existing "draft: true" mechanism. This uses the new
vendor tags mechanism to place draft controls in the same
libcamera::controls::draft namespace and provide a defined control id
range for these controls. This requires moving all draft controls from
control_ids.yaml to control_ids_draft.yaml.
One breaking change in this commit is that draft control ids also move
to the libcamera::controls::draft namespace from the existing
libcamera::controls namespace. This is desirable to avoid API breakages
when adding new libcamera controls. So, for example, the use of
controls::NOISE_REDUCTION_MODE will need to be replaced with
controls::draft::NOISE_REDUCTION_MODE.
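For example, a caller setting the draft noise reduction control changes
along these lines (a sketch; the Fast enumerator name is an assumption):

  #include <libcamera/control_ids.h>
  #include <libcamera/controls.h>

  using namespace libcamera;

  void setNoiseReduction(ControlList &ctrls)
  {
          /* Previously: controls::NoiseReductionMode / NOISE_REDUCTION_MODE */
          ctrls.set(controls::draft::NoiseReductionMode,
                    controls::draft::NoiseReductionModeFast);
  }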
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Add a new control_ranges.yaml file that is used to reserve control id
ranges/offsets for libcamera and vendor specific controls. This file is
used by the gen-controls.py script to generate control id values for
each control.
Draft controls now have a separate range from core libcamera controls,
breaking the existing numbering behaviour.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
|
|
Add support for using separate YAML files for controls and properties
generation. The mapping of vendor/pipeline handler to control file is
done through the controls_map variable in include/libcamera/meson.build.
This simplifies management of vendor control definitions and avoids
possible merge conflicts when changing the control_ids.yaml file for
core and draft controls. With this change, libcamera and draft controls
and properties files are given the 'libcamera' vendor tag.
In this change, we also rename control_ids.yaml -> control_ids_core.yaml
and property_ids.yaml -> property_ids_core.yaml to designate these as
core libcamera controls.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The template file for gen-controls.py and gen-py-controls.py is now
passed through the '-t' or '--template' command line argument instead
of being a positional argument. This will allow multiple input files to
be provided to the scripts in a future commit.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Add support for vendor-specific controls and properties to libcamera.
The controls/properties are defined by a "vendor" tag in the YAML
control description file, for example:
vendor: rpi
controls:
  - MyExampleControl:
      type: string
      description: |
        Test for libcamera vendor-specific controls.
This will now generate a control id in the libcamera::controls::rpi
namespace, ensuring no id conflict between different vendors, core or
draft libcamera controls. Similarly, a ControlIdMap control is generated
in the libcamera::controls::rpi namespace.
A #define LIBCAMERA_HAS_RPI_VENDOR_CONTROLS is also generated to allow
applications to conditionally compile code if the specific vendor
controls are present. For the python bindings, the control is available
with libcamera.controls.rpi.MyExampleControl. The above controls
example applies similarly to properties.
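An application-side sketch for the example above (MyExampleControl is the
hypothetical control defined in the YAML snippet, not a real libcamera
control):

  #include <string>

  #include <libcamera/control_ids.h>
  #include <libcamera/controls.h>

  void setVendorControl(libcamera::ControlList &ctrls)
  {
  #ifdef LIBCAMERA_HAS_RPI_VENDOR_CONTROLS
          /* Only compiled when the rpi vendor controls are available. */
          ctrls.set(libcamera::controls::rpi::MyExampleControl,
                    std::string("example"));
  #endif
  }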
Existing libcamera controls defined in control_ids.yaml are given the
"libcamera" vendor tag.
A new --mode flag is added to gen-controls.py to specify the mode of
operation, either 'controls' or 'properties' to allow the code generator
to correctly set the #define string.
As a drive-by, sort and redefine the output command line argument in
gen-controls.py and gen-py-controls.py to ('--output', '-o') for
consistency.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Fix the -Wdeprecated-this-capture error when building with C++20 by
explicitly naming this in the capture.
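A minimal, self-contained illustration of the fix (not the actual
libcamera hunk):

  #include <functional>
  #include <iostream>

  struct Widget {
          int value_ = 42;

          std::function<void()> makeCallback()
          {
                  /* Was [=] { ... }: C++20 deprecates capturing 'this'
                   * implicitly through [=], so name it explicitly. */
                  return [=, this] { std::cout << value_ << "\n"; };
          }
  };

  int main()
  {
          Widget w;
          w.makeCallback()();
          return 0;
  }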
Signed-off-by: Brett Brotherton <bbrotherton@google.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Commit fd84180d7a09 ("gstreamer: Implement element EOS handling") has
introduced a compilation warning with clang:
../../src/gstreamer/gstlibcamerasrc.cpp:768:23: error: unused variable 'oldEvent' [-Werror,-Wunused-variable]
g_autoptr(GstEvent) oldEvent = self->pending_eos.exchange(event);
^
This seems to be a false positive, but nonetheless breaks the build. Fix
it.
Fixes: fd84180d7a09 ("gstreamer: Implement element EOS handling")
Signed-off-by: Jaslo Ziska <jaslo@ziska.de>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
The algorithm computes R/G and B/G colour ratio statistics, which we
should not allow to go to zero, as there would then clearly be no gain
that could be applied to R or B to equalise them. Instead, flag such
regions as having "insufficient data" in the normal manner.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
This commit implements EOS handling for events sent to the libcamerasrc
element by the send_event method (which can happen when pressing
Ctrl-C while running gst-launch-1.0 -e, see below). EOS events from
downstream elements returning GST_FLOW_EOS are not considered here.
To achieve this, add a function for the send_event method which handles
the GST_EVENT_EOS event. This function stores the received event in an
atomic member; the running task then pushes this EOS event to all source
pads.
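A hedged sketch of the handler (the pending_eos member name matches the
snippet quoted in the clang warning fix earlier in this log; the rest is
illustrative):

  static gboolean gst_libcamera_src_send_event(GstElement *element,
                                               GstEvent *event)
  {
          GstLibcameraSrc *self = GST_LIBCAMERA_SRC(element);
          gboolean ret = FALSE;

          switch (GST_EVENT_TYPE(event)) {
          case GST_EVENT_EOS: {
                  /* Store the event atomically; the running task will push
                   * it to every source pad. */
                  GstEvent *oldEvent = self->pending_eos.exchange(event);
                  gst_clear_event(&oldEvent);
                  ret = TRUE;
                  break;
          }
          default:
                  gst_event_unref(event);
                  break;
          }

          return ret;
  }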
Also set the GST_ELEMENT_FLAG_SOURCE flag to identify libcamerasrc as a
source element which enables it to receive EOS events sent to the
(pipeline) bin containing it. This in turn enables libcamerasrc
to receive EOS events, for example, from gst-launch-1.0 with
the -e (--eos-on-shutdown) flag applied.
Bug: https://bugs.libcamera.org/show_bug.cgi?id=91
Signed-off-by: Jaslo Ziska <jaslo@ziska.de>
Acked-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
Reviewed-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
Add a bunch of logging messages that have come in handy when debugging
various issues with the pipeline handler code.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Correct a crash in CameraSensor::init() when trying to set the
V4L2_CID_HBLANK control on a sensor that does not implement it. The
HBLANK control not being mandatory for non-RAW sensors, it can happen
that the sensor does not expose this control. Check the availability of
the control prior to usage in order to avoid the crash.
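A sketch of the guard (the exact lookup in CameraSensor::init() is an
assumption):

  /* Only touch HBLANK when the sensor actually exposes the control. */
  const ControlInfoMap &ctrls = subdev_->controls();
  if (ctrls.find(V4L2_CID_HBLANK) != ctrls.end()) {
          /* ... query/set V4L2_CID_HBLANK as before ... */
  }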
Signed-off-by: Alain Volmat <alain.volmat@foss.st.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
If no ISP output streams are configured, the ISP output count is skipped
for the low res stream, which causes the drop frame logic to fail
because of a count mismatch. This in turn stops any requests from
completing correctly.
Fix this by ensuring the low res output is counted correctly when no ISP
output streams are configured.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The entityControls variable is unused, remove it.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
We add an HdrMode control (to enable and disable HDR processing)
and an HdrChannel, which indicates what kind of HDR frame (short, long
or medium) has just arrived.
Currently the HdrMode supports the following values:
* Off - no HDR processing at all.
* MultiExposureUnmerged - frames at multiple different exposures are
produced, but not merged together. They are returned "as is".
* MultiExposure - frames at multiple different exposures are merged
to create HDR images.
* SingleExposure - multiple frames all at the same exposure are
merged to create HDR images.
* Night - multiple frames will be combined to create "night mode"
images.
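A usage sketch (the control and enumerator names follow the list above;
their exact namespace is an assumption):

  #include <libcamera/control_ids.h>
  #include <libcamera/controls.h>

  using namespace libcamera;

  void configureHdr(ControlList &ctrls, const ControlList &metadata)
  {
          /* Request single-exposure HDR processing. */
          ctrls.set(controls::HdrMode, controls::HdrModeSingleExposure);

          /* Each completed frame reports which HDR channel it came from. */
          const auto channel = metadata.get(controls::HdrChannel);
          if (channel)
                  (void)*channel; /* e.g. short, medium or long */
  }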
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
We need to be able to do things like enable/disable AGC for all the
channels, so most of the AGC controls are updated to be applied to all
channels. There are a couple of exceptions, such as setting explicit
shutter/gain values, which apply only to channel 0.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
AWB writes this out during prepare, so we may as well read it in AGC
prepare as well. Reading it in process is wrong on the PiSP platform
because process runs before prepare, so the AWB status won't be there
(on vc4 it made no difference).
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Since noise control handling differs between the VC4 and PiSP IPAs,
move the current denoise control handler from ipa base into the vc4 IPA
derived class.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
"Fast desaturation" is a technique that can help the AGC algorithm to
desaturate images more quickly when they are very
over-exposed. However, it uses digital gain to do this which can
confuse our HDR techniques. Therefore make it optional.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
This was being re-read in order to determine what LSC gains had been
applied. We can just retrieve these numbers from the prevAsyncResults_
instead.
This will also enable other future algorithms to manipulate the LSC
tables in the alsc.status, without it breaking the core ALSC algorithm
here.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
We can perform some of the local contrast adjustment using global
gains in the LSC table. We can vary the amount of gain according to
the measured brightness of that image region.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Now that the transformFromOrientation() function isn't used outside of
transform.cpp, make it static to remove it from the public API.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
|
|
The transformToOrientation() function is called from
Orientation operator*(const Orientation &o, const Transform &t);
only. Fold the code into the caller and drop the transformToOrientation()
function, removing what can be considered an ill-defined operation from
the API.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
|
|
The cached rotationTransform_ value is used in computeTransform() only,
to compute the mounting orientation. Cache the mounting orientation
instead, removing the need for the intermediate conversion of the
rotation to a transform.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
|
|
Add a '--orientation|-o' option to the Python version of the cam test
application to set an orientation to the image stream.
Supported values are:
- rot0: no rotation
- rot180: rotate 180 degrees
- flip: vertical flip
- mirror: horizontal flip
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Add a '--orientation|-o' option to the cam test application to set
an orientation to the image stream.
Supported values are the ones obtained by applying flips to the camera
sensor:
- rot0: no rotation
- rot180: rotate 180 degrees
- flip: vertical flip
- mirror: horizontal flip
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Replace the usage of CameraConfiguration::transform with the newly
introduced CameraConfiguration::orientation.
Rework and rename CameraSensor::validateTransform(transform) to
CameraSensor::computeTransform(orientation), which, given the desired
image orientation, computes the Transform that pipeline handlers should
apply to the sensor to obtain it.
Port all pipeline handlers to use the newly introduced function.
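A usage sketch from a pipeline handler's point of view (passing the
orientation by pointer so the sensor can adjust it to an achievable value
is an assumption):

  /* In the pipeline handler's configure() path. */
  Transform transform = sensor->computeTransform(&config->orientation);
  /* Program the resulting horizontal/vertical flips on the sensor. */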
This commit breaks existing applications as it removes the public
CameraConfiguration::transform in favour of
CameraConfiguration::orientation.
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Define an enumeration type for Orientation and expose the
CameraConfiguration::orientation property in place of
CameraConfiguration::transform.
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Add two operations that allow combining a Transform with an Orientation.
- Transform operator/(const Orientation &o1, const Orientation &o2)
returns the Transform that needs to be applied to o2 to obtain o1
- Orientation operator*(const Orientation &o, const Transform &t)
applies a Transform to an Orientation and returns the combination of
the two
These two operations allow applications to use Transforms to
manipulate the Orientation inside the CameraConfiguration, if they
wish.
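For example (a sketch using the two operators described above):

  #include <libcamera/orientation.h>
  #include <libcamera/transform.h>

  using namespace libcamera;

  void example()
  {
          Orientation o1 = Orientation::Rotate180;
          Orientation o2 = Orientation::Rotate0Mirror;

          Transform t = o1 / o2;     /* Transform that takes o2 to o1 */
          Orientation back = o2 * t; /* back == o1 */
          (void)back;
  }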
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The current definition of operator*(Transform t1, Transform t0) follows
the function composition notion, where t0 is applied first then t1 is
applied last.
In order to introduce operator*(Orientation, Transform) where a
Transform is applied on top of an Orientation, invert the operand order
of operator*(Transform, Transform) so that usage of operator* with both
Orientation and Transform can be made associative.
For example:
Orientation o;
Transform t = t1 * t2;
Orientation o1 = o * t
               = o * (t1 * t2) = (o * t1) * t2 = o * t1 * t2
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Add two helper functions to the transform.cpp file that allow
converting a Transform to and from an Orientation.
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Specify in the documentation that properties::Rotation specifies the
mounting rotation of the camera module. This avoids confusion with the
image orientation which is instead expressed by
CameraConfiguration::orientation.
For this reason, do not compensate the Rotation property when
initializing the CameraSensor class but report the value of
V4L2_CID_CAMERA_SENSOR_ROTATION or 0 if the control is not available.
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Add figures in Documentation/rotation/ to document the plane
transformations defined by the Orientation enumeration.
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Introduce the Orientation enumeration which describes the possible 2D
transformations that can be applied to an image using two basic plane
transformations.
Add to the CameraConfiguration class a new member 'orientation' which is
used to specify the image orientation in the memory buffers delivered to
applications.
The enumeration values follow the ones defined by the EXIF specification at
revision 2.32, Tag 274 'orientation'.
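A sketch of the enumeration (the ordering follows EXIF tag 274; the exact
enumerator spellings are assumptions):

  enum class Orientation {
          Rotate0 = 1,
          Rotate0Mirror,
          Rotate180,
          Rotate180Mirror,
          Rotate90Mirror,
          Rotate270,
          Rotate270Mirror,
          Rotate90,
  };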
The newly introduced field is meant to replace
CameraConfiguration::transform, which is not removed yet so as not to
break compilation.
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The rotationTransform_ depends on a V4L2 control whose value does not
change for the whole lifetime of the camera.
Instead of re-calculating it every time the camera is configured, cache
it at properties initialization time.
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Qt supports RGB565 natively; add support for the format by mapping
the libcamera format to Qt's representation of it.
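A sketch of the mapping (the helper name is hypothetical;
QImage::Format_RGB16 is Qt's native RGB565 representation):

  #include <QImage>

  #include <libcamera/formats.h>
  #include <libcamera/pixel_format.h>

  QImage::Format nativeQtFormat(const libcamera::PixelFormat &format)
  {
          if (format == libcamera::formats::RGB565)
                  return QImage::Format_RGB16;

          return QImage::Format_Invalid;
  }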
Signed-off-by: Daniel Scally <dan.scally@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
If the pipeline runs out of embedded data buffers, then it will pass
the frame to the IPA without the metadata. The IPA then has to use the
delayed controls as inputs to the algorithms. This can cause problems
with the subsequent algorithms if the sensor did not action the
controls, especially with the autofocus as that doesn't have controls
which can be passed in lieu of the metadata.
Reduce the likelihood of this by increasing the number of embedded data
buffers, as they are small so a generous number can be allocated.
Signed-off-by: William Vinnicombe <william.vinnicombe@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Whenever the AGC active channels are changed, start with the first
channel listed. This allows applications to rely on a particular channel
being generated first. For example, multi-exposure HDR always wants the
short channel first.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The code was inadvertently overwriting the caller's StatisticsPtr,
meaning that subsequent algorithms would get the wrong image
statistics when AGC channels changed.
This could be fixed using std::ref, though I find the C-style pointer
fix easier to understand!
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Some use cases may require stronger, or different, denoise settings than
others. For example, the way frames are accumulated during single
exposure HDR means that we may want stronger denoise.
This commit adds such support for different configurations that can be
defined in the tuning file.
Older tuning files, or files where there is only a single
configuration, load only the "normal" denoise configuration.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The enableCe() function enables or disables adaptive contrast
enhancement and the restoreCe() function sets it back to its normal
state (which is what was read from the tuning file).
In future, algorithms like HDR might want to take over tonemapping
functions, so any dynamic behaviour here would upset them.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Add a small "stable region" parameter (defaulting to 2%) within which
the AGC will not adjust the exposure it requests. It allows
applications to configure the AGC to avoid continual micro-adjustments
of exposure values if they are somehow sensitive to it.
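An illustrative check only (the names and the exact comparison are made
up, not the Raspberry Pi AGC code):

  #include <cmath>

  /* Skip exposure updates while the new target stays within the stable
   * region (default 2%) of the current total exposure. */
  bool withinStableRegion(double targetExposure, double currentExposure,
                          double stableRegion = 0.02)
  {
          return std::abs(targetExposure - currentExposure) <
                 stableRegion * currentExposure;
  }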
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
This allows them to be accessed by the pipeline handlers when needed.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Move the handling of Bayer order changes due to flips to run before
platformValidate(). This removes the need for this code to be split
between platformValidate() and validate() as it is right now.
Also add some validation to ensure the vc4 pipeline handler only
supports CSI2 packing or no packing.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Record if additional software downscaling is needed for a particular
stream in the RPi::Stream class. Additional software downscaling may be
needed if the user-requested downscale factor is greater than what the
ISP hardware is capable of.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Add new CAC, HDR, Saturation and Tonemapping algorithms.
Add a new Denoise algorithm that handles spatial/temporal/colour denoise
through one interface. With this change, the old SDN algorithm is now
considered deprecated and a warning message will be displayed if it is
enabled.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|