The RkISP1 actually supports two modes for colour means, RGB and YCbCr. The
variables where the means are stored are named identically regardless of
the colour means mode that has been selected.
Since the gains are computed in RGB mode, a conversion needs to be done
when the mode is YCbCr, and is unnecessary when RGB mode is selected.
Add support for the RGB means mode by checking at runtime which mode is
currently selected. The default is still set to YCbCr mode for now.
Cc: Quentin Schulz <foss+libcamera@0leil.net>
Signed-off-by: Quentin Schulz <quentin.schulz@theobroma-systems.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
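As an illustration of the conversion that is only needed in YCbCr mode, a
minimal sketch follows, assuming full-range BT.601 coefficients; the function
and variable names are made up for the example and are not the module's code.

struct RGBMeans {
    double r, g, b;
};

/* Convert YCbCr means back to RGB means so the gain computation can
 * always operate on RGB values. Full-range BT.601 coefficients assumed. */
static RGBMeans meansFromYCbCr(double yMean, double cbMean, double crMean)
{
    /* Centre the chroma means around zero (8-bit full range). */
    const double cb = cbMean - 128.0;
    const double cr = crMean - 128.0;

    RGBMeans rgb;
    rgb.r = yMean + 1.402 * cr;
    rgb.g = yMean - 0.344136 * cb - 0.714136 * cr;
    rgb.b = yMean + 1.772 * cb;

    return rgb;
}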
|
|
The color temperature doesn't need floating point precision, and is
calculated by Awb::estimateCCT() as an unsigned integer. Store it with
the same data type in the frame context.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
The AWB statistics are computed after the ISP applies the colour gains.
This means that the red, green and blue means do not match the data
coming directly from the sensor, but are multiplied by the colour gains
that were used for the frame on which the statistics have been computed.
The AWB algorithm needs to take this into account when calculating the
colour gains for the next frame. Do so by dividing the means by the
gains that were applied to the frame, retrieved from the frame context.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
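A minimal sketch of that compensation; the member names (redMean,
frameContext.awb.gains, etc.) are assumptions used for illustration only.

/* Undo the colour gains that were applied to the frame the statistics
 * were computed on, so AWB sees sensor-referred means. Names assumed. */
redMean   /= frameContext.awb.gains.red;
greenMean /= frameContext.awb.gains.green;
blueMean  /= frameContext.awb.gains.blue;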
|
|
Now that data used by algorithms has been partitioned between the active
state and frame context, we have a better view of the role of each of
those structures. Document them appropriately.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
Rework the algorithm's usage of the active state, to store the value of
controls for the last queued request in the queueRequest() function, and
store a copy of the values in the corresponding frame context. The
latter is used in the prepare() function to populate the ISP parameters
with values corresponding to the right frame.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
Rework the algorithm's usage of the active state, to store the value of
controls for the last queued request in the queueRequest() function, and
store a copy of the values in the corresponding frame context. The
latter is used in the prepare() function to populate the ISP parameters
with values corresponding to the right frame.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
Rework the algorithm's usage of the active state, to store the value of
controls for the last queued request in the queueRequest() function, and
store a copy of the values in the corresponding frame context. The
latter is used in the prepare() function to populate the ISP parameters
with values corresponding to the right frame.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
Rework the algorithm's usage of the active state and frame context to
store data in the right place.
The active state stores two distinct categories of information:
- The consolidated value of all algorithm controls. Requests passed to
the queueRequest() function store values for controls that the
application wants to modify for that particular frame, and the
queueRequest() function updates the active state with those values.
The active state thus contains a consolidated view of the value of all
controls handled by the algorithm.
- The value of parameters computed by the algorithm when running in auto
mode. Algorithms running in auto mode compute new parameters every
time statistics buffers are received (either synchronously, or
possibly in a background thread). The latest computed value of those
parameters is stored in the active state in the process() function.
The frame context also stores two categories of information:
- The value of the controls to be applied to the frame. These values are
typically set in the queueRequest() function, from the consolidated
control values stored in the active state. The frame context thus
stores values for all controls related to the algorithm, not limited
to the controls specified in the corresponding request, but
consolidated from all requests that have been queued so far.
For controls that can be specified manually or computed by the
algorithm depending on the operation mode (such as the colour gains),
the control value will be stored in the frame context in
queueRequest() only when operating in manual mode. When operating in
auto mode, the values are computed by the algorithm and stored in the
frame context in prepare(), just before being stored in the ISP
parameters buffer.
The queueRequest() function can also store ancillary data in the frame
context, such as flags to indicate if (and what) control values have
changed compared to the previous request.
- Status information computed by the algorithm for a frame. For
instance, the colour temperature estimated by the algorithm from ISP
statistics calculated on a frame is stored in the frame context for
that frame in the process() function.
The active state and frame context thus both contain identical members
for most control values, but store values that have a different meaning.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
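To make the split concrete, here is a small self-contained sketch using
colour gains as the example; the structure layouts and names are assumptions
for illustration, not the actual rkisp1 definitions.

struct Gains {
    double red = 1.0;
    double blue = 1.0;
};

struct ActiveState {
    bool autoEnabled = true;
    Gains manualGains;  /* consolidated from all queued requests */
    Gains autoGains;    /* latest values computed in process() */
};

struct FrameContext {
    Gains gains;        /* values applied to this specific frame */
};

/* queueRequest(): consolidate control values and, in manual mode, snapshot
 * them into the frame context. */
void queueRequest(ActiveState &state, FrameContext &frameContext,
                  const Gains *requestedGains, const bool *autoEnable)
{
    if (autoEnable)
        state.autoEnabled = *autoEnable;
    if (requestedGains)
        state.manualGains = *requestedGains;

    if (!state.autoEnabled)
        frameContext.gains = state.manualGains;
}

/* prepare(): in auto mode, latch the latest computed values for this frame
 * just before filling the ISP parameters from the frame context. */
void prepare(const ActiveState &state, FrameContext &frameContext)
{
    if (state.autoEnabled)
        frameContext.gains = state.autoGains;

    /* The ISP parameters buffer would now be filled from frameContext.gains. */
}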
|
|
Rework the algorithm's usage of the active state to store the value of
controls for the last queued request in the queueRequest() function, and
store a copy of the values in the corresponding frame context.
The frame context is used in the prepare() function to populate the ISP
parameters with values corresponding to the right frame.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
Now that the Algorithm::prepare() function takes a frame number, we can
use it to replace the IPAActiveState::frameCount member.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
Establish a queue of FrameContexts using the new FCQueue and use it to
supply the FrameContext to the algorithms.
The RkISP1 algorithms do not use this themselves yet, but will be able
to do so after the introduction of this patch.
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
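A self-contained sketch of what supplying a frame context to the algorithms
can look like from the module side, with a toy stand-in for the queue (the
FCQueue itself is introduced by a commit further down this log); all names
are assumptions.

#include <cstdint>
#include <memory>
#include <vector>

struct IPAFrameContext {
    uint32_t frame = 0;
};

struct ContextQueue {
    std::vector<IPAFrameContext> slots{16};

    IPAFrameContext &alloc(uint32_t frame)
    {
        IPAFrameContext &ctx = slots[frame % slots.size()];
        ctx = IPAFrameContext{ frame };   /* reset any stale per-frame data */
        return ctx;
    }

    IPAFrameContext &get(uint32_t frame)
    {
        return slots[frame % slots.size()];
    }
};

struct Algorithm {
    virtual ~Algorithm() = default;
    virtual void queueRequest(uint32_t frame, IPAFrameContext &fc) = 0;
    virtual void prepare(uint32_t frame, IPAFrameContext &fc) = 0;
};

struct IPAModule {
    ContextQueue frameContexts;
    std::vector<std::unique_ptr<Algorithm>> algorithms;

    void queueRequest(uint32_t frame)
    {
        /* Allocate the context for this frame when the request is queued. */
        IPAFrameContext &fc = frameContexts.alloc(frame);
        for (auto &algo : algorithms)
            algo->queueRequest(frame, fc);
    }

    void fillParamsBuffer(uint32_t frame)
    {
        /* Retrieve the same context later, when filling ISP parameters. */
        IPAFrameContext &fc = frameContexts.get(frame);
        for (auto &algo : algorithms)
            algo->prepare(frame, fc);
    }
};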
|
|
Inherit from the base FrameContext class in the RkISP1 IPAFrameContext.
As the IPAFrameContext is currently unused, this change is a no-op, but
it prepares the RkISP1 IPA module for frame context queue support.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
The RkISP1 IPA module creates a single instance of its IPAFrameContext
structure, effectively using it more as an active state than a per-frame
context. To prepare for the introduction of a real per-frame context,
move all the members of the IPAFrameContext structure to a new
IPAActiveState structure. The IPAFrameContext becomes effectively
unused at runtime, and will be populated back with per-frame data after
converting the RkISP1 IPA module to using a frame context queue.
The IPAActiveState structure will slowly morph into a different entity
as individual algorithms are later ported to the frame context API.
While at it, fix a typo in the documentation of the
Agc::computeExposure() function that incorrectly refers to the frame
context instead of the global context.
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
The documentation of the IPA context structures is separate from the
documentation of the structure members. Sort the documentation block to
group members with their structure.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
The "autoExposure" class member is not used.
Remove it.
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Call the Algorithm::queueRequest() function of all algorithms when a
request is queued, to pass the request controls to the algorithms. We
can now drop the copy of the control list stored in IPAFrameContext as
it isn't used anymore.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
Replace the manual ring buffer implementation with the FCQueue class
from libipa.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
Inherit from the base FrameContext class in the IPU3 IPAFrameContext.
This allows dropping the frame member, which is now stored in the base
class.
As the frame member of the base FrameContext class is private, the check
that accesses it in IPAIPU3::processStatsBuffer() would fail to compile.
As it won't be relevant anymore with the upcoming switch to the FCQueue
class, drop it.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
IPA modules have access to the incoming Request's control list and need
to store it in the frame context at queueRequest() time. Pass the frame
context to the Algorithm::queueRequest() function.
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Pass the frame number of the current frame being processed.
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
Pass the current frame number and the current FrameContext to the calls
to prepare().
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
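Taken together with the two entries above, the algorithm entry points end up
receiving the frame number and the per-frame context alongside the global
context; a rough sketch of that interface shape follows, with stand-in types
(the real parameter types differ per IPA module).

#include <cstdint>

/* Stand-in types; each IPA module defines its own equivalents. */
struct IPAContext;
struct IPAFrameContext;
struct Params;
struct Stats;
class ControlList;

class Algorithm
{
public:
    virtual ~Algorithm() = default;

    virtual void queueRequest(IPAContext &context, uint32_t frame,
                              IPAFrameContext &frameContext,
                              const ControlList &controls) = 0;

    virtual void prepare(IPAContext &context, uint32_t frame,
                         IPAFrameContext &frameContext, Params *params) = 0;

    virtual void process(IPAContext &context, uint32_t frame,
                         IPAFrameContext &frameContext, const Stats *stats) = 0;
};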
|
|
Provide a common FrameContext as a base for IPA modules to inherit from.
This will allow having a common set of parameters for every frame
context managed by the FCQueue implementation.
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
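A small sketch of what such a base class can look like, with the frame number
kept private so that only the queue manages it; the exact layout is an
assumption.

#include <cstdint>

template<typename FrameContext>
class FCQueue;

struct FrameContext {
private:
    /* Only the queue is allowed to set the frame number. */
    template<typename T> friend class FCQueue;
    uint32_t frame = 0;
};

/* An IPA module derives its own per-frame context from the common base. */
struct IPAFrameContext : public FrameContext {
    /* Module-specific per-frame data goes here. */
};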
|
|
Introduce a common implementation in libipa to represent the queue of
frame contexts.
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
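The general shape is a fixed-size ring buffer indexed by the frame number;
the sketch below is an assumption of that shape (simplified bookkeeping, no
overrun handling) rather than the libipa implementation itself, and it
assumes the context type exposes a public frame member.

#include <cstdint>
#include <vector>

template<typename FrameContext>
class FCQueue
{
public:
    FCQueue(unsigned int size)
        : contexts_(size)
    {
    }

    void clear()
    {
        for (FrameContext &ctx : contexts_)
            ctx = {};
    }

    FrameContext &alloc(uint32_t frame)
    {
        FrameContext &ctx = contexts_[frame % contexts_.size()];

        /* Reset leftover data from the frame that last used this slot. */
        ctx = {};
        ctx.frame = frame;

        return ctx;
    }

    FrameContext &get(uint32_t frame)
    {
        /* A real implementation would detect queue overruns and contexts
         * requested before being allocated. */
        return contexts_[frame % contexts_.size()];
    }

private:
    std::vector<FrameContext> contexts_;
};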
|
|
Frame contexts will become the core component of IPA modules, always
available to functions of the algorithms. To indicate and prepare for
this, turn the frame context pointer passed to Algorithm::process() into
a reference.
The RkISP1 IPA module doesn't use frame contexts yet, so pass a dummy
context for now.
While at it, drop an unneeded [[maybe_unused]] from Agc::process() and
add the missing parameter documentation for the frameContext argument to
Awb::process().
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
Avoid copying the whole IPA context by passing a reference to the
Af::afIsOutOfFocus() function.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
Fix various issues in Doxygen comment blocks:
- \param requires an [in] or [out] tag
- \param must come before the body of the documentation
- Drop a leftover \param for an argument that has been removed
- Rename coarseSearchStep to kCoarseSearchStep
- White space and line wrap
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
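For reference, a sketch of the comment style these fixes converge on, using a
made-up function: every \param carries an [in] or [out] tag and precedes the
body text.

/**
 * \brief Run the coarse autofocus search
 * \param[in] minPos The lowest lens position to consider
 * \param[in] maxPos The highest lens position to consider
 * \param[out] bestPos The lens position with the highest contrast found
 *
 * The descriptive body of the documentation, for instance explaining that
 * the search steps through positions by kCoarseSearchStep, comes after the
 * parameter documentation.
 */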
|
|
Currently, if a Unicam timeout is signalled, the pipeline handler only logs
an error message. Update the error handling to put the pipeline handler in an
internal error state, disable all device streams, and return all outstanding
requests as cancelled. Any subsequent requests that come into the pipeline
handler will also be returned as cancelled.
Any further error handling (e.g. a reset with camera stop()/start()) is up to
the application to perform as it requires.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Add an error state used internally in the Raspberry Pi pipeline handler.
Currently this state is never set, but will be in a subsequent commit when a
device timeout has been signalled.
Add an isRunning() helper to identify if the state machine is in a stopped/error
state.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
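A compact sketch of the state handling described here and relied on by the
timeout handling above; the enumerator names other than Error and the
isRunning() logic are assumptions.

class PipelineState
{
public:
    enum class State {
        Stopped,
        Idle,
        Busy,
        Error,
    };

    /* The pipeline only accepts and processes requests while running,
     * i.e. neither stopped nor in the internal error state. */
    bool isRunning() const
    {
        return state_ != State::Stopped && state_ != State::Error;
    }

    void setError() { state_ = State::Error; }

private:
    State state_ = State::Stopped;
};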
|
|
meson.build files are indented with 4 spaces, not 2.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Add support to the capture script for properties that control the script
execution. Script properties are specified in the 'properties' section
before the actual list of controls specified in the 'frames' section.
Define a first 'loop' property that allows repeating the frame list
periodically. All the frame ids in the 'frames' section shall be smaller
than the 'loop' property value.
Modify the capture script example to show usage of the 'loop' property
and better document the frames list while at it.
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Since commit bedef55d9500 ("libcamera: pub_key: Gracefully handle failures
to load public key") the build will fail if openssl is not found on the
host system.
Use the existing HAVE_IPA_PUBKEY define to avoid accessing pubKey_ which
does not exist when building without openssl.
Signed-off-by: Matthias Fend <matthias.fend@emfend.at>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
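A sketch of the guard pattern, shown as a fragment: code touching pubKey_ is
compiled only when openssl support (and hence HAVE_IPA_PUBKEY) is available.
The surrounding function and member names are illustrative assumptions.

bool isSignatureValid(const std::vector<uint8_t> &signature,
                      const std::vector<uint8_t> &data) const
{
#if HAVE_IPA_PUBKEY
    /* Verify the module signature against the build-time public key. */
    return pubKey_.verify(signature, data);
#else
    /* Without openssl there is no public key to verify against. */
    return false;
#endif
}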
|
|
The vimc pipeline handler is enabled unconditionally if the meson config
option '-Dtest' is true. However, this is not true for the vimc IPA.
Hence, a meson configuration such as:
-Dpipelines=raspberrypi -Dipas=raspberrypi -Dtest=true
will include the vimc pipeline handler (in addition to raspberrypi)
but will skip the vimc IPA, which can lead to failures in unit tests
that depend on vimc to execute.
One such unit test was identified as a result of this issue on
RaspberryPi:
ERROR IPAModule ipa_module.cpp:278 ipa_vimc.so: Failed to open IPA library: No such file or directory
test IPA module src/ipa/vimc/ipa_vimc.so is invalid
due to the non-existent ipa_vimc.so.
Fix this by including the vimc IPA unconditionally when tests are
enabled, similar to how the vimc pipeline handler is included.
Fixes: 6e65d4225736 ("libcamera: Enable vimc pipeline handler when tests are enabled")
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The GST_VIDEO_TRANSFER_BT601 and GST_VIDEO_TRANSFER_BT2020_10 macros are
only defined in GStreamer version 1.18.0 and newer.
Using these macros unconditionally causes a gstlibcamera compilation
failure when GST_VERSION < 1.18.0. Only use them when
GST_VERSION >= 1.18.0.
This fixes the following compilation error:
../src/gstreamer/gstlibcamera-utils.cpp:157:7: error: ‘GST_VIDEO_TRANSFER_BT601’ was not declared in this scope; did you mean ‘GST_VIDEO_TRANSFER_BT709’?
157 | case GST_VIDEO_TRANSFER_BT601:
| ^~~~~~~~~~~~~~~~~~~~~~~~
| GST_VIDEO_TRANSFER_BT709
../src/gstreamer/gstlibcamera-utils.cpp:159:7: error: ‘GST_VIDEO_TRANSFER_BT2020_10’ was not declared in this scope; did you mean ‘GST_VIDEO_TRANSFER_BT2020_12’?
159 | case GST_VIDEO_TRANSFER_BT2020_10:
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
| GST_VIDEO_TRANSFER_BT2020_12
Fixes: fc9783acc608 ("gstreamer: Provide colorimetry <> ColorSpace mappings")
Signed-off-by: Vedant Paranjape <vedantparanjape160201@gmail.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Tested-by: Rishikesh Donadkar <rishikeshdonadkar@gmail.com>
Reviewed-by: Rishikesh Donadkar <rishikeshdonadkar@gmail.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
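A fragment sketching the guard; GST_CHECK_VERSION is the standard GStreamer
compile-time version check, so the cases referencing the 1.18-only macros are
simply compiled out on older versions. The exact mapping shown is an
assumption.

switch (transfer) {
#if GST_CHECK_VERSION(1, 18, 0)
/* These transfer functions only exist since GStreamer 1.18. */
case GST_VIDEO_TRANSFER_BT601:
case GST_VIDEO_TRANSFER_BT2020_10:
#endif
case GST_VIDEO_TRANSFER_BT709:
case GST_VIDEO_TRANSFER_BT2020_12:
    colorspace.transferFunction = ColorSpace::TransferFunction::Rec709;
    break;
default:
    break;
}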
|
|
The min/max/def ControlValue of a ControlInfo can take arbitrary types that
are different from each other and different from the ControlId type.
The serialiser serialises these ControlValues separately by their type but
does not store the type. The deserialiser assumes that ControlValue types
match the ControlId type. If this is not the case, deserialisation will try
to deserialise values of the wrong type.
Fix this by serialising each of the min/max/def ControlValue's ControlType
and storing it just before the serialised ControlValue.
Fixes: https://bugs.libcamera.org/show_bug.cgi?id=137
Signed-off-by: Christian Rauch <Rauch.Christian@gmx.de>
Tested-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Signed-off-by: Paul Elder <paul.elder@ideasonboard.com>
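The idea, sketched below in a simplified form that is not the actual wire
format: each of the min/max/def values is written with its own type tag, so
the deserialiser no longer has to assume it matches the ControlId's type.

#include <cstddef>
#include <cstdint>
#include <vector>

enum ControlType : uint8_t {
    ControlTypeBool,
    ControlTypeInteger32,
    ControlTypeInteger64,
    ControlTypeFloat,
};

/* Write the type tag first, then the serialised value bytes. */
static void appendTaggedValue(std::vector<uint8_t> &buffer, ControlType type,
                              const uint8_t *data, size_t size)
{
    buffer.push_back(static_cast<uint8_t>(type));
    buffer.insert(buffer.end(), data, data + size);
}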
|
|
Commit e297673e7686 ("libcamera: v4l2_device: Adjust colorspace based on
pixel format") has introduced a warning when trying to convert a color
space from V4L2 to libcamera if the media bus code is unknown. This was
meant to catch unknown image formats, but turned out to be also
triggered for metadata formats.
Color spaces are not applicable to metadata formats, there should thus
be no warning. Fix it by skipping the color space translation and
returning std::nullopt directly if the kernel reports
V4L2_COLORSPACE_DEFAULT. This doesn't introduce any change in behaviour
other than getting rid of the warning, as the V4L2Device::toColorSpace()
function returns std::nullopt already in that case.
Fixes: e297673e7686 ("libcamera: v4l2_device: Adjust colorspace based on pixel format")
Reported-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Tested-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
|
|
When switching to a different camera we try to release the camera
previously used. But if that camera has been unplugged, its instance
will have been destroyed, and accessing it leads to a segmentation fault.
Fix this by checking that camera_ still exists before accessing it.
Bug: https://bugs.libcamera.org/show_bug.cgi?id=147
Signed-off-by: Utkarsh Tiwari <utkarsh02t@gmail.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
A UVC device could expose only formats that are not supported by
libcamera. The pipeline handler doesn't currently consider this as an
error, and happily creates a camera. The camera won't be usable, and
worse, generateConfiguration() and validate() will crash as those
functions assume that at least one format is supported.
Fix this by failing match() if none of the formats exposed by the camera
are supported. Log an error message in that case to notify the user.
Bug: https://bugs.libcamera.org/show_bug.cgi?id=145
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Tested-by: Christian Rauch <Rauch.Christian@gmx.de>
|
|
Populate and cache the list of supported formats in
UVCCameraData::init(), to avoid repeating the operation every time
generateConfiguration() is called. Combine this with the search for
the largest size advertised by the camera to avoid iterating over the
formats twice in init().
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Tested-by: Christian Rauch <Rauch.Christian@gmx.de>
|
|
Move the camera ID generation to UVCCameraData, and cache the ID in that
class. This will be useful to access the ID in multiple locations, such
as when printing error messages.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Tested-by: Christian Rauch <Rauch.Christian@gmx.de>
|
|
When a V4L2Device fails to open, it is not clear which device caused
the failure. The entity name is reported, but not the device node.
Provide it.
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
If the CameraSensor fails to identify the VCM specified, it could still
be possible to continue to operate the sensor. Autofocus support will be
disabled, but this would be no different to operating a camera with a
fixed focus. While of course the fixed focus position may not be
suitable, it would provide a better user experience to be able to
continue to operate the camera, while still reporting that the lens is
disabled.
Bug: https://bugs.libcamera.org/show_bug.cgi?id=146
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
It can be helpful to know 'which' file failed to parse if there is a
failure.
Report it to the user in the error message.
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
Overriding the dependency enables libcamera to be used
as a meson subproject more easily.
Signed-off-by: Barnabás Pőcze <pobrn@protonmail.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Commit 251f0534b74b ("qcam: viewfinder_gl: Take color space into account
for YUV rendering") introduced maybe-uninitialized warnings with gcc 11
and 12 when compiling with -O3. Both compilers warn that
../../src/qcam/viewfinder_gl.cpp: In member function ‘void ViewFinderGL::selectColorSpace(const libcamera::ColorSpace&)’:
../../src/qcam/viewfinder_gl.cpp:392:21: error: ‘offset’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
391 | fragmentShaderDefines_.append(QString("#define YUV2RGB_Y_OFFSET %1")
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
392 | .arg(offset, 0, 'f', 1));
| ~~~~^~~~~~~~~~~~~~~~~~~
Additionally, gcc 12 warns that
../../src/qcam/viewfinder_gl.cpp: In member function ‘void ViewFinderGL::selectColorSpace(const libcamera::ColorSpace&)’:
../../src/qcam/viewfinder_gl.cpp:379:36: error: ‘yuv2rgb’ may be used uninitialized [-Werror=maybe-uninitialized]
379 | yuv2rgb[i] *= 255.0 / 219.0;
../../src/qcam/viewfinder_gl.cpp:330:31: note: ‘yuv2rgb’ declared here
330 | std::array<double, 9> yuv2rgb;
|
While this should never happen here, the compiler isn't necessarily
wrong, as C++17 allows initializing a scoped enum from an integer using
direct-list-initialization, even if the integer value doesn't match any
of the enumerators for the scoped enum ([1]). Whether this is valid or
borderline paranoia from gcc may be debatable, but in any case it can't
be classified as blatantly wrong. Fix the warnings by adding default
cases to the switch statements in ViewFinderGL::selectColorSpace().
Which case is selected as the default doesn't matter, as this is not
meant to happen.
[1] https://en.cppreference.com/w/cpp/language/enum#enum_relaxed_init_cpp17
Bug: https://bugs.libcamera.org/show_bug.cgi?id=143
Fixes: 251f0534b74b ("qcam: viewfinder_gl: Take color space into account for YUV rendering")
Signed-off-by: Marco Felsch <m.felsch@pengutronix.de>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Rewrote commit message, added a default case for the encoding switch.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
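The fix's pattern, as a small sketch loosely following the qcam code (names
and values are assumptions): the switch gets a default case so the variable
is always initialized, even for enum values that should never occur.

#include <libcamera/color_space.h>

float yOffsetFor(const libcamera::ColorSpace &colorSpace)
{
    float offset;

    switch (colorSpace.range) {
    case libcamera::ColorSpace::Range::Full:
        offset = 0.0f;
        break;
    case libcamera::ColorSpace::Range::Limited:
    default:
        /* Unexpected values take the limited-range path. */
        offset = 16.0f;
        break;
    }

    return offset;
}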
|
|
The ycbcrEncodingToV4l2 map is missing the YCbCrEncoding::None encoding,
which results in a failure of V4L2Device::fromColorSpace() to convert
color spaces from libcamera to V4L2 for RGB formats. Fix it by adding
the missing encoding. As V4L2 has no such encoding, use
V4L2_YCBCR_ENC_DEFAULT as the value doesn't matter.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
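A sketch of the amended mapping; since V4L2 has no dedicated "no encoding"
value, the default encoding serves as a placeholder for YCbCrEncoding::None.
The entries beyond the added one follow the obvious pairings and are
assumptions.

#include <map>

#include <linux/videodev2.h>

#include <libcamera/color_space.h>

using libcamera::ColorSpace;

static const std::map<ColorSpace::YcbcrEncoding, v4l2_ycbcr_encoding>
ycbcrEncodingToV4l2 = {
    /* V4L2 has no "none" encoding, the default value is a don't-care here. */
    { ColorSpace::YcbcrEncoding::None, V4L2_YCBCR_ENC_DEFAULT },
    { ColorSpace::YcbcrEncoding::Rec601, V4L2_YCBCR_ENC_601 },
    { ColorSpace::YcbcrEncoding::Rec709, V4L2_YCBCR_ENC_709 },
    { ColorSpace::YcbcrEncoding::Rec2020, V4L2_YCBCR_ENC_BT2020 },
};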
|
|
Currently, to request a frame, we operate the camera directly. This
approach is also scattered across two places, MainWindow::startCapture()
and MainWindow::queueRequest(), which makes it difficult to account for
requests.
Centralize all the queuing in a single function, queueRequest(), and
rename the current queueRequest() to renderComplete(). This makes more
sense, as that slot is triggered when rendering completes and we want to
queue another request.
Signed-off-by: Utkarsh Tiwari <utkarsh02t@gmail.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The camera selection dialog currently only displays the camera Id.
Display the camera location and camera model if available.
Signed-off-by: Utkarsh Tiwari <utkarsh02t@gmail.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Replace the cameraCombo_ on the toolbar with a QPushButton which
displays the CameraSelectorDialog. This would allow the user to view
information about the camera when switching.
The QPushButton text is set to the camera Id currently in use.
Signed-off-by: Utkarsh Tiwari <utkarsh02t@gmail.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Currently, if there is a hotplug event while the user is on the camera
selection dialog, the QComboBox doesn't update to reflect the change.
Add support for hotplugging / unplugging cameras.
Signed-off-by: Utkarsh Tiwari <utkarsh02t@gmail.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Currently we use QInputDialog convenience dialogs to allow the user to
select a camera. This doesn't allow adding more information (such as the
camera location, model, etc.).
Create a QDialog with a QFormLayout that shows a QComboBox with camera
Ids. Use a QDialogButtonBox to provide buttons for accepting and
cancelling the action.
The CameraSelectorDialog is only initialized once, when the MainWindow
is created.
From this commit on, we no longer auto-select the camera when only a
single camera is available to libcamera. The selection dialog is always
displayed, the only exception being when the camera is supplied on the
command line.
Signed-off-by: Utkarsh Tiwari <utkarsh02t@gmail.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
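A rough, self-contained sketch of such a dialog; the widget names and layout
are assumptions, not the qcam implementation.

#include <QComboBox>
#include <QDialog>
#include <QDialogButtonBox>
#include <QFormLayout>
#include <QString>

class CameraSelectorDialog : public QDialog
{
public:
    CameraSelectorDialog(QWidget *parent = nullptr)
        : QDialog(parent)
    {
        QFormLayout *layout = new QFormLayout(this);

        /* One row with a combo box listing the available camera Ids. */
        cameraIdCombo_ = new QComboBox(this);
        layout->addRow("Camera:", cameraIdCombo_);

        /* Standard Ok/Cancel buttons accept or reject the dialog. */
        QDialogButtonBox *buttons = new QDialogButtonBox(
            QDialogButtonBox::Ok | QDialogButtonBox::Cancel, this);
        connect(buttons, &QDialogButtonBox::accepted, this, &QDialog::accept);
        connect(buttons, &QDialogButtonBox::rejected, this, &QDialog::reject);
        layout->addRow(buttons);
    }

    QString selectedCameraId() const { return cameraIdCombo_->currentText(); }

private:
    QComboBox *cameraIdCombo_;
};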
|