Let the algorithm perform its initial configuration. Implement
configure() to set a default gamma value, and let process() perform the
updates needed.
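As an illustration, a minimal sketch of the shape this takes (type and
member names here are assumptions, not the exact libcamera API):

    /* Sketch only; actual libcamera types and names may differ. */
    class ToneMapping : public Algorithm
    {
    public:
        int configure(IPAContext &context,
                      const IPAConfigInfo &configInfo) override
        {
            /* Set the default gamma once, at configuration time. */
            context.activeState.toneMapping.gamma = kDefaultGamma;
            return 0;
        }

        void process(IPAContext &context, ipu3_uapi_stats_3a *stats) override
        {
            /* Per-frame updates based on the configured gamma. */
        }

    private:
        static constexpr double kDefaultGamma = 1.1; /* assumed default */
    };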
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The tone mapping algorithm calculates the gamma curve for every frame,
regardless of whether the gamma value has changed or not. This issue is
exacerbated as we currently hardcode the gamma to a single value.
Optimise the implementation to only recalculate the lookup table when
the gamma setting changes, and store the gamma setting of the LUT curve
as part of the IPA context.
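A sketch of the caching described above (field names illustrative):

    /* Recalculate the LUT only when the gamma value changes. */
    double gamma = context.activeState.toneMapping.gamma;
    if (context.activeState.toneMapping.lutGamma != gamma) {
        for (unsigned int i = 0; i < kLutSize; i++)
            lut_[i] = std::pow(i / double(kLutSize - 1), 1.0 / gamma)
                    * kLutMaxValue;

        /* Remember which gamma the LUT was computed for. */
        context.activeState.toneMapping.lutGamma = gamma;
    }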
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The AGC class was not documented during development. Document it now,
referencing the origins of the implementation, and improve the
descriptions of how the algorithm operates internally.
While at it, rename the poorly named functions.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Now that the diagram has been moved into the AWB class documentation,
reword the accumulator documentation to make it clear that it is not
meant to be used only in AWB.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The AWB algorithm is based on the grey world algorithm, and uses the
statistics generated by the ImgU for that purpose. Explain how it uses
those statistics, and reference the original algorithm at the same time.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The stats pointer is marked as [[maybe_unused]]. This is a leftover from
a previous commit, kept for compatibility while transitioning to the new
iterative algorithms.
Remove this attribute to make it explicit that the stats are really used
to feed the algorithms.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
Clarify the roles and interactions between the pipeline handler events
and the algorithm calls by documenting all the remaining functions of
the class.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
Further extend the documentation for the IPAIPU3::configure operation.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
The IPU3 IPA is maturing into a modular and extensible system capable of
handling specific algorithms for the processing blocks of the ImgU.
Add top-level class documentation giving an overview of the IPA,
detailing which events are used and which algorithms are currently
supported, as well as the limitations currently imposed.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
Introduce a dedicated worker class derived from libcamera::Thread.
The worker class maintains a queue of post-processing requests and
waits for a request to become available. Requests are processed in
FIFO order and dequeued once processing completes.
The entire post-processing handling iteration is locked under
streamsProcessMutex_, which lets us queue all the post-processing
requests at once, before any post-processing completion slot
(streamProcessingComplete()) is allowed to run for requests completing
in parallel. This helps us manage both synchronous and asynchronous
errors encountered during the post-processing operation. Since a
post-processing operation can even complete after
CameraDevice::requestComplete() has returned, we need to check and
complete the descriptor from streamProcessingComplete(), running in the
PostProcessorWorker's thread.
This patch also implements flush() for the PostProcessorWorker class,
responsible for purging post-processing requests queued up while a
camera is stopping or flushing. It is hooked into CameraStream::flush(),
which isn't used currently but will be used when we handle flush/stop
scenarios in greater detail subsequently (in a different patchset).
The libcamera request completion handler CameraDevice::requestComplete()
assumes that the request that has just completed is at the front of the
queue. Now that the post-processor runs asynchronously, this isn't true
anymore: a request being post-processed stays in the queue while a newer
libcamera request may complete. Remove that assumption, and use the
request cookie to obtain the Camera3RequestDescriptor.
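A condensed sketch of the worker pattern described above (simplified,
with illustrative names and no termination or flush handling):

    class PostProcessorWorker : public libcamera::Thread
    {
    public:
        void queueRequest(Camera3RequestDescriptor::StreamBuffer *buffer)
        {
            std::lock_guard<std::mutex> locker(mutex_);
            requests_.push(buffer);
            cv_.notify_one();
        }

    protected:
        void run() override
        {
            while (true) {
                Camera3RequestDescriptor::StreamBuffer *buffer;
                {
                    std::unique_lock<std::mutex> locker(mutex_);
                    cv_.wait(locker, [&] { return !requests_.empty(); });
                    buffer = requests_.front();
                    requests_.pop();
                }

                /* FIFO processing; completion is signalled
                 * asynchronously to streamProcessingComplete(). */
                postProcessor_->process(buffer);
            }
        }

    private:
        PostProcessor *postProcessor_;
        std::mutex mutex_;
        std::condition_variable cv_;
        std::queue<Camera3RequestDescriptor::StreamBuffer *> requests_;
    };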
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
PostProcessor::process() is invoked by the CameraStream class whenever
any post-processing is required for the camera stream. Failure or
success was previously checked via the value returned by
CameraStream::process().
Now that the post-processor notifies completion of the post-processing
operation, we can drop the return value of PostProcessor::process(). The
status of post-processing is passed to
CameraDevice::streamProcessingComplete() by the slot connected to the
PostProcessor::processComplete signal.
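A sketch of the resulting interface (simplified):

    class PostProcessor
    {
    public:
        enum class Status {
            Error,
            Success,
        };

        virtual ~PostProcessor() = default;

        /* No return value; completion is reported via the signal. */
        virtual void process(Camera3RequestDescriptor::StreamBuffer *streamBuffer) = 0;

        libcamera::Signal<Camera3RequestDescriptor::StreamBuffer *, Status>
            processComplete;
    };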
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
|
|
Notify via a signal that the post-processing for a request has
completed. The signal is emitted with a context pointer along with the
status of the buffer. CameraDevice::streamProcessingComplete() then sets
the status on the request descriptor, and completes the descriptor once
all the streams requiring post-processing have completed.
If the buffer status indicates an error, notify the framework and set
the overall error status on the descriptor via setBufferStatus().
We need to track the number of streams requiring post-processing per
Camera3RequestDescriptor (i.e. per capture request). Introduce a
std::map to track the post-processing of streams. Nodes are dropped from
the map when post-processing of a particular stream completes (or on
error paths). A std::map is selected for tracking post-processing
requests since we will move post-processing to be asynchronous in
subsequent commits; a vector or queue would not be suitable, as the
sequential order of post-processing completion across requests won't be
guaranteed then.
A streamsProcessMutex_ is introduced here as well; it will guard access
to the descriptor's pendingStreamsToProcess_ when post-processing is
moved to be asynchronous in subsequent commits.
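A sketch of the tracking logic in the completion slot (illustrative):

    void CameraDevice::streamProcessingComplete(
        Camera3RequestDescriptor::StreamBuffer *streamBuffer,
        PostProcessor::Status status)
    {
        Camera3RequestDescriptor *request = streamBuffer->request;

        /* On error, notify the framework and mark the descriptor
         * through setBufferStatus() before completing. */

        {
            libcamera::MutexLocker locker(streamsProcessMutex_);

            /* Drop this stream from the pending map. */
            request->pendingStreamsToProcess_.erase(streamBuffer->stream);
            if (!request->pendingStreamsToProcess_.empty())
                return;
        }

        /* All post-processing for this capture request completed. */
        completeDescriptor(request);
    }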
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
|
|
Save and provide the context for the post-processor of a camera stream
via Camera3RequestDescriptor::StreamBuffer. Extend the structure to
include source and destination buffers for the post-processor, along
with a CameraStream::Type::Internal buffer pointer (if any). In
addition, a back pointer to the Camera3RequestDescriptor gives
convenient access to the overall descriptor (status, metadata settings,
etc.).
Also, migrate the CameraStream::process() and PostProcessor::process()
signatures to use Camera3RequestDescriptor::StreamBuffer only. This will
be helpful when we move to asynchronous post-processing in subsequent
commits.
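A sketch of the extended structure (field names are illustrative):

    struct Camera3RequestDescriptor::StreamBuffer {
        CameraStream *stream;
        buffer_handle_t *camera3Buffer;
        std::unique_ptr<libcamera::FrameBuffer> frameBuffer;
        int fence;

        /* Post-processing context. */
        libcamera::FrameBuffer *srcBuffer;       /* post-processor input */
        std::unique_ptr<CameraBuffer> dstBuffer; /* post-processor output */
        libcamera::FrameBuffer *internalBuffer;  /* Type::Internal only */

        /* Back pointer to the overall capture request descriptor. */
        Camera3RequestDescriptor *request;
    };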
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
|
|
Currently, we use Camera3RequestDescriptor::Status to determine:
- when the descriptor has been completely processed by the HAL
- whether any errors were encountered during its processing
Both of these are essential to know whether the descriptor is eligible
to call process_capture_result() through sendCaptureResults().
When a status (Success/Error) is set on the descriptor, it is ready to
be sent back via sendCaptureResults(). However, this might lead to
undesired results, especially when sendCaptureResults() runs in a
different thread (e.g. a stream's asynchronous post-processor completion
slot).
This patch decouples the descriptor status (Success/Error) from the
descriptor's completion status (pending or complete). The advantage is
that we can set the completion status when the descriptor has been fully
processed by the layer, and we can set the error status on the
descriptor wherever an error is encountered, throughout the lifetime of
the descriptor in the HAL layer.
While at it, introduce a wrapper completeDescriptor() around
sendCaptureResults(). completeDescriptor(), as the name suggests, marks
the descriptor as complete, so it is ready to be sent back. The locking
mechanism is moved from sendCaptureResults() to this wrapper, since the
intention is to use completeDescriptor() in place of the existing
sendCaptureResults() calls.
Also make sure the abortRequest() call happens in the same order at all
call sites, i.e. after the descriptor is added to the descriptors_
queue. Fix one of the abortRequest() calls accordingly.
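A sketch of the wrapper (simplified):

    void CameraDevice::completeDescriptor(Camera3RequestDescriptor *descriptor)
    {
        libcamera::MutexLocker locker(descriptorsMutex_);

        /* Completion is now distinct from the Success/Error status,
         * which may have been set anywhere during the descriptor's
         * lifetime in the HAL. */
        descriptor->complete_ = true;

        sendCaptureResults();
    }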
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
|
|
Instead of simply returning if encoder_ is nullptr, fail hard via an
assertion. encoder_ can only be null as a result of a fatal bug in the
code, so be loud about the failure.
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Instead of checking postProcessor_ for nullptr, replace the check with
an assertion that the camera stream's type is not Type::Direct, since it
makes no sense to call CameraStream::process() on a Type::Direct camera
stream.
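The resulting check, roughly:

    /* In CameraStream::process(): calling this on a Type::Direct
     * stream is a bug, so assert instead of checking postProcessor_. */
    ASSERT(type_ != Type::Direct);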
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Each Request is currently creating its own CameraControlValidator
using the Camera instance at construction.
Now that the Camera exposes its own CameraControlValidator on its
private interface, use that one on all Requests.
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Create a Camera-specific CameraControlValidator for the Camera instance.
This will allow requests to use a single validator instance without
having to construct their own.
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The ternary operation used to get the total bytesused of a V4L2
single-planar format stored in a multi-planar buffer can easily be
misread as a bug, as it appears to read the value of the first of N
planes as the total.
Explain directly why the condition looks inverted: it is correct that
the total bytes used is stored in the first plane only of the
multi-planar buffer.
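The pattern in question, roughly (variable names illustrative):

    /*
     * Not an error: a V4L2 single-planar format captured into a
     * multi-planar buffer reports the total bytes used in the first
     * plane only.
     */
    unsigned int bytesused = multiPlanar ? buf.m.planes[0].bytesused
                                         : buf.bytesused;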
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Instead of using constants for the analogue gain limits, use the
minimum and maximum from the configured sensor.
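A sketch of what this looks like at configuration time (the name of the
configuration field is an assumption):

    /* Derive the analogue gain limits from the configured sensor. */
    const ControlInfo &gainInfo =
        configInfo.sensorControls.at(V4L2_CID_ANALOGUE_GAIN);
    minAnalogueGain_ = gainInfo.min().get<int32_t>();
    maxAnalogueGain_ = gainInfo.max().get<int32_t>();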
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
We currently control the exposure value through the shutter speed and
the analogue gain. We can't use digital gain to exceed the maximum
calculated exposure value, because we are not controlling it.
Remove the unused code associated with digital gain.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Simplify reading by removing one level of indentation, returning early
when the change between two calls is small.
Reword the LOG() message printed when we are correctly exposed, and move
the lastFrame_ update so it happens even when the change is small.
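The resulting structure, roughly (names and threshold illustrative):

    /* Update lastFrame_ unconditionally, then return early when the
     * change since the previous call is small. */
    lastFrame_ = frame_;

    if (std::abs(currentGain - prevGain) < kSmallChangeThreshold) {
        LOG(IPU3Agc, Debug) << "Already well exposed";
        return;
    }

    /* Full exposure and gain update follows. */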
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
We need to calculate the gain based on the previously calculated
exposure value. Now that we initialise the exposure and gain values in
configure(), we know the initial exposure value and can set it before
any loop runs.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
We have mixed terms between gain, analogue gain and the exposure value
gain. Make it clear when we are using the analogue gain from the sensor,
and when we are using the calculated gain to be applied to the exposure
value to reach the target.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Until now, the algorithm made complex assumptions when dividing the
exposure time and analogue gain values. Instead, use a simpler clamping
of the shutter speed first, and then of the analogue gain, based on the
configured limits.
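A sketch of the clamping order (names illustrative):

    /* Clamp the shutter speed first, then derive the analogue gain
     * needed to reach the exposure value within the sensor limits. */
    utils::Duration shutter = std::clamp(exposureValue / minAnalogueGain_,
                                         minShutterSpeed_, maxShutterSpeed_);
    double gain = std::clamp(exposureValue / shutter,
                             minAnalogueGain_, maxAnalogueGain_);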
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
We are filtering the exposure value to limit the gain to apply, but we
are not using the result.
Fix it.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The gains are currently stored as uint32_t values while the analogue
gain is passed as a double. We also have a default maximum analogue gain
of 15, which is quite high for a number of sensors.
Use a maximum value of 8, which should really be configured by the IPA
and not fixed as it is now. While at it, make it a double.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
We are using arbitrary constants for the exposure limits in several
places.
Instead of using static constants for those, use the limits of the
sensor passed in IPASessionConfiguration, and cache those.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The exposure value is filtered in filterExposure() using
currentExposure_ and setting a prevExposure_ variable. The latter is
misnamed, as it is not the previous exposure but a filtered value.
Rename it accordingly.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
When zones are used for the grey world algorithm, they are only
considered if their average green value is at least 32/255, to exclude
zones that are too dark and don't provide relevant colour information
(at the opposite end of the spectrum, saturated regions are excluded by
the ImgU statistics engine).
The algorithm requires a minimal number of zones that meet this
criterion in order to run. Now that we correct the black level, the
32/255 minimal value is a bit high and prevents the algorithm from
running in low-light conditions. Lower the value to 16/255 to fix it.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Acked-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The AWB grey world algorithm tries to find a grey value, which it can't
do on over-exposed images. To exclude those, the saturation ratio is
used for each cell, and a cell is included only if this ratio is 0.
Now that we have changed the threshold, more cells may be considered
partially saturated and excluded, preventing the algorithm from running
efficiently.
Change that behaviour, and consider 90% a good enough ratio.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The AGC frame context needs to be initialised correctly for the first
iteration. Until now, the IPA used the minimum exposure and gain values
and cached those in local variables.
In order to give the sensor limits to AGC, create a new structure in
IPASessionConfiguration. Store the exposure as a duration (and not in
lines) and the analogue gain after CameraSensorHelper conversion.
Set the gain and exposure appropriately to the current values known to
the IPA, and remove the setting of exposure and gain in IPAIPU3, as
those are now fully controlled by IPU3Agc.
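A sketch of the new session data (names are close to, but not
necessarily exactly, the real structure):

    struct IPASessionConfiguration {
        struct {
            utils::Duration minShutterSpeed;
            utils::Duration maxShutterSpeed;
            /* Gains after CameraSensorHelper conversion. */
            double minAnalogueGain;
            double maxAnalogueGain;
        } agc;
    };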
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
We can have a saturation ratio per cell, giving the percentage of pixels
over a threshold within a cell, where 100% is represented as 0xff.
The parameter structure 'ipu3_uapi_awb_config_s' contains four fields to
set the threshold, one per channel.
The blue field is also used to tell the ImgU whether to calculate the
saturation ratio.
Consider a green value saturated when it is more than 230 (90% of the
maximum value 255, coded as 8191). As green is the only channel used for
AGC, there is no need to apply the threshold to the other channels.
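A sketch of the parameter setup (field and flag names assumed from the
IPU3 uAPI; exact threshold coding per the description above):

    /* Green saturation threshold: 230/255, in the 0..8191 coding. */
    params->acc_param.awb.config.rgbs_thr_gr = 230 * 8191 / 255;

    /* The other channels keep the maximum (no thresholding); the blue
     * field additionally carries the enable and saturation-inclusion
     * flags. */
    params->acc_param.awb.config.rgbs_thr_r = 8191;
    params->acc_param.awb.config.rgbs_thr_gb = 8191;
    params->acc_param.awb.config.rgbs_thr_b = IPU3_UAPI_AWB_RGBS_THR_B_EN
                                            | IPU3_UAPI_AWB_RGBS_THR_B_INCL_SAT
                                            | 8191;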
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
camera_device.cpp doesn't use the PostProcessor class, so the
post_processor.h header shouldn't be included. Removing it alone causes
a compilation failure, as the CameraBuffer class is then no longer
defined; include camera_buffer.h instead.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
|
|
Rename CameraMetadata::get() to CameraMetadata::getMetadata()
to avoid confusion with std::unique_ptr::get() when CameraMetadata
is used with a std::unique_ptr.
No functional changes intended in this patch.
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
There's no need for the move constructor and the destructor to be
inline. Define them explicitly, with default implementations. This
allows usage of the CameraStream class without a complete definition of
the PostProcessor class.
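The idiom, as a sketch:

    /* camera_stream.h: declarations only, so PostProcessor can stay
     * an incomplete type here. */
    class PostProcessor;

    class CameraStream
    {
    public:
        CameraStream(CameraStream &&other);
        ~CameraStream();

    private:
        std::unique_ptr<PostProcessor> postProcessor_;
    };

    /* camera_stream.cpp: defaulted where PostProcessor is complete,
     * so the unique_ptr deleter can be instantiated. */
    CameraStream::CameraStream(CameraStream &&other) = default;
    CameraStream::~CameraStream() = default;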
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
The camera HAL API requires that any acquire fence that hasn't been
waited on be sent back to the framework as a release fence. The
CameraDevice already copies the acquire fence to the release fence when
signaling request completion, but the CameraStream incorrectly closes
the fence when a wait fails and sets it to -1. Fix it.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
The camera3_stream_buffer_t structure is meant for communication between
the camera service and the HAL. These are short-lived structures that
don't outlive the .process_capture_request() operation (when queuing
requests) or the .process_capture_result() callback.
We currently store copies of the camera3_stream_buffer_t passed to
.process_capture_request() in Camera3RequestDescriptor::StreamBuffer to
keep the structure members that the HAL needs, and reuse them when
calling the .process_capture_result() callback. This is conceptually not
right, as the camera3_stream_buffer_t instances passed to the callback
are not the same objects as the ones received in
.process_capture_request().
Store individual fields of the camera3_stream_buffer_t in StreamBuffer
instead of copying the whole structure. This gives the HAL full control
of how data is stored, and properly decouples request queueing from
result reporting.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
Call abortRequest() in CameraDevice::requestComplete() instead of
open-coding it.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
The camera3_stream_t instances are used to interact with the camera
service, whose API uses non-const pointers. Replace the const reference
returned by CameraStream::camera3Stream() with a non-const pointer. It
turns out that nobody calls this function, but new users will be
introduced in subsequent commits.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
Now that we have a proper structure to model a stream buffer, pass it to
CameraStream::process() instead of the camera3_stream_buffer_t. This
will allow accessing other members of StreamBuffer in subsequent
commits.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
The Camera3RequestDescriptor structure stores, for each stream, the
camera3_stream_buffer_t and the libcamera FrameBuffer in two separate
vectors. This complicates buffer handling, as the code needs to keep
both vectors in sync. Create a new structure to group all data about
per-stream buffers to simplify this.
As a side effect, we need to create a local vector of
camera3_stream_buffer_t in CameraDevice::sendCaptureResults() as the
camera3_stream_buffer_t instances stored in the new structure in
Camera3RequestDescriptor are not contiguous anymore. This is a small
price to pay for easier handling of buffers, and will be refactored in
subsequent commits anyway.
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
|
|
Data (or broader context) required for post processing of a camera request
is saved via Camera3RequestDescriptor. Instead of passing individual
arguments to CameraStream::process(), pass the Camera3RequestDescriptor
pointer to it. All the arguments necessary to run the post-processor can
be accessed from the descriptor.
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
|
|
The camera3_capture_result_t is only needed to convey capture results to
the camera service through the process_capture_result() callback.
There's no need to store it in the Camera3RequestDescriptor. Build it
dynamically in CameraDevice::sendCaptureResults() instead.
This requires storing the result metadata created in
CameraDevice::requestComplete() in the Camera3RequestDescriptor. A side
effect of this change is that the request metadata lifetime will match
the Camera3RequestDescriptor instead of being destroyed at the end of
requestComplete(). This will be needed to support asynchronous
post-processing, where the request completion will be signaled to the
camera service asynchronously from requestComplete().
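A sketch of building the result dynamically (field names from the
camera3 HAL API; descriptor members illustrative):

    camera3_capture_result_t result = {};
    result.frame_number = descriptor->frameNumber_;
    result.result = descriptor->resultMetadata_->getMetadata();
    result.num_output_buffers = resultBuffers.size();
    result.output_buffers = resultBuffers.data();
    result.partial_result = 1;

    callbacks_->process_capture_result(callbacks_, &result);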
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
The Camera3RequestDescriptor structure is growing into an object with
member functions. Turn it into a class, uninline the destructor to
reduce code size, explicitly disable copy as requests are not copyable,
and delete the default constructor to force all instances to be fully
constructed.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
Camera3RequestDescriptor is a utility structure that groups information
about a capture request. It can and will be extended to preserve the
overall context of a capture. Since the context of a capture needs to be
shared with other classes (e.g. CameraStream), having a private
definition of the struct in the CameraDevice class doesn't help.
Hence, de-scope the structure so that it can be shared with other
components (through references or pointers). Splitting the structure
into a separate file will help avoid circular dependencies when using it
throughout the HAL implementation.
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
|
|
When distributions build and package the libcamera libraries, they may
not necessarily run the build in the upstream source tree. In these
cases, the git SHA1 versioning information is lost.
This change addresses that problem by requiring packagers to run
'meson dist' to create a tarball of the source files and build from
there. On running 'meson dist', the utils/run-dist.sh script creates a
.tarball-version file in the release tarball, with the version string
generated by the existing utils/gen-version.sh script.
The utils/gen-version.sh script has been updated to check for the
presence of this .tarball-version file and read the version string from
it instead of generating one.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The gen-version.sh script expects to be called from a git repository,
and sets its src_root variable accordingly. This may not always be the
case if the build uses a tarball source, full support for which comes in
a future commit.
The MESON_SOURCE_ROOT environment variable does not get set when the
script is called from the meson vcs_tag() function, though it does when
called from run_command(), so that cannot be used either.
Instead, explicitly pass the meson source root to the gen-version.sh
script.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Branches named integration/* are candidates for merging into the master
branch. Subject them to the same checks in the pre-push git hook.
An important difference between integration branches and the master
branch is that the former are typically created as new branches. We
can't check the whole history, as there are known bad commits, so we
have to identify the base commit for the integration branch. The best
approximation is to use the remote master branch of the tree being
pushed to.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
libcamera can now use libdw and libunwind to generate backtraces, in
addition to the glibc backtrace() function. libdw provides the most
detailed output and is highly recommended, but it is limited to parsing
backtraces; it doesn't support capturing them. libunwind and backtrace()
provide both features. If backtrace() is available, libunwind will not
bring any improvement.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
|