The fromEntityName() function returns a pointer to a newly allocated
V4L2Device instance, which must be deleted by the caller. This opens the
door to memory leaks. Return a unique pointer instead, which conveys the
API semantics better than a sentence in the documentation.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
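For illustration only (a hypothetical Device class, not the real V4L2Device signature), the ownership transfer the commit describes looks like this on the caller side:

    #include <memory>
    #include <string>

    class Device
    {
    public:
        explicit Device(const std::string &name) : name_(name) {}

        /* Ownership is transferred explicitly to the caller. */
        static std::unique_ptr<Device> fromEntityName(const std::string &name)
        {
            return std::make_unique<Device>(name);
        }

    private:
        std::string name_;
    };

    void useDevice()
    {
        std::unique_ptr<Device> dev = Device::fromEntityName("sensor");
        /* No explicit delete: the device is released when dev goes out of scope. */
    }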
|
|
The fromEntityName() function returns a pointer to a newly allocated
V4L2Subdevice instance, which must be deleted by the caller. This opens
the door to memory leaks. Return a unique pointer instead, which conveys
the API semantics better than a sentence in the documentation.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
Forward any controls passed into the pipeline handler to the IPA.
The IPA then sets up the Raspberry Pi controller with these settings
appropriately, and passes back any V4L2 sensor controls that need
to be applied.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Tested-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
This change allows controls passed into PipelineHandler::start() to be
forwarded on to IPAInterface::start(). We also add a return channel in
case the pipeline handler must act on any of these controls, e.g.
setting the analogue gain or shutter speed in the sensor device.
The IPA interface wrapper isn't addressed as it will soon be replaced by
a new mechanism to handle IPC.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Tested-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Applications now have the ability to pass in controls that need to be
applied on startup, rather than setting them through a Request, where
there might be a few frames of delay before the controls take effect.
This commit adds the ability to pass a set of libcamera controls to the
pipeline handlers through the PipelineHandler::start() method. These
controls are provided by the application through the Camera::start()
public API.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Tested-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
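A minimal sketch of the application side, assuming the Camera::start()
overload added by this series takes an optional ControlList pointer (the
control values below are illustrative):

    #include <memory>

    #include <libcamera/camera.h>
    #include <libcamera/control_ids.h>
    #include <libcamera/controls.h>

    using namespace libcamera;

    void startWithControls(std::shared_ptr<Camera> camera)
    {
        /* Controls applied at startup, before the first frames are produced. */
        ControlList initControls(controls::controls);
        initControls.set(controls::ExposureTime, 10000); /* microseconds */
        initControls.set(controls::AnalogueGain, 2.0f);

        camera->start(&initControls);
    }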
|
|
If the IPA fails during configuration, return an error flag to the
pipeline handler and fail the use case gracefully.
At present, the IPA configuration can fail for the following reasons:
- The sensor is not recognised, and fails to open a CamHelper object.
- The pipeline handler did not pass in controls for the ISP and sensor.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The Android camera device HAL3 specification does not require a
camera to go through any explicit close() call between configurations.
It is legitimate for a camera to be configured, a number of requests
processed, and the camera then re-configured without any explicit stop.
The libcamera Android camera HAL starts the Camera when the first
request is handled, and only stops it at camera close time. This means
that two camera configuration attempts in the same streaming session
are separated only by capture request handling.
The libcamera::Camera state machine requires the Camera to be stopped
before any configuration takes place, and this currently doesn't happen.
Fix this by stopping the camera and the associated worker thread if a
configuration attempt is performed while the Camera is in the running
state.
This patch fixes cros_camera_test:
Camera3PreviewTest/Camera3SinglePreviewTest.Camera3BasicPreviewTest/0
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
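In rough terms (hypothetical names, not the actual HAL code), the stream
configuration entry point now does something like this before touching
the streams:

    #include <libcamera/camera.h>

    /* Hypothetical sketch; names do not match the actual HAL code. */
    enum class State { Stopped, Running };

    void stopBeforeReconfigure(State &state, libcamera::Camera *camera)
    {
        if (state != State::Running)
            return;

        /*
         * The libcamera::Camera must be stopped before it can be
         * re-configured; the HAL also stops its request worker thread here.
         */
        camera->stop();
        state = State::Stopped;
    }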
|
|
Initialize pixel array properties in the Android camera HAL by
inspecting the camera properties.
If the camera does not provide any suitable property, no static
metadata is registered to the Android framework.
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
Initialize pixel array properties 'PixelArraySize' and
'PixelArrayActiveAreas' by inspecting the V4L2 CROP_BOUNDS and
CROP_DEFAULT selection targets.
The properties are registered only if the sensor subdevice supports
the above-mentioned selection targets.
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
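Roughly how the two properties can be derived from the selection
targets; a sketch against the internal API, assuming
V4L2Subdevice::getSelection(pad, target, &rect), with the actual
registration code in CameraSensor possibly differing:

    /*
     * Sketch only: V4L2Subdevice and ControlList come from libcamera's
     * internal headers; error handling is reduced to the minimum.
     */
    #include <linux/videodev2.h>

    #include <libcamera/geometry.h>
    #include <libcamera/property_ids.h>

    using namespace libcamera;

    void initPixelArrayProperties(V4L2Subdevice *subdev, ControlList &props)
    {
        Rectangle bounds, active;

        if (subdev->getSelection(0, V4L2_SEL_TGT_CROP_BOUNDS, &bounds) < 0 ||
            subdev->getSelection(0, V4L2_SEL_TGT_CROP_DEFAULT, &active) < 0)
            return; /* Unsupported: the properties are not registered. */

        props.set(properties::PixelArraySize,
                  Size(bounds.width, bounds.height));
        props.set(properties::PixelArrayActiveAreas, { active });
    }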
|
|
Break out initialization into its own function in preparation for
adding more properties.
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
The CameraSensorInfo::analogCrop top-left corner is defined relative
to the sensor active area.
The analogCrop rectangle is constructed by retrieving the V4L2
selection target V4L2_SEL_TGT_CROP, which is instead defined relative
to the whole sensor pixel array size.
Adjust the analogCrop rectangle by subtracting from its top-left corner
the offset of the active area within the full pixel array.
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
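The adjustment amounts to translating the crop rectangle from
pixel-array coordinates into active-area coordinates; a small sketch
(the function name is illustrative):

    #include <libcamera/geometry.h>

    using namespace libcamera;

    /*
     * V4L2_SEL_TGT_CROP is expressed relative to the full pixel array,
     * while CameraSensorInfo::analogCrop is relative to the active area.
     * Translate the crop rectangle by the active area offset.
     */
    Rectangle toActiveAreaCoordinates(Rectangle crop, const Rectangle &activeArea)
    {
        crop.x -= activeArea.x;
        crop.y -= activeArea.y;
        return crop;
    }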
|
|
Make sure the 'camera3_capture_request_t *' provided to
CameraDevice::processCaptureRequest() is valid before attempting to
access it.
This patch fixes cros_camera_test:
Camera3FrameTest/Camera3InvalidRequestTest.NullOrUnconfiguredRequest/*
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
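A sketch of the kind of validation added, using the standard
camera3_capture_request_t fields (the exact checks and error paths in
the HAL may differ):

    #include <cerrno>

    #include <hardware/camera3.h>

    int validateRequest(const camera3_capture_request_t *request)
    {
        /* A null request, or one with no output buffers, is invalid. */
        if (!request || !request->num_output_buffers || !request->output_buffers)
            return -EINVAL;

        return 0;
    }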
|
|
The CameraDevice::getStaticMetadata() function populates the
entries for Android's static metadata by walking the ControlInfo
supported values reported by the libcamera pipeline.
The number of entries to be passed to Android is computed using the
vector's size, which is initialized at vector creation time to the
maximum number of available entries.
In order to report the correct number of metadata entries, do not
create the vector with the largest possible number of elements, but
only reserve space for them using std::vector::reserve(), which does
not modify the vector's size.
This patch fixes cros_camera_test:
Camera3DeviceTest/Camera3DeviceDefaultSettings.ConstructDefaultSettings/1
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
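The distinction in isolation: a vector constructed with a count already
has that many elements, while reserve() only allocates capacity.

    #include <cassert>
    #include <vector>

    int main()
    {
        std::vector<int> sized(64);      /* size() == 64, all zero-initialised */
        assert(sized.size() == 64);

        std::vector<int> reserved;
        reserved.reserve(64);            /* capacity >= 64, but size() == 0 */
        assert(reserved.size() == 0);

        reserved.push_back(1);           /* size grows only as entries are added */
        assert(reserved.size() == 1);

        return 0;
    }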
|
|
The exposure times in the exposure modes were causing AGC oscillations
because the algorithm was demanding long unachievable exposure times
but, without working sensor metadata, thought it was getting them when
actually it was not. We fix it by making the exposure profile request
only achievable exposure times, as we do for the ov5647 tuning.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Setting these controls "fixes" them so that the AE algorithm may no
longer change them; setting them back to zero returns them to the
control of the AE algorithm.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
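What this looks like from the application side, assuming the standard
libcamera ExposureTime and AnalogueGain controls are the ones being
fixed (the values below are illustrative):

    #include <libcamera/control_ids.h>
    #include <libcamera/controls.h>
    #include <libcamera/request.h>

    using namespace libcamera;

    void fixExposureAndGain(Request *request)
    {
        /* Fix the shutter time and analogue gain: AE will not change them. */
        request->controls().set(controls::ExposureTime, 20000); /* 20 ms */
        request->controls().set(controls::AnalogueGain, 4.0f);
    }

    void returnToAuto(Request *request)
    {
        /* Setting them back to zero returns them to AE control. */
        request->controls().set(controls::ExposureTime, 0);
        request->controls().set(controls::AnalogueGain, 0.0f);
    }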
|
|
AE/AGC "disabled" is now handled better by the algorithm itself, so it
no longer needs to be "resumed" before setting fixed shutter or gain
values.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
AGC, when paused, sets the last exposure/gain it wrote to be its
"fixed" values and will therefore continue to return them. When
resumed, we clear them so that both will float again.
This approach is better because AGC can be paused and we can
subsequently change (for example) the exposure without the gain
starting to float again.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
When both gain and shutter have been directly specified, do not filter
slowly towards those target values, but adopt them immediately. This
should match user expectations better.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
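In pseudocode terms (hypothetical names, not the actual AGC
implementation), the filtering step becomes conditional:

    /* Illustrative sketch of the filtering change. */
    struct Exposure {
        double shutter;
        double gain;
    };

    Exposure filterExposure(const Exposure &current, const Exposure &target,
                            bool fixedShutter, bool fixedGain, double speed)
    {
        /*
         * When both shutter and gain are fixed by the user, adopt the
         * target immediately instead of converging over several frames.
         */
        if (fixedShutter && fixedGain)
            return target;

        /* Otherwise filter slowly towards the target (IIR-style). */
        Exposure out;
        out.shutter = current.shutter + speed * (target.shutter - current.shutter);
        out.gain = current.gain + speed * (target.gain - current.gain);
        return out;
    }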
|
|
The qcam log prints one message per frame, which is pretty verbose. This
feature is useful for debugging, but not necessarily as a default
option. Silence it by default, and add a -v/--verbose command line
parameter to make the log verbose.
While this could have been handled manually by checking a verbose flag
when printing the message, the feature is instead integrated with the Qt
log infrastructure to make it more flexible. Messages printed by
qDebug() are now silenced by default and controlled by the -v/--verbose
argument.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
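One way to wire this up with the Qt logging infrastructure; a sketch
using qInstallMessageHandler(), not necessarily the exact qcam
implementation:

    #include <cstdio>

    #include <QString>
    #include <QtGlobal>

    namespace {

    bool verbose = false; /* set from the -v/--verbose command line option */

    void messageHandler(QtMsgType type, const QMessageLogContext &context,
                        const QString &msg)
    {
        /* Silence qDebug() output unless verbose logging was requested. */
        if (type == QtDebugMsg && !verbose)
            return;

        fprintf(stderr, "%s\n", qPrintable(msg));
    }

    } /* namespace */

    /* In main(): qInstallMessageHandler(messageHandler); */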
|
|
The top file description comment was incorrectly copied from the cam
application. Fix it.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
Recent doxygen versions don't appreciate unquoted PROJECT_NUMBER values
that contain spaces. Fix this by quoting the string.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
Let's try not to mix draft controls and regular controls.
Draft controls are unstable by definition, and removing or adding them
should not impact the enumeration of stable controls.
Keep draft controls at the end of the control_ids.yaml file and
add a comment to make clear where the draft controls section begins.
Acked-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
The description text lists "stillraw" as a stream role option. This is
incorrect; it should be listed as "raw" instead.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
With the recent change in the bayer/embedded buffer matching code, a
condition would make the bayer buffer be requeued to the device even
though it could potentially be queued for matching. This would cause
unnecessary frame drops as sync would be lost.
The fix is to ensure the bayer buffer only gets requeued if the
embedded data buffer queue is not empty, i.e. the buffer truly is
orphaned. Additionally, we perform this test before deciding to flush
either queue of all its buffers.
Fixes: 909882b (pipeline: raspberrypi: Rework bayer/embedded data buffer matching)
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Tested-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Tested-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Introduce a pipeline model that lists the operations applied by the
camera pipeline. This is a first step towards defining explicitly how
the camera processes images, and how the libcamera controls affect the
processing.
The initial list of operations is meant to be expanded, and possibly
refactored (a block diagram should also be considered to make this
easier to read). How the controls affect the pipeline is largely missing
at this stage, with only a short explanation of the digital zoom to show
how this is meant to be documented. More documentation will be added
over time.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Sebastian Fricke <sebastian.fricke.linux@gmail.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
A few grammatical errors remain in the Control class documentation.
Fix them.
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Previously we required that the sensor absolutely reach the target
exposure, but this can fail if frame rates or analogue gains are
limited. Instead, insist only that we get several frames with the same
exposure time and analogue gain, and that the algorithm's target
exposure hasn't changed either.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Acked-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
partly saturated images
When parts of an image saturate, the image brightness no longer
increases linearly with increased exposure/gain. Having calculated a
linear gain value, it's better to then try it, allowing for the
saturating regions, and if necessary increase the gain some more. We
repeat this several times.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Acked-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
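Conceptually (hypothetical helper names, not the actual AGC code), the
gain search becomes iterative:

    #include <functional>

    /*
     * Illustrative sketch: saturated regions stop responding to extra gain,
     * so a single linear estimate under-shoots. Apply the estimate, predict
     * the resulting brightness while clamping saturated pixels, and raise
     * the gain further if the target has still not been reached.
     */
    double solveGain(double targetY, double currentY,
                     std::function<double(double)> predictYWithSaturation)
    {
        double gain = 1.0;

        for (unsigned int i = 0; i < 8; i++) {
            gain *= targetY / currentY; /* linear estimate */

            /* Re-evaluate brightness, accounting for saturating regions. */
            currentY = predictYWithSaturation(gain);
            if (currentY >= targetY)
                break;
        }

        return gain;
    }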
|
|
constructor
Use memset in the constructor for embedded structures; it is tidier
and initialises everything. We use the initialiser list for other
members.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Acked-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
SwitchMode
When an application has specified a fixed exposure time and/or gain,
they must be programmed into the sensor immediately, even before the
sensor has been started. For this to happen they must be written into
the image metadata when the SwitchMode method is invoked.
We also make the default exposure/gain, used when nothing has been set,
customisable in the tuning file.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Acked-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The Awb class now implements a SwitchMode method which outputs its
AwbStatus for other algorithms to read, should they be interested.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Acked-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Introduce a function to fetch the AwbStatus (fetchAwbStatus), and call
it unconditionally at the top of Prepare so that both Prepare and
Process know thereafter that it's been done.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Acked-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Previously the calculation computed Y for each region before returning
the weighted average, which "baked in" the over-importance of small
statistics regions. The revised calculation will treat all pixels
equally when the region weights are the same, making it easier to
use. With the previous scheme, proper "average" metering was difficult
to implement.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Acked-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The method formerly known as divvyupExposure is given a more
understandable name.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Acked-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
On the libcamera/VC4 platform the AGC Prepare/Process methods, and any
changes to the AGC settings, run synchronously, so a number of mutexes
and copies are unnecessary and can be removed.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Acked-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Replace Raspberry Pi debug with libcamera debug.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Acked-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The crop rectangle was not being configured on the ISP output pad nor
on the resizer input pad, causing an unnecessary crop of the image and
an unnecessary scaling by the resizer when streaming with a higher
resolution than the default 800x600.
Example:
cam -c 1 -C -s width=3280,height=2464
In the pipeline:
sensor->isp->resizer->dma_engine
the ISP output crop is set to 800x600, which limits the ISP output
format to 800x600; this is propagated to the resizer input format, also
set to 800x600, and the resizer output format is set to the desired end
resolution of 3280x2464.
Signed-off-by: Helen Koike <helen.koike@collabora.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
Signed-off-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
|
|
The names have to match for the setting to work. Use the libcamera
terminology for consistency (even though it touches more files).
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Support the 'cloudy' AWB mode which was left out when the AwbModeTable
was introduced.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
There is a condition that would cause the buffers to never be in sync
when using only a single buffer in the bayer and embedded data streams.
This occurred because even though both streams would get flushed to
resync, one stream's only buffer was already queued in the device, and
would end up never matching.
Rework the buffer matching logic by combining updateQueue() and
tryFlushQueue() into a single function findMatchingBuffers(). This
allows us to flush the queues at the same time as we match buffers,
avoiding the above condition.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Tested-by: David Plowman <david.plowman@raspberrypi.com>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The JPEG post-processor is marked as licensed under GPL-2.0-or-later.
This is an oversight and unintentional. License it under
LGPL-2.1-or-later, like the rest of the camera HAL implementation.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Umang Jain <email@uajain.com>
Acked-by: Hirokazu Honda <hiroh@chromium.org>
|
|
Instead of directly mmapping/unmapping buffers passed to the IPA, use
a MappedFrameBuffer. The latter is a cleaner interface, and avoids
code duplication.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Use a MappedFrameBuffer to mmap embedded data buffers for the pipeline
handler to use in the cases where the sensor does not fill them in.
This avoids the need to mmap and unmap on every frame.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
When configuring the converter, the format is first set on the output
side based on the format of the camera pipeline output, and then the
format is set on the capture side to match the desired stream
configuration. The format parameter passed to
V4L2VideoDevice::setFormat() uses the same variable for both calls,
which has the unwanted side effect of carrying plane configuration from
the output side to the capture side of the converter. In particular, the
stride or plane size requested on the capture side can become
unnecessarily large when converting to a format with a lower number of
bits per pixel (for instance converting YUYV to NV12).
Fix this by resetting the format variable before using it to configure
the capture side.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
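The essence of the fix, sketched against the internal V4L2DeviceFormat
and V4L2M2MDevice API (function and parameter names are placeholders):

    /* Sketch only: types come from libcamera's internal V4L2 headers. */
    int configureConverter(V4L2M2MDevice *m2m,
                           const V4L2PixelFormat &inFourcc, const Size &inSize,
                           const V4L2PixelFormat &outFourcc, const Size &outSize)
    {
        /* Configure the converter output side first. */
        V4L2DeviceFormat format;
        format.fourcc = inFourcc;
        format.size = inSize;

        int ret = m2m->output()->setFormat(&format);
        if (ret < 0)
            return ret;

        /*
         * Reset the format before configuring the capture side, so that the
         * plane strides and sizes negotiated on the output side don't leak
         * into the capture configuration.
         */
        format = {};
        format.fourcc = outFourcc;
        format.size = outSize;

        return m2m->capture()->setFormat(&format);
    }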
|
|
The request completion handler is invoked in the camera manager thread,
which shouldn't be blocked for large amounts of time. As writing the
frames to disk can be a time-consuming process, move request processing
to the main thread by queueing an event to the event loop.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
|
|
Add a deferred calls queue to the EventLoop class to support queuing
calls from a different thread and processing them in the event loop's
thread.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
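A minimal version of the mechanism (generic C++, not the actual cam
EventLoop code): calls queued from any thread, executed later on the
loop's thread.

    #include <deque>
    #include <functional>
    #include <mutex>

    class DeferredCalls
    {
    public:
        /* May be called from any thread. */
        void callLater(std::function<void()> call)
        {
            std::lock_guard<std::mutex> lock(mutex_);
            calls_.push_back(std::move(call));
        }

        /* Called from the event loop's thread, e.g. once per loop iteration. */
        void dispatch()
        {
            std::deque<std::function<void()>> calls;
            {
                std::lock_guard<std::mutex> lock(mutex_);
                calls.swap(calls_);
            }

            for (auto &call : calls)
                call();
        }

    private:
        std::mutex mutex_;
        std::deque<std::function<void()>> calls_;
    };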
|
|
There's no user of the EventDispatcher (and the related EventNotifier
and Timer classes) outside of libcamera. Move those classes to the
internal API.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
|
|
To prepare for removal of the EventDispatcher from the libcamera public
API, switch to libevent to handle the event loop.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
|
|
Get the event dispatcher from the current thread instead of the camera
manager. This prepares for the removal of
CameraManager::eventDispatcher().
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
|
|
To enable reusing Request objects, we kept a pool of free Requests. This
pool was not cleared upon stopping capture, however, which caused a
segfault when switching to another camera. Fix this by clearing the
Request pool on stopCapture().
Fixes: c753223ad6b9 ("libcamera, android, cam, gstreamer, qcam, v4l2: Reuse Request")
Signed-off-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Tested-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Tested-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
|