Add a new "Recurrent" stream flag. This flag indicates the stream buffer
handling/management happend from the pipeline handler exclusively. This
is used for TDN/Stitch and Config streams.
Add a new Needs32bitConv stream flag to indicate that this stream needs
a software postprocessing conversion run on it before returning out to
the application.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
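As a rough illustration of how such flags are typically combined and
tested, a minimal sketch follows; the flag names and bitmask layout are
assumptions for illustration, not the actual RPi::Stream definitions.

#include <cstdint>

/* Illustrative flag bits only; the real flags are defined by RPi::Stream. */
enum class StreamFlag : uint32_t {
    None           = 0,
    Recurrent      = 1 << 0, /* buffers managed exclusively by the pipeline handler */
    Needs32bitConv = 1 << 1, /* needs a software format conversion before delivery */
};

constexpr StreamFlag operator|(StreamFlag a, StreamFlag b)
{
    return static_cast<StreamFlag>(static_cast<uint32_t>(a) |
                                   static_cast<uint32_t>(b));
}

constexpr bool hasFlag(StreamFlag set, StreamFlag flag)
{
    return (static_cast<uint32_t>(set) & static_cast<uint32_t>(flag)) != 0;
}

/* e.g. skip queueing buffers back to the application when
 * hasFlag(flags, StreamFlag::Recurrent) is true. */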
|
|
Add a new RequiresMmap flag to the RPi::Stream class indicating that
buffers handled by the stream must be mmapped after allocation and
cached internally.
Add a new member function getBuffer(id) which can be used to obtain the
mapped buffer for a given buffer id.
Add a new member function acquireBuffer() which can be used to obtain
any mapped buffer that has not already been acquired by the caller.
As a drive-by, add the <algorithm> header to rpi_stream.cpp as it is
needed for the std::find_if() function.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
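A minimal sketch of the kind of buffer cache this describes, using
simplified stand-in types rather than the real RPi::Stream and mapped
buffer classes (all names here are illustrative assumptions); it also
shows the std::find_if() usage that motivates the <algorithm> include.

#include <algorithm>
#include <cstddef>
#include <map>
#include <optional>
#include <set>

/* Simplified stand-in for a cached, mmapped buffer. */
struct MappedBuffer {
    void *memory;
    std::size_t length;
};

class MappedBufferCache
{
public:
    void add(unsigned int id, MappedBuffer buffer)
    {
        buffers_[id] = buffer;
    }

    /* Return the mapped buffer for a given buffer id, if it exists. */
    std::optional<MappedBuffer> getBuffer(unsigned int id) const
    {
        auto it = buffers_.find(id);
        if (it == buffers_.end())
            return std::nullopt;
        return it->second;
    }

    /* Return any mapped buffer not already handed out to the caller. */
    std::optional<MappedBuffer> acquireBuffer()
    {
        auto it = std::find_if(buffers_.begin(), buffers_.end(),
                               [this](const auto &entry) {
                                   return !acquired_.count(entry.first);
                               });
        if (it == buffers_.end())
            return std::nullopt;

        acquired_.insert(it->first);
        return it->second;
    }

private:
    std::map<unsigned int, MappedBuffer> buffers_;
    std::set<unsigned int> acquired_;
};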
|
|
The Python bindings layer has to parse the libcamera controls to ensure
that they are converted to suitable names for the Python layer.
Part of this strips common prefixes from control names; however, the
SceneFlicker control would end up with an illegal name if processed in
the same way as the other controls.
The SceneFlicker control has now been removed as part of the
introduction of the AeFlickerMode and AeFlickerPeriod controls.
Remove the workaround in the python layer.
Fixes: 6fdbf3f38c31 ("libcamera: controls: Add controls for AEC/AGC flicker avoidance")
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
We provide access to the various fields of the new SensorConfiguration
class. The class also needs a constructor so that Python applications
can make one and put it into the CameraConfiguration.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
libcamera/internal/media_device.h includes linux/media.h already.
Signed-off-by: Andrey Konovalov <andrey.konovalov@linaro.org>
Suggested-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Mattijs Korpershoek <mkorpershoek@baylibre.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The description of ConverterFactoryBase::registerType() referred to
a converter factory as "converter class" and "converter". Fix that.
Also make the descriptions of ConverterFactoryBase::compatibles() and
ConverterFactoryBase::create() a bit more specific.
Signed-off-by: Andrey Konovalov <andrey.konovalov@linaro.org>
Reviewed-by: Mattijs Korpershoek <mkorpershoek@baylibre.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
We avoid skipping the IPA algorithms while frameCount_ is less than
dropFrameCount_, as these frames will not be sent to the application.
When the two values are equal, the frame is the first one the
application will receive, so again we must avoid skipping the
algorithms. The test here must therefore exclude the case of equality.
Fixes: 51533fecae8d ("ipa: rpi: Fix frame count logic when running algorithms")
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
This commit simplifies the validate() and configure() calls in the
pipeline handler in a number of ways:
- Determine the V4L2DeviceFormat structure fields for all streams
in validate(), cache them and reuse them in configure() instead of
re-generating this structure multiple times.
- Remove setting a default pixel format in validate(); this code path
will not be used.
- Use the recently added updateStreamConfig() and toV4L2DeviceFormat()
helpers to populate fields in the V4L2DeviceFormat and StreamConfiguration
structures to reduce code duplication.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Switch to XRGB8888 as the default output format for the Viewfinder
role; this more accurately describes the ISP hardware output and is
what the Raspberry Pi hardware accepts.
Switch to YUV420 as the default output format for everything else, as
this format is best supported by encoding (e.g. H.264, JPEG) sinks on
the Raspberry Pi platform.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
This commit simplifies the validate() and configure() calls in the
pipeline handler in a number of ways:
- Only pass the RPiCameraConfiguration structure into platformValidate()
and platformConfigure().
- Determine the V4L2DeviceFormat structure fields for all streams in
validate(), cache them and reuse them in configure() instead of
re-generating this structure multiple times.
- Use the recently added updateStreamConfig() and toV4L2DeviceFormat()
helpers to populate fields in the V4L2DeviceFormat and
StreamConfiguration structures to reduce code duplication.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Add a helper updateStreamConfig() that updates the format-related fields
in a StreamConfiguration from a given V4L2DeviceFormat structure.
Add an overload of the toV4L2DeviceFormat() helper that returns a
V4L2DeviceFormat structure populated from the format-related fields of
a StreamConfiguration.
Both helper functions will be used in a future commit to simplify the
Raspberry Pi pipeline handler configuration/validation code.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
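A rough sketch of the shape of these two helpers, using simplified
stand-in structures instead of the real StreamConfiguration and
V4L2DeviceFormat types (field names here are assumptions for
illustration, not the libcamera definitions).

#include <cstdint>

struct Size {
    unsigned int width;
    unsigned int height;
};

/* Simplified stand-ins for the real libcamera structures. */
struct StreamConfigurationLike {
    uint32_t pixelFormat;   /* fourcc-style identifier */
    Size size;
    unsigned int stride;
    unsigned int frameSize;
};

struct V4L2DeviceFormatLike {
    uint32_t fourcc;
    Size size;
    struct {
        unsigned int bpl;   /* bytes per line */
        unsigned int size;  /* plane size in bytes */
    } planes[3];
    unsigned int planesCount;
};

/* Populate a V4L2 device format from the stream configuration. */
V4L2DeviceFormatLike toV4L2DeviceFormat(const StreamConfigurationLike &cfg)
{
    V4L2DeviceFormatLike fmt{};
    fmt.fourcc = cfg.pixelFormat;
    fmt.size = cfg.size;
    fmt.planes[0].bpl = cfg.stride;
    fmt.planes[0].size = cfg.frameSize;
    fmt.planesCount = 1;
    return fmt;
}

/* Write back whatever the device actually accepted. */
void updateStreamConfig(StreamConfigurationLike &cfg,
                        const V4L2DeviceFormatLike &fmt)
{
    cfg.pixelFormat = fmt.fourcc;
    cfg.size = fmt.size;
    cfg.stride = fmt.planes[0].bpl;
    cfg.frameSize = fmt.planes[0].size;
}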
|
|
Currently, the stream configuration is stored in two vectors, rawStreams
and outStreams, for convenience. However, these vectors are constructed
in both platformValidate() and platformConfigure().
This change caches these vectors in the RPiCameraConfiguration class so
that they are constructed only once, in platformValidate().
Pass a pointer to the current configuration to platformValidate() and
platformConfigure() so that they can access the stream vectors.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Move the existing isRaw()/isYuv()/isRgb() helpers into static functions
of PipelineHandlerBase. This will allow them to be shared with the
derived pipeline handler classes.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The closing line of a comment block was aligned with spaces and not
tabs. Fix it.
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Propagate any changes to the format stride made by platformValidate().
The stride value may be adjusted for performance reasons.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Handle the SensorConfiguration provided by the application in the
pipeline validate() and configure() call chains.
During validation, first make sure the SensorConfiguration is valid,
then use it to compute the sensor format.
For the VC4 platform, where the RAW stream follows the sensor's
configuration, adjust the RAW stream configuration to match the sensor
configuration.
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Add a member function to the CameraSensor class to apply a full
configuration to the sensor.
The configuration shall be fully populated and shall be applied to the
sensor without modifications.
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Introduce SensorConfiguration in the libcamera API.
The SensorConfiguration is part of the CameraConfiguration class
and allows applications to control the sensor settings.
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
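A sketch of how an application might fill in a SensorConfiguration,
assuming the usual generateConfiguration()/validate()/configure() flow;
treat the member names used here as assumptions drawn from this series
rather than a verified API reference.

#include <memory>

#include <libcamera/libcamera.h>

using namespace libcamera;

/* Sketch: ask for a 10-bit sensor mode producing a 1920x1080 output. */
void configureSensor(std::shared_ptr<Camera> camera)
{
    std::unique_ptr<CameraConfiguration> config =
        camera->generateConfiguration({ StreamRole::Viewfinder });

    SensorConfiguration sensorConfig;
    sensorConfig.bitDepth = 10;
    sensorConfig.outputSize = { 1920, 1080 };
    config->sensorConfig = sensorConfig;

    if (config->validate() == CameraConfiguration::Invalid)
        return;

    camera->configure(config.get());
}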
|
|
The IPA stores a list of the last 10 frame lengths applied to the
sensor for determining the timeout to use. This list gets reset in
start(), but there is a path through the code that accesses it in
configure(), which happens earlier, causing a logic error.
Fix this by constructing the list with 10 initial values of 0s.
Bug: https://github.com/raspberrypi/libcamera/issues/64
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
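A small sketch of the idea, using std::chrono durations in place of
libcamera's duration type (the container name and history length here
are assumptions): pre-filling the history means it can be read safely
before the first frame arrives.

#include <chrono>
#include <cstddef>
#include <vector>

using namespace std::chrono_literals;

/* History of recent frame lengths, pre-filled with zeros so that code
 * running before the first frame (e.g. configure()) can read it safely. */
static constexpr std::size_t kFrameLengthHistory = 10;
std::vector<std::chrono::microseconds> frameLengths(kFrameLengthHistory, 0us);

/* On each new frame, drop the oldest entry and append the latest value. */
void recordFrameLength(std::chrono::microseconds frameLength)
{
    frameLengths.erase(frameLengths.begin());
    frameLengths.push_back(frameLength);
}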
|
|
The frame counter test that determines whether we run the IPA
algorithms has a logic bug where it treats dropFrameCount_ and
mistrustCount_ as frame numbers rather than counts of frames (which
they are). The implication is that startup convergence and initial
settings take one extra frame to apply.
Fix this.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Tested-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
This format was defined with the V4L2_PIX_FMT_YUV444M fourcc instead of
the correct V4L2_PIX_FMT_YVU444M fourcc.
Fixes: 3b9fe4ae996b ("libcamera: formats: Add YUV444 and YVU444 pixel formats")
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The Android CameraDevice class adds a sourceStream for each Mapped
stream requested by the framework.
When mapping multiple framework streams to the same sourceStream, the
implementation of CameraDevice::processCaptureRequest() wrongly erases
the just-added sourceStream from the list of streams to be requested
from libcamera.
Fix this by adding the stream instead of erasing it.
Fixes: 7ea83eba0df6 ("android: camera_device: Postpone mapped streams handling")
Signed-off-by: Harvey Yang <chenghaoyang@chromium.org>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
|
|
If JSON file parsing fails due to a malformed file, the root pointer
will be null. This was not checked, and caused a segfault when the
pointer was used to retrieve the version key.
Fix this by bailing out early if the parser returns a null pointer.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
|
|
Whenever we run Agc::process(), we store the most recent total
exposure requested for each channel.
With these values we can apply the channel constraints after
time-filtering the requested total exposure, but before working out
how much digital gain is needed.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
|
|
A channel constraint is somewhat similar to the upper/lower bound
constraints that we use elsewhere, but these constraints apply between
multiple AGC channels. For example, it lets you say things like "don't
let the channel 1 total exposure be more than 8x that of channel 0",
and so on. By using both an upper and lower bound constraint, you
could fix one AGC channel always to be a fixed ratio of another.
Also read a vector of them (if present) when loading the tuning file.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
|
|
The switchMode, prepare and process methods are updated to implement
multi-channel AGC correctly:
* switchMode now invokes switchMode on all the channels (whether
active or not).
* prepare must find what channel the current frame is, and run on
behalf of that channel.
* process updates the most recent DeviceStatus and statistics for the
channel of the frame that has just arrived, but generates updated
values working through the active channels in round-robin fashion.
One minor detail in process is that we don't want to change the
DeviceStatus metadata of the current frame, so we now pass this to the
AgcChannel's process method, rather than letting it find the
DeviceStatus in the metadata.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
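The round-robin part can be pictured with a tiny sketch (names assumed;
the real logic lives in the Agc wrapper class).

#include <vector>

/* Return the channel whose values should be recomputed for this frame,
 * cycling through the active channels in round-robin order. */
unsigned int nextActiveChannel(const std::vector<unsigned int> &activeChannels,
                               unsigned int &roundRobinIndex)
{
    unsigned int channel = activeChannels[roundRobinIndex];
    roundRobinIndex = (roundRobinIndex + 1) % activeChannels.size();
    return channel;
}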
|
|
This commit does the basic reorganisation of the code in order to
implement multi-channel AGC. The main changes are:
* The previous Agc class (in agc.cpp) has become the AgcChannel class
in (agc_channel.cpp).
* A new Agc class is introduced which is a wrapper round a number of
AgcChannels.
* The basic plumbing from ipa_base.cpp to Agc is updated to include a
channel number. All the existing controls are hardwired to talk
directly to channel 0.
There are a couple of limitations which we expect to apply to
multi-channel AGC. We're not allowing different frame durations to be
applied to the channels, nor are we allowing separate metering
modes. To be fair, supporting these things is not impossible, but
there are reasons why it may be tricky so they remain "TBD" for now.
This patch only includes the basic reorganisation and plumbing. It
does not yet update the important methods (switchMode, prepare and
process) to implement multi-channel AGC properly. This will appear in
a subsequent commit. For now, these functions are hard-coded just to
use channel 0, thereby preserving the existing behaviour.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
|
|
Add a new helper function Histogram::interBinMean() that essentially
replaces the existing Histogram::interQuantileMean() logic but operates
on bins instead.
Rework interQuantileMean() to call into interBinMean() with the
appropriate conversion from quantiles to bins.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
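To show how the two helpers relate, here is a much-simplified,
self-contained histogram sketch; the real class lives in the Raspberry
Pi IPA and the arithmetic here is only an approximation of it. The key
point is that interQuantileMean() reduces to a quantile-to-bin
conversion followed by a call to interBinMean().

#include <algorithm>
#include <cmath>
#include <vector>

class Histogram
{
public:
    explicit Histogram(const std::vector<double> &bins)
        : cumulative_{ 0.0 }
    {
        for (double b : bins)
            cumulative_.push_back(cumulative_.back() + b);
    }

    double total() const { return cumulative_.back(); }

    /* Fractional bin position below which the given quantile of pixels lies. */
    double quantile(double q) const
    {
        double target = q * total();
        unsigned int bin = 0;
        while (bin + 2 < cumulative_.size() && cumulative_[bin + 1] < target)
            bin++;
        double freq = cumulative_[bin + 1] - cumulative_[bin];
        double frac = freq ? (target - cumulative_[bin]) / freq : 0.0;
        return bin + frac;
    }

    /* Mean bin position of the pixels between two fractional bin positions. */
    double interBinMean(double lowBin, double highBin) const
    {
        double sumBinFreq = 0.0, sumFreq = 0.0;
        for (double next = std::floor(lowBin) + 1.0; next <= std::ceil(highBin);
             lowBin = next, next += 1.0) {
            unsigned int bin = static_cast<unsigned int>(std::floor(lowBin));
            double freq = (cumulative_[bin + 1] - cumulative_[bin]) *
                          (std::min(next, highBin) - lowBin);
            sumBinFreq += bin * freq;
            sumFreq += freq;
        }
        /* Add 0.5 so the result refers to bin mid-points. */
        return sumFreq ? sumBinFreq / sumFreq + 0.5 : 0.0;
    }

    /* The quantile-based mean is now just a conversion plus interBinMean(). */
    double interQuantileMean(double qLo, double qHi) const
    {
        return interBinMean(quantile(qLo), quantile(qHi));
    }

private:
    std::vector<double> cumulative_;
};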
|
|
StatisticsPtr is a shared pointer, so the use of std::make_unique to
create it was a bit confusing. Use std::make_shared instead.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The Agc::process() function returns an AgcStatus object in the
metadata as before, but Agc::prepare() is changed to return the values
it computes in a separate AgcPrepareStatus object (under the new tag
"agc.prepare_status").
The "digitalGain" and "locked" fields are moved from AgcStatus to
AgcPrepareStatus.
This will be useful going forward as we can be more flexible about the
order in which prepare() and process() are called, without them
trampling on each other's results.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
We now time-filter the exposure before sorting out how much digital
gain is required. This is actually a little more natural and
simplifies the code. It also prepares us for some future work where
this arrangement will be helpful.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
prepare() doesn't use the AWB status, so fetching it in process() is
probably better. This change is preparatory to other changes, where we
may find ourselves calling process() without having called prepare()
previously.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Replace the buffer id generation in RPi::Stream with a simple integer
counter since ids don't get recycled any more.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Since we don't distinguish between externally and internally allocated
dma bufs, rename this function to setExportedBuffer() to be clearer
about its purpose.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
There is no need to distinguish between dma bufs allocated outside of
libcamera and internally allocated buffers. As such, remove all the
special case handling of such buffers.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Hardcode the maximum number of buffers imported to the V4L2 video device
to 32. This has only the minor disadvantage of over-allocating cache
slots and V4L2 buffer indexes, but allows more headroom for using dma
buffers allocated from outside of libcamera.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
For compressed formats, the v4l2_pix_format.bytesperline value will be
zero, as documented in the kernel. Since we set the stride to
v4l2_pix_format.bytesperline, document the case where it is expected
to be zero (i.e. when the format is compressed).
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The imx290 produces a single unusable frame on startup and mode switch.
This is signalled to the IPA in the mode switch case, but not the
startup case. Fix this.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Don't make an unnecessary call to toV4L2DeviceFormat() from validate()
to get a V4L2DeviceFormat. Instead, the conversion can happen directly
from the RAW stream PixelFormat.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
|
|
Increase the maximum list size to 2000 elements. This allows, for
example, larger lens shading config structures to be parsed correctly
without throwing any errors.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The FocusFoM metadata was no longer being reported back because the
"focus.status" metadata was never being created.
Additionally, the scaling of the focus FoMs was over-zealous, rounding
just about everything down to zero.
Fixes: ac7511dc4c59 ("ipa: raspberrypi: Generalise the focus reporting code")
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Altered the color matrices in the tuning files for various cameras to
make them more color-accurate.
Signed-off-by: Ben Benson <ben.benson@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
|
|
We handle the flicker modes by passing the correct period to the
AEC/AGC algorithm, which already contains the necessary code.
The "Auto" mode, as well as reporting the detected flicker period via
the "AeFlickerDetected" metadata, is unsupported for now.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
|
|
Flicker is the term used to describe brightness banding or oscillation
of images, typically caused by artificial lighting driven by a 50 or
60 Hz mains supply. We add three controls intended to be used by
AEC/AGC algorithms:
AeFlickerMode to enable flicker avoidance.
AeFlickerPeriod to set the flicker period "manually".
AeFlickerDetected to report any flicker that is currently detected.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
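For example, an application targeting 50 Hz mains (100 Hz flicker,
i.e. a 10000 us period) might set the controls roughly as below; this
is a sketch built on the control names introduced here, not code taken
from the patch or any particular application.

#include <libcamera/control_ids.h>
#include <libcamera/controls.h>
#include <libcamera/request.h>

void setFlickerAvoidance(libcamera::Request *request)
{
    using namespace libcamera;

    ControlList &ctrls = request->controls();

    /* Avoid banding from 50 Hz mains: the flicker period is 10000 us. */
    ctrls.set(controls::AeFlickerMode, controls::FlickerManual);
    ctrls.set(controls::AeFlickerPeriod, 10000);
}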
|
|
The format to be applied to the sensor is selected by two criteria: the
desired output size and the bit depth. As the selection depends on the
presence of a RAW stream, and the stream configuration is handled in
validate(), there is no need to re-compute the format in configure().
Centralize the computation of the sensor format in validate() and remove
it from configure().
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
|
|
The findBestFormat() helper operates on the list of sensor formats,
which is owned by the CameraData class. Move the function to that class
as well to:
1) Avoid passing the list of formats to the function
2) Remove a static helper in favour of a class member function
3) Allow subclasses with access to CameraData to call the function
Also move the scoreFormat() helper to the CameraData class.
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
|
|
populateSensorFormats() is a static helper that is called from a single
place and performs a simple loop over the sensor's camera formats.
Remove it and inline it into the caller, removing one static helper from
the pipeline_base.cpp file.
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The CameraManager::get(dev_t) implementation was provided only for the
V4L2 Adaptation layer. This has now been replaced with the use of the
public SystemDevices property.
Remove the deprecated function entirely, along with the camerasByDevnum_
map which was only used to support this functionality.
This is a clear (and intentional) breakage in both the API and ABI.
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The CameraManager->get(dev_t) helper was implemented only to support the
V4L2 adaptation layer, and has been deprecated now that a new camera
property, SystemDevices, has been introduced.
Rework the implementation of getCameraIndex() to use the SystemDevices
property and remove the reliance on the now-deprecated call.
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Handling ioctls within applications is often wrapped with a helper such
as xioctl. Unfortunately, many instances of xioctl do not handle the
size of the ioctl request correctly.
This leads to incorrect sign extension of ioctls which have bit 31 set,
and can cause values to be passed into libcamera's V4L2 adaptation
layer which no longer represent the true ioctl code.
Match the implementation of the Linux kernel and ensure that only 32
bits of the ioctl request are used by assigning the request to an
unsigned int.
Link: https://github.com/Motion-Project/motion/discussions/1636
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
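To see why the width of the intermediate matters, consider
VIDIOC_QUERYCAP, whose request code has bit 31 set. On a typical 64-bit
platform a signed int intermediate sign-extends the value when it is
converted to ioctl()'s unsigned long request argument; keeping it in an
unsigned int preserves the 32-bit code. This is a small standalone
demo, not code from the patch.

#include <cstdio>
#include <linux/videodev2.h>

int main()
{
    /* A buggy xioctl() wrapper typically stores the request in an int. */
    int viaInt = VIDIOC_QUERYCAP;           /* bit 31 set -> negative */
    unsigned int viaUint = VIDIOC_QUERYCAP; /* the fix: keep it unsigned */

    /* Implicit conversion to ioctl()'s unsigned long request argument. */
    unsigned long fromInt = viaInt;   /* sign-extended on 64-bit */
    unsigned long fromUint = viaUint; /* zero-extended, value preserved */

    std::printf("original: %#lx\n", (unsigned long)VIDIOC_QUERYCAP);
    std::printf("via int:  %#lx\n", fromInt);
    std::printf("via uint: %#lx\n", fromUint);

    return fromUint == (unsigned long)VIDIOC_QUERYCAP ? 0 : 1;
}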
|