Age | Commit message | Author |
|
The statistics buffer 'ipu3_uapi_awb_raw_buffer' stores the ImgU
calculation results in a buffer whose lines are aligned horizontally to
a multiple of 4 cells. The AWB loop should take this alignment into
account and add the proper offset between lines to avoid any staircase
effect.
It is no longer required to pass the grid configuration context to the
private functions called from process(), which simplifies the code
flow.
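For illustration, the indexing described above amounts to something
like the following sketch (function and variable names are assumptions,
not the exact code):

    /*
     * Sketch only: the stats buffer pads each line of cells to a
     * multiple of 4, so cells must be indexed with the padded stride
     * rather than the grid width.
     */
    unsigned int cellIndex(unsigned int x, unsigned int y,
                           unsigned int gridWidth)
    {
            const unsigned int stride = (gridWidth + 3) & ~3U;

            return y * stride + x;
    }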
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The gains have a precision of u3.13 and a range of [0, 8[, which means
that a gain multiplier of 1.0 is represented by the value 8192 in the
ImgU. Correct the gains, as this was misunderstood in the first place.
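As a sketch of the fixed-point encoding described above (the helper
name and the clamping are assumptions):

    #include <algorithm>
    #include <cstdint>

    /* u3.13: 3 integer bits, 13 fractional bits, so 1.0 maps to 8192. */
    uint16_t gainToU3_13(double gain)
    {
            /* The largest representable code is 65535, just below 8.0. */
            return static_cast<uint16_t>(std::clamp(gain * 8192.0, 0.0, 65535.0));
    }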
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The algorithm uses the statistics of a cell only if there are not too
many saturated pixels in it. The grey world algorithm works fine as
long as the number of outliers stays limited.
Consider a zone to be valid when at least 80% of its cells are
unsaturated. This value could very well be made configurable to make
the algorithm more or less tolerant.
While at it, implement this in a configure() call as it will not change
during execution, and cache the cellsPerZone value, estimated with
std::round() now that <cmath> is used.
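As a rough sketch of the rule described above (the constant and the
names are assumptions):

    /* A zone is valid when at least 80% of its cells are unsaturated. */
    static constexpr double kValidCellsRatio = 0.8;

    bool zoneIsValid(unsigned int unsaturatedCells, unsigned int cellsPerZone)
    {
            return unsaturatedCells >= cellsPerZone * kValidCellsRatio;
    }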
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The loops over the width and height of the image when calculating the
BDS grid parameters are nested, but they're actually independent. Split
them to reduce the complexity.
While at it, split the constants out into documented constant
expressions for the grid sizes.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Until now, the limits used to calculate the grid based on the Bayer
Down Scaler configuration were taken from the kernel documentation [0].
While testing and studying the format of the ImgU statistics, it
appeared that the ones defined in CrOS [1] are the correct ones. Use
those.
[0] https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/pixfmt-meta-intel-ipu3.html?highlight=v4l2_meta_fmt_ipu3_params#intel-ipu3-imgu-uapi-data-types
[1] https://chromium.googlesource.com/chromiumos/platform/arc-camera/+/refs/heads/master/hal/intel/include/ia_imaging/awb_public.h
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The variables mix the terms cell, region and zone. This can confuse
the reader and make the algorithm more difficult to follow. Rename the
local variables to be consistent with their definitions:
- Cells are defined in Pixels
- Zones are defined in Cells
There is no "region" as such, so replace it with the correct term.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The pixel component sums for the Accumulator are inconsistent with other
similar structures such as the IPAFrameContext::awb::gains. Group the
red, green, and blue sums together in a struct and store them as
uint64_t to reduce potential architectural differences.
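The resulting grouping is roughly the following (a sketch, not
necessarily the exact definition):

    #include <cstdint>

    struct Accumulator {
            unsigned int counted;
            struct {
                    uint64_t red;
                    uint64_t green;
                    uint64_t blue;
            } sum;
    };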
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The IspStatsRegion structure was introduced as an attempt to prepare for
a generic AWB algorithm structure. The structure name by itself is not
explicit and it is too optimistic to try and make a generic one for now.
Its role is to accumulate the pixels in a given zone. Rename it to
accumulator, and remove the uncounted field at the same time. It is
always possible to know how many pixels are not relevant for the
algorithm by calculating total - counted. The uncounted field was only
declared, never used. Amend the documentation accordingly.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The structure Ipu3AwbCell describes the AWB stats layout on the kernel
side. It will be needed by the AGC algorithm to be introduced later, so
make it visible in ipa::ipu3::algorithms instead of keeping it private
to the AWB class.
The IspStatsRegion structure will be needed by AGC too, so move it to
the same namespace as well.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The IPA::configure() function has an IPAConfigInfo parameter which
contains a map of numerical indexes to ControlInfoMap instances.
This is a leftover of the old IPA protocol, where it was not possible
to specify a rich interface as it is today, and each entity's
ControlInfoMap was indexed by a numerical id and stored in a map.
Now that the IPA interface allows specifying parameters by name, drop
the map and send the sensor's control info map only.
If more ControlInfoMap instances need to be shared with the IPA in the
future, a new parameter can be added to IPAConfigInfo.
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The incoming params buffer may contain uninitialised data, or the
parameters of previously queued frames. Clearing the entire buffer
may be an expensive operation, and the kernel will only read from
structures which have their associated use-flag set.
It is the responsibility of the algorithms to set the use flags
accordingly for any data structure they update during prepare().
Clear the use flags of the parameter buffer before passing the buffer
to the algorithms during their prepare() operations.
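In practice this amounts to something like the following at the start
of the fill-params path (a minimal sketch; the exact member layout is
defined by the kernel's intel-ipu3.h UAPI):

    #include <linux/intel-ipu3.h>

    void clearUseFlags(struct ipu3_uapi_params *params)
    {
            /*
             * Reset all use flags; each algorithm sets the flags for
             * the structures it updates during prepare().
             */
            params->use = {};
    }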
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Tested-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
"weighted", derived from the verb "to weight", comes from Middle English
weight, weiȝte, weght, wight, from Old English wiht, ġewiht, from
Proto-Germanic *wihtiz, from Proto-Indo-European *weǵʰ-. In none of
those does the t come before the h.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
|
|
MappedFrameBuffer::maps() returns planes_. Rename the function to
planes() to match.
Signed-off-by: Hirokazu Honda <hiroh@chromium.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
An IPABuffer is represented by a FrameBuffer, and FrameBuffer::Plane
now has an offset. Use the offset when mapping the IPABuffer, and
represent and manage the mapped IPABuffer as a MappedFrameBuffer.
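A minimal sketch of mapping a plane with its offset (the helper is
hypothetical; only the mmap() call reflects the usual pattern):

    #include <cstddef>
    #include <cstdint>
    #include <sys/mman.h>

    /*
     * Map from the start of the dmabuf and apply the plane offset
     * afterwards, as the offset is not guaranteed to be page aligned.
     */
    uint8_t *mapPlane(int fd, size_t offset, size_t length)
    {
            void *mem = mmap(nullptr, offset + length,
                             PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (mem == MAP_FAILED)
                    return nullptr;

            return static_cast<uint8_t *>(mem) + offset;
    }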
Signed-off-by: Hirokazu Honda <hiroh@chromium.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The IPASessionConfiguration now stores the grid configuration. Use it
in the prepare() and process() calls in AWB and pass it as a reference
to the private functions when needed.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The IPASessionConfiguration now stores the grid configuration. Use it
in the process() call in AGC and pass it as a reference to the private
functions when needed.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Now that the interface is properly used by the AGC class, move it into
ipa::ipu3::algorithms and let the loops do the calls.
As exposure_ and gain_ now need to be exchanged through the
FrameContext, use the calculated values in the setControls() function
to ease reading.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Now that the interface is properly used by the AWB class, move it into
ipa::ipu3::algorithms and let the loops do the calls.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
In preparation for using the AGC through the new algorithm interfaces,
convert the existing code to use the new function types.
Now that the process call is rewritten, re-enable the compiler flag to
warn when a function declaration hides virtual functions from a base class
(-Woverloaded-virtual).
We never use converged_ so remove its declaration. The controls may
not need to be updated at each call, but that decision should be made
on the context side, for instance through a lock status in the Agc
structure, and not by a specific call.
As the params_ local variable is not useful anymore, remove it here
too.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
When the stats are received, pass them with the context to the existing
AWB algorithm. IPAFrameContext now has a new structure to store the
gains calculated by the AWB algorithm.
When an EventFillParams event is received, call prepare() and set the new
gains accordingly in the params structure.
There is no longer any need for the IPU3Awb::initialise() function, as
the params are always set in prepare().
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Introduce a new algorithm to manage tone mapping on the IPU3.
The initial algorithm configures the gamma contrast curve, which moves
the implementation out of AWB for simplicity. As it is initialised with
a default gamma value of 1.1, there is no need to use the default table
at initialisation anymore.
This demonstrates how to use the process() call when the EventStatReady
event comes in. The function calculates the LUT in the context of a
frame, and when prepare() is called, the parameters are filled with the
updated values.
AGC is modified to take the new process interface into account.
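As a sketch of the kind of LUT computation involved (the table size and
output range are assumptions, not the actual ImgU values):

    #include <array>
    #include <cmath>
    #include <cstdint>

    /* Fill a gamma correction LUT; gamma defaults to 1.1 as noted above. */
    std::array<uint16_t, 256> computeGammaLut(double gamma)
    {
            constexpr double kMaxOutput = 8191.0; /* assumed output range */
            std::array<uint16_t, 256> lut;

            for (unsigned int i = 0; i < lut.size(); i++)
                    lut[i] = static_cast<uint16_t>(
                            std::pow(i / 255.0, 1.0 / gamma) * kMaxOutput);

            return lut;
    }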
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Implement a new modular framework for algorithms with a common context
structure that is passed to each algorithm through a common API.
This patch:
- removes all the local references from IPAIPU3 and uses IPAContext
  instead
- implements the list of algorithm pointers and loops over it in the
  configure() call
- loops over the algorithms in fillParams() to call prepare() on each
  of them
- loops over the algorithms in prepareStats() to call process() on each
  of them
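Schematically, the prepare()/process() loops look like this (a sketch
with assumed member names, not the exact code):

    /* In fillParams(): give each algorithm a chance to fill the params. */
    for (const auto &algo : algorithms_)
            algo->prepare(context_, params);

    /* In prepareStats(): let each algorithm process the new statistics. */
    for (const auto &algo : algorithms_)
            algo->process(context_, stats);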
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Introduce three functions in the Algorithm class to manage algorithms:
- configure(), called only when the IPA is configured
- prepare(), called on the EventFillParams event for each frame, when
  the request is queued
- process(), called on the EventStatReady event at each frame
  completion, when the statistics have been generated.
The existing AGC implementation already has a function named process(),
though it has different arguments. Adding the new virtual process()
interface causes a compiler warning due to the AGC implementation
overloading a virtual function, even though the overload can be resolved
correctly.
Temporarily disable the warning in this commit to maintain bisection
until the AGC is converted to the new interface.
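The resulting interface is along these lines (the signatures are
illustrative, not the exact declarations):

    class Algorithm
    {
    public:
            virtual ~Algorithm() = default;

            /* Called once when the IPA is configured. */
            virtual int configure(IPAContext &context,
                                  const IPAConfigInfo &configInfo)
            {
                    return 0;
            }

            /* Called on EventFillParams, when a request is queued. */
            virtual void prepare(IPAContext &context,
                                 ipu3_uapi_params *params)
            {
            }

            /* Called on EventStatReady, when the statistics are ready. */
            virtual void process(IPAContext &context,
                                 const ipu3_uapi_stats_3a *stats)
            {
            }
    };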
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
An increasing amount of data and information needs to be shared between
the components that build up to implement image processing algorithms.
Create a context structure which will allow us to work towards calling
algorithms in a modular way, and sharing information between the modules.
The IPA context is made of a global context set at configure time
(IPASessionConfiguration) and a per-frame context (IPAFrameContext)
used while streaming.
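Roughly, the shape is (a sketch, not the exact contents):

    /* Session-wide settings, filled in at configure() time. */
    struct IPASessionConfiguration {
    };

    /* Per-frame state, updated while streaming. */
    struct IPAFrameContext {
    };

    struct IPAContext {
            IPASessionConfiguration configuration;
            IPAFrameContext frameContext;
    };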
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The abstract Algorithm class was originally placed in libipa as an
attempt to define a generic algorithm container. This was a little
optimistic and pushed a bit too far, too early.
Move the Algorithm class into the IPU3 which is the only user of the
class, as we adapt it to support modular algorithm components for the
IPU3.
Not documenting the namespace may cause issues with Doxygen in libipa.
The file libipa.cpp is thus created as an empty file for now, but we
can leverage it in the future to add more global libipa documentation,
and possibly code too.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The bcm2835-isp.h header is included with quotes rather than angle
brackets.
Quoted includes are reserved for internal headers, while the
linux/bcm2835-isp.h header is exported by the Linux kernel.
Fix the inclusion style.
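In other words, the header is now pulled in as a system header, along
these lines:

    #include <linux/bcm2835-isp.h>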
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Plumb buffer passing through the VIMC mojo interface.
VIMC does not have parameters or statistics buffers, but we can mimic
the typical case of passing IPA buffers from the pipeline handler to
the IPA using mock buffers. The mock IPA buffers are FrameBuffers which
are dmabuf-backed (in other words, mmap()able through MappedFrameBuffer
inside the IPA).
This commit shows:
- Passing the parameter buffer from the pipeline handler to
the IPA through functions defined in mojom interface.
- Passing request controls ControlList to the IPA.
Any tests using VIMC will now go through the IPA code paths. Any tests
running in isolated mode will help us test the IPA IPC code paths,
especially around the (de)serialization of data passed from pipeline
handlers to the IPA. Future IPA interface tests can simply extend the
vimc mojom interface to achieve/test a specific use case as required.
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The VIMC pipeline handler has dmabuf-backed mock FrameBuffers which
are specifically targeted at mimicking IPA buffers (parameters and
statistics). Map these mock buffers into the VIMC IPA to enable
exercising the IPA IPC code paths. This gives our test suite leverage
to test the IPA IPC code paths, which are common to various platforms.
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
As part of an effort to make the vimc IPA usable for testing, extend it
with a configure function. The configuration is currently ignored by the
IPA.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
"NOIR" modules are ones that have had the IR filters removed but are
otherwise identical. The same tuning can be used as for the regular
version except that the colour calibration supplied to the AWB
algorithm no longer works. Instead we need to switch the algorithm to
its basic "grey world" method.
Users with "NOIR" modules can switch to the matching "xxx_noir.json"
tuning file by using the LIBCAMERA_RPI_TUNING_FILE environment
variable.
Signed-off-by: David Plowman <david.plowman@raspberypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Tidy up the include directives in the IPU3 IPA module a bit.
In detail:
- ipu3.cpp is missing inclusions for:
std::abs from <cmath>
std::map from <map>
std::min/max from <algorithm>
std::numeric_limits from <limits>
std::unique_ptr from <memory>
std::vector from <vector>
and does not require <sys/mman.h>
- ipu3_agc has two unused includes in the header file and one in the
cpp file, and is missing <chrono> for std::literals::chrono_literals
- ipu3_awb is missing <algorithm> for std::sort and does not use
<numeric> or <unordered_map>
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
All the IPU3 Camera controls are currently initialized by the pipeline
handler which initializes them using the camera sensor configuration and
platform specific requirements.
However, some controls are better initialized by the IPA, which might,
for example, cap the exposure times and frame durations to the
constraints of its algorithm implementations.
Also, moving forward, the IPA should register controls to report its
capabilities, for example the ability to enable/disable 3A algorithms
on request.
Move the existing controls initialization to the IPA, by providing
the sensor configuration and its controls to the IPU3IPA::init()
function, which initializes controls and returns them to the pipeline
through an output parameter.
The existing controls initialization has been copied verbatim from the
pipeline handler to the IPA, apart from a few line break adjustments,
and the resulting Camera controls values are not changed.
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
Remove the need for callers to reference PROT_READ/PROT_WRITE directly
from <sys/mman.h> by instead exposing the Read/Write mapping options as
flags from the MappedFrameBuffer class itself.
While at it, include the <stdint.h> header, which is required for the
uint8_t used in the Plane.
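Usage then looks roughly like this (a sketch; the flag and accessor
names are assumptions):

    /* No need to reference PROT_READ/PROT_WRITE from <sys/mman.h>. */
    MappedFrameBuffer mapped(&buffer, MappedFrameBuffer::MapFlag::Read);
    if (!mapped.isValid())
            return;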
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The MappedFrameBuffer is a convenience feature which sits on top of the
FrameBuffer and facilitates mapping it to CPU-accessible memory with
mmap.
This implementation is internal and currently sits in the same internal
files as the internal FrameBuffer, thus exposing those internals to
users of the MappedFrameBuffer implementation.
Move the MappedFrameBuffer and MappedBuffer implementations to their
own files, and update the sources throughout accordingly.
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Usage of 'method' to refer to member functions comes from Java. The C++
standard uses the term 'function' only. Replace 'method' with 'function'
or 'member function' through the whole code base and documentation.
While at it, fix two typos (s/backeng/backend/).
The BoundMethod and Object::invokeMethod() are left as-is here, and will
be addressed separately.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Acked-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
When the sensor is switched to a mode with a different sensitivity,
the target exposure values need to be adjusted proportionately to
maintain the same image brightness.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
We use the CamHelper class to initialise it to the usual value of 1.
The CamHelper's GetModeSensitivity method can be overridden to
implement a different behaviour for sensors that require it.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Acked-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Add type safety by turning the MapFlag and OpenModeFlag enums into
enum classes.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
This commit adds a tuning file for the 12MP imx378 sensor. The sensor
actually shares the same driver (and CamHelper) as the imx477 so only
a new tuning file is required. The default choice of imx477.json can
be overridden by pointing LIBCAMERA_RPI_TUNING_FILE at a version of
the new imx378.json file.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Return controls::FrameDuration through the per-frame Request metadata.
The frame duration is obtained either from the value in
DelayedControls, or (preferably) from the value parsed from the
embedded data buffer.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Add a CameraSensorHelperOv8865 class. The gain coefficients are
gleaned from the datasheet; the lowest 7 bits are reported there as
fractional bits, so the real gain is val/128.
Signed-off-by: Daniel Scally <djrscally@gmail.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Extend the CameraSensorHelper factory with support for the IMX258
sensor found in the Nautilus Chromebook.
The values are read by manually tweaking the IMX258 kernel driver.
The IMX258 kernel driver hints that the sensor may be compatible
with the MIPI CCS specification, as the register set matches.
The values for analog gain constants are obtained by reading the
register indexes, corresponding to the analog gain constants, as
mentioned in MIPI CCS v1.1 specification.
The values have further been confirmed by Dave Stevenson as being
those specified in the datasheet.
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Some values for array sizes differ between v10 and v12, so set them
in init() and adjust the auto exposure algorithm to use the ae value
from there.
Signed-off-by: Heiko Stuebner <heiko.stuebner@theobroma-systems.com>
Reviewed-by: Dafna Hirschfeld <dafna.hirschfeld@collabora.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
The datasheet states that the low 7 bits are fraction bits.
real_gain = GainCode/128
For example, 0x080 is 1x gain, 0x100 is 2x gain.
It means that we should have m0=1 and c1=128.
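For reference, this follows the linear gain model referenced by the
constants above (a sketch of the MIPI CCS style form, as far as I
understand it):

    /*
     * gain = (m0 * code + c0) / (m1 * code + c1)
     * With m0 = 1, c0 = 0, m1 = 0 and c1 = 128 this reduces to
     * gain = code / 128, so 0x080 is 1x gain and 0x100 is 2x gain.
     */
    double gainFromCode(double code)
    {
            return (1.0 * code + 0.0) / (0.0 * code + 128.0);
    }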
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Extend the CameraSensorHelper factory with support for an
OV13858 sensor as found in the Soraka Chromebook.
The datasheet states that low 7 bits are fraction bits, so the gain is
calculated as gainCode=128*gain.
According to the formula, it means m0=1 and c1=128.
m1 then has to be 0, and c0=0.
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The Metadata class defines a shared_ptr named MetadataPtr. It is not
used anywhere in the source code, so remove it.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
|
|
s/DefaultAnalogueGain/defaultAnalogueGain/
s/DefaultExposureTime/defaultExposureTime/
Change these for consistency with the other static const variables.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
With the recent change to allow long exposures on the imx477, the existing 100s
limit was not adequate.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Update the imx477 CamHelper to use long exposure modes if needed.
This is done by overriding the CamHelper::GetVBlanking function to return a
frame length (and vblank value) computed using a scaling factor when the value
would be larger than what the sensor register could otherwise hold.
CamHelperImx477::Prepare is also overridden to ensure that the "device.status"
metadata returns the right value if the long exposure scaling factor is used.
The scaling factor is unfortunately not returned back in metadata.
With the current imx477 driver, we can achieve a maximum exposure time of approx
127 seconds since the HBLANK control is read-only.
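Schematically, the factor selection works along these lines (a sketch
under the assumption of a power-of-two scale factor; the real limits
and factor selection live in the CamHelper):

    /*
     * Pick the smallest scale factor such that the scaled frame length
     * fits in the sensor register; the effective exposure is then
     * multiplied by this factor.
     */
    unsigned int longExposureFactor(unsigned int frameLength,
                                    unsigned int maxFrameLength)
    {
            unsigned int factor = 1;

            while (frameLength / factor > maxFrameLength)
                    factor *= 2;

            return factor;
    }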
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Store the frame length into the DeviceStatus struct. The value is extracted
from embedded data when available, or calculated from the VBLANK value passed
from DelayedControls otherwise.
Update imx477 and imx219 CamHelper classes to extract the frame length from the
embedded data buffer.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|