Age | Commit message | Author |
|
The terms "shutter" and "shutter speed" are used through libcamera to
mean "exposure time". This is confusing, both due to "speed" being used
as "time" while it should be the inverse (i.e. a maximum speed should
correspond to the minimum time), and due to "shutter speed" and
"exposure time" being used in different places with the same meaning.
To improve clarity of the code base and the documentation, use "exposure
time" consistently to replace "shutter speed".
This rename highlighted another vocabulary issue in libcamera. The
ExposureModeHelper::splitExposure() function used to document that it
splits "exposure time into shutter time and gain". It has been reworded
to "split exposure into exposure time and gain". That is not entirely
satisfactory, as "exposure" has a defined meaning in photography (see
https://en.wikipedia.org/wiki/Exposure_(photography)) that is not
expressed as a duration. This issue is left to be addressed separately.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Acked-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Use the centralised libipa helpers instead of open coding common
functions.
Signed-off-by: Daniel Scally <dan.scally@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Milan Zamazal <mzamazal@redhat.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The LSP autoformatter doesn't like some of the current formatting; let's
make it happier. Note that not all of its suggestions were accepted
because readability is preferred and adjusting .clang-format may not be
easy or possible.
Signed-off-by: Milan Zamazal <mzamazal@redhat.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The includes that are not used can be removed.
Signed-off-by: Milan Zamazal <mzamazal@redhat.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Source files in libcamera start with a comment block header, which
includes the file name and a one-line description of the file contents.
While the latter is useful to get a quick overview of the file contents
at a glance, the former is mostly a source of inconvenience. The name in
the comments can easily get out of sync with the file name when files
are renamed, and copy & paste during development has often led to
incorrect names being used to start with.
Readers of the source code are expected to know which file they're
looking at. Drop the file name from the header comment blocks in all
remaining locations that were not caught by the automated script as they
are out of sync with the file name.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Daniel Scally <dan.scally@ideasonboard.com>
|
|
Now that the IPU3's Agc is derived from MeanLuminanceAgc we can
delete all the unnecessary bespoke functions.
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Stefan Klug <stefan.klug@ideasonboard.com>
Signed-off-by: Daniel Scally <dan.scally@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
In preparation for switching to a derivation of AgcMeanLuminance, add
a function to parse and store the statistics for easy retrieval in an
overriding estimateLuminance() function.
Now that we have a MeanLuminanceAgc class that centralises our AEGC
algorithm, derive the IPU3's Agc class from it and plumb in the
necessary framework to enable it to be used. For simplicity's sake
this commit switches the algorithm to use the derived class, but
does not remove the bespoke functions at this time.
Reviewed-by: Stefan Klug <stefan.klug@ideasonboard.com>
Signed-off-by: Daniel Scally <dan.scally@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
As the sensor's analogue gain range is known, drop the arbitrary
maximum limit for the sensor analogue gain.
Signed-off-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Daniel Scally <dan.scally@ideasonboard.com>
Tested-by: Daniel Scally <dan.scally@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Fill the frame metadata in the AGC and AWB algorithm's prepare()
function. This removes the need to fill metadata manually in the IPA
module's processStatsBuffer() function.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Extend the Algorithm::process() function with a metadata control list,
to be filled by individual algorithms with frame metadata. Update the
rkisp1 and ipu3 IPA modules accordingly, and drop the dead code in the
IPARkISP1::prepareMetadata() function while at it.
This only creates the infrastructure, filling metadata in individual
algorithms will be handled separately.
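For orientation, the extended hook looks roughly as follows. This is a
simplified sketch rather than the exact libcamera signature; "StatsType"
stands in for the module-specific statistics structure.

    /* Sketch only: the metadata list is an output filled by each algorithm. */
    class Algorithm
    {
    public:
        virtual void process(IPAContext &context, const uint32_t frame,
                             IPAFrameContext &frameContext,
                             const StatsType *stats,
                             ControlList &metadata) = 0;
    };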
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
Pass the frame number of the current frame being processed.
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
Frame contexts will become the core component of IPA modules, always
available to functions of the algorithms. To indicate and prepare for
this, turn the frame context pointer passed to Algorithm::process() into
a reference.
The RkISP1 IPA module doesn't use frame contexts yet, so pass a dummy
context for now.
While at it, drop an unneeded [[maybe_unused]] from Agc::process() and
add a missing parameter documentation for the frameContext argument to
Awb::process().
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
To prepare for dynamic instantiation of algorithms from the tuning file,
register the algorithms with the Module class.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The doxygen '\todo' directive doesn't need to be followed by a colon,
yet a few stray occurrences have made their way in. Fix them.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
Instead of having one frame context constantly being updated,
this patch introduces per-frame IPAFrameContexts which are
stored in a ring buffer. Whenever a request is queued, a new
IPAFrameContext is created and inserted into the ring buffer.
The IPAFrameContext structure itself has been slightly extended
to store a frame id and a ControlList for incoming frame
controls (sent in by the application). The next step would be to
read and set these controls whenever the request is actually queued
to the hardware.
Since we now work with multiple IPAFrameContexts, Algorithm::process()
will actually take an IPAFrameContext pointer (as opposed to a nullptr
while preparing for this change).
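A minimal sketch of the idea, with hypothetical names (the real
container and fields in libipa may differ): contexts live in a
fixed-size array and are looked up by frame number modulo the depth.

    /* Hypothetical per-frame context ring buffer. */
    static constexpr uint32_t kContextRingSize = 32;

    struct IPAFrameContext {
        uint32_t frame;
        ControlList frameControls;
        struct {
            uint32_t exposure;  /* applied exposure, in lines */
            double gain;        /* applied analogue gain */
        } sensor;
    };

    std::array<IPAFrameContext, kContextRingSize> frameContexts_;

    IPAFrameContext &frameContext(uint32_t frame)
    {
        return frameContexts_[frame % kContextRingSize];
    }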
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Currently we have a single structure of IPAFrameContext but
subsequently, we shall have a ring buffer (or similar) container
to keep IPAFrameContext structures for each frame.
It would be a hassle for the IPA to look up the frame context required
by process() (since the contexts will reside in a ring buffer) on every
run. Hence, prepare the process() libipa template to accept a particular
IPAFrameContext early on.
For this patch, we pass the pointer as nullptr, so that the changes
compile and keep working as-is.
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Currently, IPAFrameContext consolidates the values computed by the
active state of the algorithms, along with the values applied on
the sensor.
Moving ahead, we want to have a frame context associated with each
incoming request (or frame to be captured). This shouldn't necessarily
be tied to "active state" of the algorithms hence:
- Rename current IPAFrameContext -> IPAActiveState
This will now reflect the latest active state of the algorithms and
has nothing to do with any frame-related ops/values.
- Re-instate IPAFrameContext with a sub-structure 'sensor' currently
storing the exposure and gain value.
Adapt the various accesses to the frame context to the new changes
described above.
Subsequently, the re-instated IPAFrameContext will be extended to
contain a frame number and ControlList to remember the incoming
request controls provided by the application. A ring-buffer will
be introduced to store these frame contexts for a certain number
of frames.
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The configure() function has a local configuration variable referencing
context.configuration for the purpose of shortening lines. Use it
instead of context.configuration in the remaining locations, and
constify it while at it as the configuration isn't meant to be modified.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The frame count is used to skip the gain and exposure filtering when
starting. It thus needs to be reset when configuring the algorithm, to
avoid slower convergence when stopping and restarting.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Instead of having a local cached value for line duration, store it in
the IPASessionConfiguration::sensor structure.
While at it, configure the default analogue gain and shutter speed to
controlled, fixed values.
The latter is set to 10ms as it will in most cases be close to the value
needed, making the AGC converge faster.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
When the current exposure value is calculated, it is cached and used by
filterExposure(). Use private filteredExposure_ and pass currentExposure
as a parameter.
In order to limit the use of filteredExposure_, return the value from
filterExposure().
While at it, remove a stale comment.
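The filter itself is a simple first-order low-pass. A sketch, assuming a
fixed smoothing speed (the real speed factor may vary with conditions):

    /* Illustrative only. */
    utils::Duration Agc::filterExposure(utils::Duration currentExposure)
    {
        const double speed = 0.2;

        if (filteredExposure_ == 0s)
            filteredExposure_ = currentExposure;
        else
            filteredExposure_ = speed * currentExposure
                              + (1.0 - speed) * filteredExposure_;

        return filteredExposure_;
    }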
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
Use the new utils::abs_diff() function where appropriate to replace
manual implementations.
While at it fix a header ordering issue in
src/libcamera/pipeline/raspberrypi/raspberrypi.cpp.
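For instance, assuming two unsigned values a and b, the open-coded
pattern and its replacement look like:

    /* Before: manual absolute difference, easy to get wrong with unsigned types. */
    unsigned int deltaManual = a > b ? a - b : b - a;

    /* After: centralised helper. */
    unsigned int deltaHelper = utils::abs_diff(a, b);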
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
The relative luminance is calculated using an iterative process to
account for saturation in the sensor, as multiplying pixels by a gain
doesn't increase the relative luminance by the same factor if some
regions are saturated. The relative luminance estimation doesn't apply
saturation, which produces a value that doesn't match what the sensor
will output, and defeats the point of the iterative process. Fix it.
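The essence of the fix, sketched with hypothetical field names, is to
clamp each gain-scaled channel average to the saturation level before
computing the luminance (8-bit statistics and Rec. 601 weights are
assumed here for illustration):

    /* Illustrative: saturate the scaled channel averages at 255. */
    double red = std::min(cell.redAvg * gain, 255.0);
    double green = std::min(cell.greenAvg * gain, 255.0);
    double blue = std::min(cell.blueAvg * gain, 255.0);

    ySum += 0.299 * red + 0.587 * green + 0.114 * blue;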
Fixes: f8f07f9468c6 ("ipa: ipu3: agc: Improve gain calculation")
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Tested-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The inter-quantile mean is a value that is computed as part of the AGC
run. It doesn't need to be stored in a member variable. Return it from
measureBrightness(), which makes the flow of data easier to follow.
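As a usage sketch, with a simplified signature and an illustrative
quantile range, the measurement reduces to a single call on the libipa
Histogram:

    /* Sketch: brightness taken as the mean of the brightest quantile range. */
    double Agc::measureBrightness(const Histogram &hist) const
    {
        return hist.interQuantileMean(0.98, 1.0);
    }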
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The "current" prefix in the currentYGain variable name is confusing:
- In Agc::estimateLuminance(), the variable contains the gain to be
applied to the image, which is neither a "current" gain nor a "Y"
gain. Rename it to "gain".
- In Agc::computeExposure(), the variable contains the gain computed by
the relative luminance method, so rename it to "yGain".
While at it, rename variables to match the libcamera coding style.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The AGC computes the average relative luminance of the frame and calls
the value "normalized luma", "brightness" or "initialY". The latter is
the most accurate term, as the relative luminance is abbreviated Y, but
the "initial" prefix isn't accurate.
Standardize the vocabulary on "relative luminance" in code and comments,
abbreviating it to Y when needed.
While at it, rename variables to match the libcamera coding style.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The kMaxLuminance constant is badly named: it's not a maximum luminance,
but the maximum integer value output by the AWB statistics engine for
per-channel averages. The constant is used in a single place, and
hardcoding the value there is actually more readable.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Until commit f8f07f9468c6 ("ipa: ipu3: agc: Improve gain calculation"),
the gain to apply to the exposure value was computed using only the
histogram.
Now that the global brightness of the frame is estimated too, we don't
need to remove part of the saturated pixels from the equation anymore.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The minimum and maximum exposure are stored in lines. Replace them with
values in time to simplify the calculations.
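The conversion only needs the sensor's line duration; a minimal sketch
with hypothetical variable names:

    /* Exposure limits expressed as durations rather than line counts. */
    utils::Duration minExposureTime = minExposureLines * lineDuration;
    utils::Duration maxExposureTime = maxExposureLines * lineDuration;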
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
Previously, the exposure value was calculated based on the estimated
shutter time and gain applied. Now that we have the real values for the
current frame, use those before estimating the next one and rename the
variable accordingly.
As the exposure value is updated at the beginning of the computation,
there is no need to initialize effectiveExposureValue in the configure
call anymore, and it can be a local variable instead of a class member.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
When an image is partially saturated, its brightness does not increase
linearly when the shutter time or gain increases. This is a big issue
with backlighting, as the algorithm currently fades to darkness.
Introduce a function to estimate the brightness of the frame based on
the current exposure/gain, and loop on it several times to refine the
estimate and approach the non-linear response.
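Schematically the loop looks like this. It is a sketch, not the exact
implementation: start from a gain of 1, estimate the resulting relative
luminance, and nudge the gain until the estimate approaches the target
(yTarget, the helper name and the iteration count are illustrative).

    double gain = 1.0;
    for (unsigned int i = 0; i < 8; i++) {
        /* Hypothetical helper estimating Y for the given extra gain. */
        double yEstimate = estimateLuminance(stats, gain);
        if (yEstimate == 0.0)
            break;

        gain *= std::min(10.0, yTarget / yEstimate);

        /* Stop once the estimate is within 1% of the target. */
        if (std::abs(yEstimate - yTarget) < 0.01 * yTarget)
            break;
    }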
Inspired-by: 7de5506c30b3 ("libcamera: src: ipa: raspberrypi: agc: Improve gain update calculation for partly saturated images")
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
When we compute the new gain, we use the iqMean_ and estimate an
exposure value gain to apply. Return early when the gain is less than
1%.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
Now that we have the real exposure applied at each frame, remove the
early return based on a frame counter and compute the gain for each
frame.
Introduce a number of startup frames during which the filter speed is
1.0, meaning we instantly apply the calculated exposure value rather
than a more slowly filtered one. This is used to achieve faster
convergence, and those frames may be dropped in a future development to
hide the convergence process from the viewer.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
When the histogram is calculated, we check if a cell is saturated or not
before accumulating its green value. This is wrong, and it can lead to an
empty histogram in case of a fully saturated frame.
Use a constant to limit the number of saturated pixels tolerated within a
cell before considering it saturated. If at the end of the loop we still
have an empty histogram, then make it a fully saturated one.
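In sketch form, with illustrative field, constant and helper names, the
per-cell check and the fallback become:

    /* Skip cells whose saturation ratio exceeds the threshold. */
    static constexpr uint8_t kCellSaturationThreshold = 230;

    uint32_t hist[kNumHistogramBins] = { 0 };
    for (const auto &cell : cells) {
        if (cell.satRatio <= kCellSaturationThreshold)
            hist[(cell.grAvg + cell.gbAvg) / 2]++;
    }

    /* A fully saturated frame would otherwise produce an empty histogram. */
    if (std::all_of(std::begin(hist), std::end(hist),
                    [](uint32_t v) { return v == 0; }))
        hist[kNumHistogramBins - 1] = 1;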
Bug: https://bugs.libcamera.org/show_bug.cgi?id=84
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
The pipeline handler populates the new sensorControls ControlList, to
have the effective exposure and gain values for the current frame. This
is done when a statistics buffer is received.
Make those values the frameContext::sensor values for the frame when the
EventStatReady event is received.
AGC also needs to use frameContext.sensor as its input values and
frameContext.agc as its output values. Modify computeExposure by passing
it the frameContext instead of individual exposure and gain values.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
In case the maximum exposure received from the sensor is very high, we
can have a very high shutter speed with a small analogue gain, and it
may result in a very slow framerate. We are not really supporting it for
the moment, so clamp the shutter speed to an arbitrary value of 60ms.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Tested-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The AGC class was not documented during development. Extend the
documentation to reference the origins of the implementation, and
improve the descriptions of how the algorithm operates internally.
While at it, rename the functions that were poorly named.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Instead of using constants for the analogue gains limits, use the
minimum and maximum from the configured sensor.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
We currently control the exposure value through the shutter speed and
the analogue gain. We can't use the digital gain to exceed the maximum
calculated exposure value because we are not controlling it.
Remove the unused code associated with this digital gain.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Simplify readability by removing one level of indentation and returning
early when the change between two calls is small.
Reword the LOG() message for when we are correctly exposed, and move the
lastFrame_ update so that it happens even if the change is small.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
We need to calculate the gain based on the previously calculated
exposure value. Now that we initialise the exposure and gain values in
configure(), we
know the initial exposure value, and we can set it before any loop is
running.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
We have mixed terms between gain, analogue gain and the exposure value
gain.
Make it clear when we are using the analogue gain from the sensor, and
when we are using the calculated gain to be applied to the exposure
value to reach the target.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Until now, the algorithm has made complex assumptions when dividing the
exposure and analogue gain values. Instead, use a simpler clamping of
the shutter speed first, and then of the analogue gain, based on the
limits configured.
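Conceptually the split becomes the following, where the limits and the
exposureValue duration are assumed to come from the configured sensor
(a simplified sketch, not the exact implementation):

    /* Spend the exposure on shutter time first, then on analogue gain. */
    utils::Duration shutterTime =
        std::clamp<utils::Duration>(exposureValue / minAnalogueGain_,
                                    minShutterSpeed_, maxShutterSpeed_);
    double analogueGain = std::clamp(exposureValue / shutterTime,
                                     minAnalogueGain_, maxAnalogueGain_);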
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
We are filtering the exposure value to limit the gain to apply, but we
are not using the result.
Fix it.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The gains are currently set as a uint32_t while the analogue gain is
passed as a double. We also have a default maximum analogue gain of 15,
which is quite high for a number of sensors.
Use a maximum value of 8, which should really be configured by the IPA
and not fixed as it is now. While at it, make it a double.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
We are using arbitrary constants for the exposure limit in a number of
lines.
Instead of using static constants for those, use the limits of the
sensor passed in IPASessionConfiguration and cache those.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The exposure value is filtered in filterExposure() using the
currentExposure_ and setting a prevExposure_ variable. This is misnamed
as it is not the previous exposure, but a filtered value.
Rename it accordingly.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The AGC frame context needs to be initialised correctly for the first
iteration. Until now, the IPA uses the minimum exposure and gain values
and caches those in local variables.
In order to give the sensor limits to AGC, create a new structure in
IPASessionConfiguration. Store the exposure in time (and not line
duration) and the analogue gain after CameraSensorHelper conversion.
Set the gain and exposure appropriately to the current values known to
the IPA and remove the setting of exposure and gain in IPAIPU3 as those
are now fully controlled by IPU3Agc.
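The added session data amounts to something along these lines (a sketch;
the actual structure and field names may differ):

    /* Hypothetical AGC limits stored per session. */
    struct IPASessionConfiguration {
        struct {
            utils::Duration minShutterSpeed;
            utils::Duration maxShutterSpeed;
            double minAnalogueGain;
            double maxAnalogueGain;
        } agc;
    };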
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
"using" directives are harmful in headers, as they propagate the
namespace short-circuit to all files that include the header, directly
or indirectly. Drop the directive from agc.h, and use utils::Duration
explicitly. While at it, shorten the namespace qualifier from
libcamera::utils:: to utils:: in agc.cpp for Duration.
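A tiny illustrative example of why the directive is dropped from the
header:

    /* agc.h - avoid: this leaks into every file that includes the header. */
    using namespace libcamera::utils;

    /* Prefer explicit qualification instead: */
    libcamera::utils::Duration filterExposure(libcamera::utils::Duration exposureValue);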
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
The intel-ipu3.h public interface from the kernel does not define how to
parse the statistics for a cell. This had to be identified through a
process of reverse engineering and by later identifying the structures
from [0], leading to our custom definition of struct Ipu3AwbCell.
[0] https://chromium.googlesource.com/chromiumos/platform/arc-camera/+/refs/heads/master/hal/intel/include/ia_imaging/awb_public.h
To improve the kernel interface, a proposal has been made to the
linux-kernel [1] to incorporate the memory layout for each cell into the
intel-ipu3 header directly.
[1] https://lore.kernel.org/linux-media/20211005202019.253353-1-jeanmichel.hautbois@ideasonboard.com/
Update our local copy of the intel-ipu3.h to match the proposal and
change the AGC and AWB algorithms to reference that structure directly,
allowing us to remove the deprecated custom Ipu3AwbCell definition.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|