|
Convert the y_table_item_t to a Span and use that for iteration when
estimating the variance of the table.
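As a rough sketch of the kind of iteration this enables (a minimal variance
estimate over a Span of pre-extracted y averages; the afEstimateVariance()
shape shown here is an assumption, not the actual implementation):

    #include <libcamera/base/span.h>

    #include <cstdint>

    /* Sketch only: the real code iterates over y_table_item_t entries. */
    static double afEstimateVariance(libcamera::Span<const uint16_t> yItems)
    {
        if (yItems.empty())
            return 0.0;

        double mean = 0.0;
        for (uint16_t y : yItems)
            mean += y;
        mean /= yItems.size();

        double variance = 0.0;
        for (uint16_t y : yItems) {
            double d = y - mean;
            variance += d * d;
        }

        return variance / yItems.size();
    }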
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Tested-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Tested-by: Kate Hsuan <hpa@redhat.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The af statistics can be accessed directly from the mapped buffer.
Remove the redundant memcpy, and simplify the call to
afEstimateVariance().
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Tested-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Tested-by: Kate Hsuan <hpa@redhat.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Use our geometry classes for Rectangle, Size and Point to identify
the region of interest for the autofocus, and center it on the BDS output.
This will facilitate custom ROI being passed in through controls at a later
time.
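As a sketch of the kind of centring this enables (kAfRoiSize and the
(x, y, width, height) constructor usage are assumptions for illustration,
not the actual patch):

    #include <libcamera/geometry.h>

    using namespace libcamera;

    /* Sketch only: centre a fixed-size ROI on the BDS output. */
    static Rectangle centredRoi(const Size &bdsOutput)
    {
        const Size kAfRoiSize{ 200, 200 };

        int x = (static_cast<int>(bdsOutput.width) -
                 static_cast<int>(kAfRoiSize.width)) / 2;
        int y = (static_cast<int>(bdsOutput.height) -
                 static_cast<int>(kAfRoiSize.height)) / 2;

        return Rectangle(x, y, kAfRoiSize.width, kAfRoiSize.height);
    }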
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Tested-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Tested-by: Kate Hsuan <hpa@redhat.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
A selection of constants is imported from ChromiumOS.
Move these out of the header, and simplify their documentation.
Furthermore, add a direct reference to the location they were obtained
from.
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Tested-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Tested-by: Kate Hsuan <hpa@redhat.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Enforce that the selected block_{width,height}_log2 parameters match the
capabilities of the hardware.
While this selection is currently hardcoded to the minimum, enforcing the
restriction now allows for dynamic sizing in the future and documents the
hardware limits directly in code, making use of the already existing
constants.
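A minimal sketch of such an enforcement (the constant names are
illustrative; the real limits come from the existing IPU3 definitions):

    #include <algorithm>
    #include <cstdint>

    /* Illustrative limits, standing in for the existing IPU3 constants. */
    static constexpr uint32_t kMinCellSizeLog2 = 3;
    static constexpr uint32_t kMaxCellSizeLog2 = 6;

    static uint32_t selectBlockSizeLog2(uint32_t requested)
    {
        /* Restrict the requested log2 block size to the hardware range. */
        return std::clamp(requested, kMinCellSizeLog2, kMaxCellSizeLog2);
    }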
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
The configure() function has a local configuration variable referencing
context.configuration for the purpose of shortening lines. Use it
instead of context.configuration in the remaining locations, and
constify it while at it as the configuration isn't meant to be modified.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The frame count is used to skip the gain and exposure filtering when
starting. It thus needs to be reset when configuring the algorithm, to
avoid slower convergence when stopping and restarting.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Since the VCM for the Surface Go 2 (dw9719) can now be successfully
driven, this AF module can be used to control the VCM and determine the
focus value based on the IPU3 AF state.
Based on the values from the IPU3 AF buffer, the variance of each focus
step is determined and a greedy approach is used to find the maximum
variance of the AF state and an appropriate focus value.
The grid configuration is implemented as a context. Also, the grid
parameter AF_MIN_BLOCK_WIDTH is set to 4 (the default is 3), since with
the default value x_start (x_start > 640) would be at an incorrect
location of the image (the rightmost part of the sensor).
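A rough sketch of the greedy search described above (class and member
names are illustrative, not the actual Af interface):

    #include <cstdint>

    /*
     * Sketch: keep stepping the VCM while the contrast variance increases,
     * and fall back to the best position once it starts to decrease.
     */
    class AfSweep
    {
    public:
        /* Called once per frame with the variance from the AF statistics. */
        uint32_t step(double variance)
        {
            if (variance > bestVariance_) {
                bestVariance_ = variance;
                bestPosition_ = position_;
                position_ += kStep;          /* keep climbing */
            } else {
                position_ = bestPosition_;   /* past the peak, go back */
            }

            return position_;                /* next VCM position to apply */
        }

    private:
        static constexpr uint32_t kStep = 16;

        uint32_t position_ = 0;
        uint32_t bestPosition_ = 0;
        double bestVariance_ = 0.0;
    };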
Signed-off-by: Kate Hsuan <hpa@redhat.com>
Tested-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The gain values are coded as u3.13 fixed point values, i.e. they cannot
be more than 8. Clamp the values in order to avoid any out-of-range value
which could make the IPU3 behave in a weird manner.
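For reference, a sketch of how a double gain could be clamped and encoded
in u3.13 (8192 steps per unit, values strictly below 8); the helper name
is illustrative:

    #include <algorithm>
    #include <cstdint>

    /* u3.13: 3 integer bits, 13 fractional bits, i.e. [0, 8) in 1/8192 steps. */
    static uint16_t gainToU3_13(double gain)
    {
        double clamped = std::clamp(gain, 0.0, 8.0 - 1.0 / 8192);
        return static_cast<uint16_t>(clamped * 8192);
    }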
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
Instead of having a local cached value for line duration, store it in
the IPASessionConfiguration::sensor structure.
While at it, configure the default analogue gain and shutter speed to
controlled fixed values.
The latter is set to 10ms as it will in most cases be close to the value
needed, making the AGC faster to converge.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
When the current exposure value is calculated, it is cached and used by
filterExposure(). Use private filteredExposure_ and pass currentExposure
as a parameter.
In order to limit the use of filteredExposure_, return the value from
filterExposure().
While at it, remove a stale comment.
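A sketch of the resulting shape, with the filtered value returned to the
caller (the filter speed and Duration alias are illustrative):

    #include <chrono>

    using Duration = std::chrono::duration<double, std::nano>;

    /* Sketch only: smooth the exposure and return the filtered value. */
    Duration filterExposure(Duration currentExposure, Duration &filteredExposure)
    {
        constexpr double kSpeed = 0.2;    /* illustrative filter speed */

        if (filteredExposure == Duration::zero())
            filteredExposure = currentExposure;
        else
            filteredExposure = kSpeed * currentExposure +
                               (1.0 - kSpeed) * filteredExposure;

        return filteredExposure;
    }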
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
Use the new utils::abs_diff() function where appropriate to replace
manual implementations.
While at it fix a header ordering issue in
src/libcamera/pipeline/raspberrypi/raspberrypi.cpp.
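For context, the helper essentially encapsulates the classic unsigned-safe
absolute difference pattern; roughly (simplified, not the exact
implementation):

    /* abs_diff(3u, 5u) == 2u, whereas 3u - 5u would wrap around. */
    template<typename T>
    constexpr T abs_diff(const T &a, const T &b)
    {
        return a < b ? b - a : a - b;
    }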
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
The algorithms use the same function names with specialized parameters.
Instead of duplicating code, introduce an Algorithm base class with
template parameters in libipa, and use it in each IPA.
As we no longer need an algorithm class for each IPA, move the
documentation to libipa, and make it agnostic of the IPA used. While at
it, fix the IPU3::Algorithm::Awb documentation.
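A condensed sketch of the idea (the template parameters and method
signatures are illustrative; the actual libipa interface may differ):

    namespace libcamera::ipa {

    /*
     * Each IPA instantiates the base class with its own context types, so a
     * per-IPA Algorithm class is no longer needed.
     */
    template<typename Context, typename Config, typename Params, typename Stats>
    class Algorithm
    {
    public:
        virtual ~Algorithm() = default;

        virtual int configure(Context &context, const Config &configInfo)
        {
            return 0;
        }

        virtual void prepare(Context &context, Params *params)
        {
        }

        virtual void process(Context &context, const Stats *stats)
        {
        }
    };

    } /* namespace libcamera::ipa */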
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
Remove the verbose #ifndef/#define/#endif pattern for maintaining
header idempotency, and replace it with a simple #pragma once.
This simplifies the headers, and prevents redundant changes when
header files get moved.
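For illustration, the pattern being replaced (guard name is illustrative):

    /* Before: verbose include guards, tied to the header's name. */
    #ifndef __LIBCAMERA_EXAMPLE_H__
    #define __LIBCAMERA_EXAMPLE_H__
    /* ... declarations ... */
    #endif /* __LIBCAMERA_EXAMPLE_H__ */

    /* After: a single line, unaffected when the file is renamed or moved. */
    #pragma once
    /* ... declarations ... */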
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
|
|
The relative luminance is calculated using an iterative process to
account for saturation in the sensor, as multiplying pixels by a gain
doesn't increase the relative luminance by the same factor if some
regions are saturated. The relative luminance estimation doesn't apply
saturation, which produces a value that doesn't match what the sensor
will output, and defeats the point of the iterative process. Fix it.
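A sketch of the core of the fix, clamping each channel to the saturation
level before summing (coefficients and names are illustrative):

    #include <algorithm>

    /*
     * Sketch: estimate the relative luminance that applying `gain` would
     * produce, saturating each channel as the sensor would.
     */
    static double estimateLuminance(const double rgbMeans[3], double gain,
                                    double saturation = 255.0)
    {
        double r = std::min(rgbMeans[0] * gain, saturation);
        double g = std::min(rgbMeans[1] * gain, saturation);
        double b = std::min(rgbMeans[2] * gain, saturation);

        /* Rec. 601 luminance coefficients. */
        return (0.299 * r + 0.587 * g + 0.114 * b) / saturation;
    }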
Fixes: f8f07f9468c6 ("ipa: ipu3: agc: Improve gain calculation")
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Tested-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The inter-quantile mean is a value that is computed as part of the AGC
run. It doesn't need to be stored in a member variable. Return it from
measureBrightness(), which makes the flow of data easier to follow.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The "current" prefix in the currentYGain variable name is confusing:
- In Agc::estimateLuminance(), the variable contains the gain to be
applied to the image, which is neither a "current" gain nor a "Y"
gain. Rename it to "gain".
- In Agc::computeExposure(), the variable contains the gain computed by
the relative luminance method, so rename it to "yGain".
While at it, rename variables to match the libcamera coding style.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The AGC computes the average relative luminance of the frame and calls
the value "normalized luma", "brightness" or "initialY". The latter is
the most accurate term, as the relative luminance is abbreviated Y, but
the "initial" prefix isn't accurate.
Standardize the vocabulary on "relative luminance" in code and comments,
abbreviating it to Y when needed.
While at it, rename variables to match the libcamera coding style.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The kMaxLuminance constant is badly named: it's not a maximum luminance,
but the maximum integer value output by the AWB statistics engine for
per-channel averages. The constant is used in a single place, where
hardcoding the value is actually more readable.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Until commit f8f07f9468c6 (ipa: ipu3: agc: Improve gain calculation)
the gain to apply on the exposure value was computed using the histogram
only. Now that the global brightness of the frame is estimated too, we
don't need to remove part of the saturated pixels from the equation
anymore.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The AWB estimates the color temperature, but it is not used at all. It
can be useful for debugging purposes at least, but also for lux
estimation later, to know the temperature estimated for a given frame.
Add a new member to the IPAFrameContext::awb for this purpose, and
update the value in AWB.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
The minimum and maximum exposure are stored in lines. Replace them with
values expressed in time to simplify the calculations.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
Previously, the exposure value was calculated based on the estimated
shutter time and gain applied. Now that we have the real values for the
current frame, use those before estimating the next one and rename the
variable accordingly.
As the exposure value is updated at the beginning of the computation,
there is no need to initialize effectiveExposureValue in the configure
call anymore, and it can be a local variable instead of a class member.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
When an image is partially saturated, its brightness does not increase
linearly when the shutter time or gain increases. This is a big issue
with backlighting, as the algorithm currently fades to darkness.
Introduce a function to estimate the brightness of the frame based on
the current exposure/gain, and loop over it several times to re-estimate
it and approach the non-linear function.
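A sketch of the iterative loop described above, assuming an
estimateLuminance() helper that models saturation (constants and the
callback shape are illustrative):

    #include <cmath>

    /* Sketch: refine the gain until the saturating estimate meets the target. */
    static double computeGain(double (*estimateLuminance)(double gain),
                              double targetY, unsigned int iterations = 8)
    {
        double gain = 1.0;

        for (unsigned int i = 0; i < iterations; i++) {
            double y = estimateLuminance(gain);
            if (y <= 0.0 || std::abs(y - targetY) < 0.01)
                break;

            /* Nudge the gain towards the target and re-estimate. */
            gain *= targetY / y;
        }

        return gain;
    }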
Inspired-by: 7de5506c30b3 ("libcamera: src: ipa: raspberrypi: agc: Improve gain update calculation for partly saturated images")
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
When we compute the new gain, we use the iqMean_ and estimate an
exposure value gain to apply. Return early when the gain is less than
1%.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
Now that we have the real exposure applied at each frame, remove the
early return based on a frame counter and compute the gain for each
frame.
Introduce a number of startup frames during which the filter speed is
1.0, meaning we instantly apply the calculated exposure value rather than
a slower filtered one. This is used for faster convergence, and those
frames may be dropped in a future development to hide the convergence
process from the viewer.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
When the histogram is calculated, we check whether a cell is saturated
before accumulating its green value. This is wrong, and it can lead to an
empty histogram for a fully saturated frame.
Use a constant to limit the number of saturated pixels allowed within a
cell before considering the cell saturated. If at the end of the loop we
still have an empty histogram, then make it a fully saturated one.
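A sketch of the intent (the cell layout and threshold are illustrative,
not the actual ImgU statistics format):

    #include <cstdint>
    #include <vector>

    /* Illustrative stand-in for an ImgU AWB statistics cell. */
    struct Cell {
        uint8_t greenAvg;
        uint8_t saturatedPixels;
    };

    static constexpr uint8_t kMaxSaturatedPixels = 8;    /* illustrative */

    static std::vector<uint32_t> buildHistogram(const std::vector<Cell> &cells)
    {
        std::vector<uint32_t> hist(256, 0);
        uint32_t counted = 0;

        for (const Cell &cell : cells) {
            /* Only skip cells saturated beyond the threshold. */
            if (cell.saturatedPixels > kMaxSaturatedPixels)
                continue;

            hist[cell.greenAvg]++;
            counted++;
        }

        /* Fully saturated frame: report a fully saturated histogram. */
        if (counted == 0)
            hist[255] = static_cast<uint32_t>(cells.size());

        return hist;
    }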
Bug: https://bugs.libcamera.org/show_bug.cgi?id=84
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
The pipeline handler populates the new sensorControls ControlList, to
have the effective exposure and gain values for the current frame. This
is done when a statistics buffer is received.
Make those values the frameContext::sensor values for the frame when the
EventStatReady event is received.
AGC also needs to use frameContext.sensor as its input values and
frameContext.agc as its output values. Modify computeExposure by passing
it the frameContext instead of individual exposure and gain values.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
In case the maximum exposure received from the sensor is very high, we
can have a very long shutter time with a small analogue gain, and it
may result in a very slow framerate. We don't really support that for
the moment, so clamp the shutter speed to an arbitrary value of 60ms.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Tested-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The struct RGB and struct AwbStatus are used only by the internal
implementation of the AWB algorithm module.
Move them into the private class declaration.
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
|
|
The AWB AwbStatus structure is contained within the Awb class.
Fix the Doxygen reference so that it can be found.
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
|
|
The tone mapping algorithm is currently undocumented.
Provide an introduction and overview of the implementation in the class
documentation, and document how the algorithm operates in the process
and prepare methods.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Let the algorithm perform its initial configuration. Implement
configure() to set a default gamma value and let process do the updates
needed.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The tone mapping algorithm calculates the gamma curve for every frame,
regardless of whether the gamma value has changed or not. This issue is
exacerbated as we currently hardcode the gamma to a single value.
Optimise the implementation to only recalculate the look-up table when
the gamma setting changes, and store the gamma setting of the LUT curve
as part of the IPA context.
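A sketch of the guard added (context layout and LUT depth are
illustrative):

    #include <cmath>
    #include <cstdint>
    #include <vector>

    struct ToneMappingContext {
        double gamma = 0.0;
        std::vector<uint16_t> lut = std::vector<uint16_t>(256);
    };

    /* Sketch: rebuild the gamma LUT only when the requested gamma changes. */
    static void updateGammaLut(ToneMappingContext &ctx, double gamma)
    {
        if (ctx.gamma == gamma)
            return;    /* the cached LUT is still valid */

        for (unsigned int i = 0; i < ctx.lut.size(); i++) {
            double in = static_cast<double>(i) / (ctx.lut.size() - 1);
            ctx.lut[i] = static_cast<uint16_t>(std::pow(in, 1.0 / gamma) * 8191);
        }

        ctx.gamma = gamma;
    }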
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The AGC class was not documented during development. Add documentation
that references the origins of the implementation, and improve the
descriptions of how the algorithm operates internally.
While at it, rename the functions which had poorly chosen names.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Now that we moved the diagram into the AWB class documentation, reword the
accumulator documentation to make it clear it is not meant to be used
only in AWB.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The AWB algorithm is based on the Grey world algorithm and uses the
statistics generated by the ImgU for that. Explain how it uses those,
and reference the original algorithm at the same time.
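For reference, a condensed sketch of the grey world gain computation
(zone accumulation and saturation handling omitted; names are
illustrative):

    /* The scene is assumed to average to grey, so gains derive from green. */
    struct AwbGains {
        double red;
        double green;
        double blue;
    };

    static AwbGains greyWorldGains(double redMean, double greenMean,
                                   double blueMean)
    {
        /* Guard against empty statistics. */
        if (redMean <= 0.0 || blueMean <= 0.0)
            return { 1.0, 1.0, 1.0 };

        return {
            greenMean / redMean,    /* red gain */
            1.0,                    /* green is the reference */
            greenMean / blueMean,   /* blue gain */
        };
    }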
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Instead of using constants for the analogue gains limits, use the
minimum and maximum from the configured sensor.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
We currently control the exposure value through the shutter speed and
the analogue gain. We can't use the digital gain to exceed the maximum
calculated exposure value because we are not controlling it.
Remove the unused code associated with this digital gain.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Simplify the code by removing one level of indentation and returning
early when the change between two calls is small.
Reword the LOG() message for when we are correctly exposed, and move the
lastFrame_ update so it happens even when the change is small.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
We need to calculate the gain based on the previously calculated
exposure value. Now that we initialise the exposure and gain values in
configure(), we know the initial exposure value, and we can set it
before any loop is running.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
We have mixed terms between gain, analogue gain and the exposure value
gain.
Make it clear when we are using the analogue gain from the sensor, and
when we are using the calculated gain to be applied to the exposure
value to reach the target.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Until now, the algorithm made complex assumptions when dividing the
exposure value between shutter speed and analogue gain. Instead, use a
simpler clamping of the shutter speed first, and then of the analogue
gain, based on the configured limits.
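A sketch of the simpler split (types reduced to plain doubles; the limits
are illustrative parameters):

    #include <algorithm>
    #include <utility>

    /* Sketch: favour a longer shutter time first, then derive the gain. */
    static std::pair<double, double>
    splitExposure(double exposureValue, double minShutter, double maxShutter,
                  double minGain, double maxGain)
    {
        double shutter = std::clamp(exposureValue / minGain,
                                    minShutter, maxShutter);
        double gain = std::clamp(exposureValue / shutter, minGain, maxGain);

        return { shutter, gain };
    }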
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
We are filtering the exposure value to limit the gain to apply, but we
are not using the result.
Fix it.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The gains are currently set as a uint32_t while the analogue gain is
passed as a double. We also have a default maximum analogue gain of 15,
which is quite high for a number of sensors.
Use a maximum value of 8, which should really be configured by the IPA
and not fixed as it is now. While at it, make it a double.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
We are using arbitrary constants for the exposure limits, expressed in a
number of lines.
Instead of using static constants for those, use the limits of the
sensor passed in IPASessionConfiguration and cache those.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The exposure value is filtered in filterExposure() using the
currentExposure_ and setting a prevExposure_ variable. This is misnamed
as it is not the previous exposure, but a filtered value.
Rename it accordingly.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
When zones are used for the grey world algorithm, they are only
considered if their average green value is at least 32/255 to exclude
zones that are too dark and don't provide relevant colour information
(on the opposite side of the spectrum, saturated regions are excluded by
the ImgU statistics engine).
The algorithm requires a minimal number of zones that meet this criterion
in order to run. Now that we correct the black level, the 32/255 minimal
value is a bit high and prevents the algorithm from running in low-light
conditions. Lower the value to 16/255 to fix it.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Acked-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The AWB grey world algorithm tries to find a grey value and it can't do
it on over-exposed images. To exclude those, the saturation ratio is
used for each cell, and the cell is included only if this ratio is 0.
Now that we have changed the threshold, more cells may be considered as
partially saturated and excluded, preventing the algorithm from running
efficiently.
Change that behaviour, and consider 90% as a good enough ratio.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The AGC frame context needs to be initialised correctly for the first
iteration. Until now, the IPA uses the minimum exposure and gain values
and caches those in local variables.
In order to give the sensor limits to AGC, create a new structure in
IPASessionConfiguration. Store the exposure in time (and not line
duration) and the analogue gain after CameraSensorHelper conversion.
Set the gain and exposure appropriately to the current values known to
the IPA and remove the setting of exposure and gain in IPAIPU3 as those
are now fully controlled by IPU3Agc.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|