|
As reported by the CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES documentation
in the Android developer reference:
"For devices advertising any color filter arrangement other than NIR, or
devices not advertising color filter arrangement, this list will always
include (min, max) and (max, max) where min <= 15 and max = the maximum
output frame rate of the maximum YUV_420_888 output size."
Collect the maximum FPS of the largest YUV stream and use it, together
with the minimum frame rate the camera can produce, to populate the
ANDROID_CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES static metadata.
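As a rough illustration of the layout Android expects (not the actual HAL
code), the entry is a flat list of (min, max) pairs; the minFps/maxYuvFps
parameters below are hypothetical stand-ins for the collected values:

#include <cstdint>
#include <vector>

/* Build the AE target FPS ranges as flat (min, max) pairs. */
std::vector<int32_t> buildAeFpsRanges(int32_t minFps, int32_t maxYuvFps)
{
        /*
         * Android requires both (min, max) and (max, max), with min <= 15
         * and max taken from the largest YUV_420_888 output size.
         */
        return {
                minFps, maxYuvFps,    /* (min, max) */
                maxYuvFps, maxYuvFps, /* (max, max) */
        };
}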
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The ANDROID_SCALER_AVAILABLE_STREAM_CONFIGURATIONS and
ANDROID_SCALER_AVAILABLE_MIN_FRAME_DURATIONS static metadata are both
populated by iterating over the streamConfigurations_ vector.
Unify the two loops into a single one to avoid walking the vector twice.
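A rough sketch of the unified loop; the structure fields and local vector
names below are illustrative and do not match the actual HAL code:

#include <cstdint>
#include <vector>

struct Config {
        int32_t androidFormat;
        int32_t width;
        int32_t height;
        int64_t minFrameDurationNsec;
};

void populateScalerMetadata(const std::vector<Config> &streamConfigurations)
{
        std::vector<int32_t> availableStreamConfigurations;
        std::vector<int64_t> minFrameDurations;

        /* A single pass over the collected configurations fills both entries. */
        for (const Config &entry : streamConfigurations) {
                availableStreamConfigurations.insert(
                        availableStreamConfigurations.end(),
                        { entry.androidFormat, entry.width, entry.height,
                          0 /* output direction */ });
                minFrameDurations.insert(
                        minFrameDurations.end(),
                        { entry.androidFormat, entry.width, entry.height,
                          entry.minFrameDurationNsec });
        }

        /* ... both vectors are then added to the static metadata pack ... */
}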
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Add a debug statement to print out the list of collected output streams
and their characteristics.
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Register as preview streams only streams capable of producing at least
30 FPS.
This requirement comes from inspecting the existing HAL implementation
on the Intel IPU3 platform and the CTS RecordingTests results.
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
While building the list of supported stream configurations, also collect
the absolute maximum frame duration, to be used to populate the sensor
maximum frame duration.
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
We currently hardcode 2560x1920@30FPS as the only stalling frame duration.
This is of course not correct, and all the information required to
properly populate the ANDROID_SCALER_AVAILABLE_STALL_DURATIONS static
metadata is available from initializeStaticMetadata().
Use the collected stalling durations and sizes to properly populate the
static property.
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Use the per-configuration stream durations collected during
initializeStreamConfigurations() to populate the
ANDROID_SCALER_AVAILABLE_MIN_FRAME_DURATIONS static metadata.
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
As we now collect the per-stream frame durations at
initializeStreamConfigurations() time, the Camera is guaranteed to
support the controls::FrameDurationLimits control.
Remove the check for its presence when populating the
ANDROID_CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES static metadata.
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Now that building the list of supported stream configurations requires
applying a configuration to the Camera, re-initialize the camera
controls by applying a configuration generated for the Viewfinder stream
role before building the list of static metadata.
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Collect the per-stream frame durations while building the list
of supported stream formats and resolutions.
In order to get an updated list of controls it is necessary to apply
the configuration under test to the Camera, whereas so far it was only
validated.
The per-configuration durations will be used to populate the
ANDROID_SCALER_AVAILABLE_MIN_FRAME_DURATIONS static metadata.
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
When a new CameraConfiguration is applied to the Camera, the IPA is
configured as well, using the newly applied sensor configuration and its
updated V4L2 controls.
Also update the Camera controls at IPA::configure() time by re-computing
the controls::ExposureTime and controls::FrameDurationLimits limits, and
update the controls on the pipeline handler side after the IPA has been
configured.
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
In order to prepare for updating the Camera control limits when a new
camera configuration is applied, split the initControls() function into
two:
- updateControls() to actually compute the control values
- initControls() to initialize the sensor configuration and call
updateControls()
Update the functions' documentation accordingly.
No functional changes intended.
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
Following the previous patch, which moved all the ImgU-related constants
into the ImgUDevice class namespace and aligned their naming to the
'kNameOfConstant' scheme, apply the same changes to the other
components of the IPU3 pipeline handler.
Cosmetic change, no functional changes intended.
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The definitions of several constants that describe the ImgU
characteristics are spread between two files: ipu3.cpp and imgu.cpp.
As ipu3.cpp uses definitions from imgu.cpp, in order to remove the
usage of magic numbers it is required to move the definitions to a
common header file where they are accessible to the other .cpp modules.
Move all the definitions of the ImgU sizes and alignments to the
ImgUDevice class as static constexpr and update their users accordingly.
Cosmetic changes, no functional changes intended.
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
As reported by commit 7208e70211a6 ("libcamera: ipu3: Always use sensor
full frame size"), the current implementation of the IPU3 pipeline
handler always uses the sensor resolution as the ImgU input frame size
in order to work around an issue with the ImgU configuration procedure.
Now that the frame selection policy has been modified in the CIO2Device
class implementation to comply with the requirements of the ImgU
configuration script, we can remove the workaround and select the most
appropriate sensor size to feed the ImgU with.
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
libcamera doesn't support interlaced formats. Set the field to
V4L2_FIELD_NONE explicitly instead of relying on drivers to interpret
V4L2_FIELD_ANY the way we want.
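For illustration, a generic V4L2 snippet (not the exact libcamera helper)
setting the field explicitly when preparing a format:

#include <cstring>
#include <linux/videodev2.h>

void prepareFormat(struct v4l2_format *format, unsigned int width, unsigned int height)
{
        std::memset(format, 0, sizeof(*format));
        format->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        format->fmt.pix.width = width;
        format->fmt.pix.height = height;
        /* Be explicit: libcamera doesn't support interlaced formats. */
        format->fmt.pix.field = V4L2_FIELD_NONE;
}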
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
When setting up a link fails, the error message doesn't specify which
link is being acted on. This makes debugging more difficult than it
should be. Improve the message by printing the link information.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
"using namespace" in a header file propagates the namespace to
the files including the header file. So it should be avoided.
This removes "using namespace" in header files in test.
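A small, hypothetical example of why this matters:

/* helper.h (hypothetical) */
#include <string>

using namespace std;  /* every file that includes helper.h now sees namespace std */
string makeName();

/*
 * Any .cpp that does #include "helper.h" silently inherits the
 * "using namespace std" directive, which can cause surprising name
 * clashes. Qualify names in headers instead:
 */
std::string makeName();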
Signed-off-by: Hirokazu Honda <hiroh@chromium.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
"using namespace" in a header file propagates the namespace to
the files including the header file. So it should be avoided.
This removes "using namespace" in header files in lc-compliance.
Signed-off-by: Hirokazu Honda <hiroh@chromium.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
"using namespace" in a header file propagates the namespace to
the files including the header file. So it should be avoided.
This removes "using namespace" in header files in v4l2.
Signed-off-by: Hirokazu Honda <hiroh@chromium.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
"using namespace" in a header file propagates the namespace to
the files including the header file. So it should be avoided.
This removes "using namespace" in header files in qcam.
Signed-off-by: Hirokazu Honda <hiroh@chromium.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
"using namespace" in a header file propagates the namespace to
the files including the header file. So it should be avoided.
This removes "using namespace" in stream_options.h
Signed-off-by: Hirokazu Honda <hiroh@chromium.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
"using namespace" in a header file propagates the namespace to
the files including the header file. So it should be avoided.
This removes "using namespace" in agc.hpp.
Signed-off-by: Hirokazu Honda <hiroh@chromium.org>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The camera session keeps requeuing requests until the capture limit is
reached. This causes more requests than the limit to complete, as there's
a queue of requests in flight. When capturing from multiple cameras
concurrently, this results in the captureDone signal being emitted for
every request completion after the limit is reached, instead of once per
camera session when the limit is reached.
Fix this by simply dropping any request that completes after the limit
is reached. We could instead avoid requeuing more requests than needed
to reach the limit, but that may cause request starvation in pipelines,
which is currently not handled consistently (or correctly).
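A minimal sketch of the dropping logic, with hypothetical names for the
counters (the real cam application is structured differently):

#include <libcamera/camera.h>
#include <libcamera/request.h>

void onRequestComplete(libcamera::Request *request, unsigned int &captureCount,
                       unsigned int captureLimit, libcamera::Camera *camera)
{
        /* Drop any request that completes after the limit has been reached. */
        if (captureCount >= captureLimit)
                return;

        captureCount++;

        /* ... process the completed request, emit captureDone on the limit ... */

        /* Requeue the request to keep the pipeline fed until the limit. */
        request->reuse(libcamera::Request::ReuseBuffers);
        camera->queueRequest(request);
}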
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The FileSink class constructs stream names internally the same way that
the CameraSession does, except that it fails to add the camera name.
This results in files being written without the camera name.
This could be fixed in FileSink, but we would still duplicate code to
construct stream names. Pass the stream names map from CameraSession to
FileSink instead, and store it internally.
Fixes: 02001fecb0f5 ("cam: Turn BufferWriter into a FrameSink")
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The streamName_ map contains names for all streams, rename it to
streamNames_ to make this more explicit.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
libunwind has an API to provide symbolic names for functions. It's less
optimal than using backtrace_symbols() or libdw, as it doesn't allow
deferring the symbolic name lookup, but it can be useful as a fallback
if no other option is available.
A sample backtrace when falling back to libunwind looks like:
libcamera::VimcCameraData::init()+0xbd
libcamera::PipelineHandlerVimc::match(libcamera::DeviceEnumerator*)+0x3e0
libcamera::CameraManager::Private::createPipelineHandlers()+0x1a7
libcamera::CameraManager::Private::init()+0x98
libcamera::CameraManager::Private::run()+0x9f
libcamera::Thread::startThread()+0xee
decltype(*(std::__1::forward<libcamera::Thread*>(fp0)).*fp()) std::__1::__invoke<void (libcamera::Thread::*)(), libcamera::Thread*, void>(void (libcamera::Thread::*&&)(), libcamera::Thread*&&)+0x77
void std::__1::__thread_execute<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void (libcamera::Thread::*)(), libcamera::Thread*, 2ul>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void (libcamera::Thread::*)(), libcamera::Thread*>&, std::__1::__tuple_indices<2ul>)+0x3e
void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void (libcamera::Thread::*)(), libcamera::Thread*> >(void*)+0x62
start_thread+0xde
???
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
libunwind is an alternative to glibc's backtrace() to extract a
backtrace. Use it when available to extend backtrace support to more
platforms.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
libdw provides access to debugging information. This allows creating
better stack trace entries, with file names and line numbers, but also
with demangled symbols, as the symbol name is available and can be
passed to abi::__cxa_demangle().
With libdw, the backtrace previously generated by backtrace_symbols()
src/cam/../libcamera/libcamera.so(_ZN9libcamera14VimcCameraData4initEv+0xbd) [0x7f7dbb73222d]
src/cam/../libcamera/libcamera.so(_ZN9libcamera19PipelineHandlerVimc5matchEPNS_16DeviceEnumeratorE+0x3e0) [0x7f7dbb731c40]
src/cam/../libcamera/libcamera.so(_ZN9libcamera13CameraManager7Private22createPipelineHandlersEv+0x1a7) [0x7f7dbb5ea027]
src/cam/../libcamera/libcamera.so(_ZN9libcamera13CameraManager7Private4initEv+0x98) [0x7f7dbb5e9dc8]
src/cam/../libcamera/libcamera.so(_ZN9libcamera13CameraManager7Private3runEv+0x9f) [0x7f7dbb5e9c5f]
src/cam/../libcamera/base/libcamera-base.so(_ZN9libcamera6Thread11startThreadEv+0xee) [0x7f7dbb3e95be]
src/cam/../libcamera/base/libcamera-base.so(+0x4f9d7) [0x7f7dbb3ec9d7]
src/cam/../libcamera/base/libcamera-base.so(+0x4f90e) [0x7f7dbb3ec90e]
src/cam/../libcamera/base/libcamera-base.so(+0x4f2c2) [0x7f7dbb3ec2c2]
/lib64/libpthread.so.0(+0x7e8e) [0x7f7dbab65e8e]
/lib64/libc.so.6(clone+0x3f) [0x7f7dbb10b26f]
becomes
libcamera::VimcCameraData::init()+0xbd (src/libcamera/libcamera.so [0x00007f9de605b22d])
libcamera::PipelineHandlerVimc::match(libcamera::DeviceEnumerator*)+0x3e0 (src/libcamera/libcamera.so [0x00007f9de605ac40])
libcamera::CameraManager::Private::createPipelineHandlers()+0x1a7 (src/libcamera/libcamera.so [0x00007f9de5f13027])
libcamera::CameraManager::Private::init()+0x98 (src/libcamera/libcamera.so [0x00007f9de5f12dc8])
libcamera::CameraManager::Private::run()+0x9f (src/libcamera/libcamera.so [0x00007f9de5f12c5f])
libcamera::Thread::startThread()+0xee (src/libcamera/base/libcamera-base.so [0x00007f9de5d125be])
decltype(*(std::__1::forward<libcamera::Thread*>(fp0)).*fp()) std::__1::__invoke<void (libcamera::Thread::*)(), libcamera::Thread*, void>(void (libcamera::Thread::*&&)(), libcamera::Thread*&&)+0x77 (src/libcamera/base/libcamera-base.so [0x00007f9de5d159d7])
void std::__1::__thread_execute<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void (libcamera::Thread::*)(), libcamera::Thread*, 2ul>(std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void (libcamera::Thread::*)(), libcamera::Thread*>&, std::__1::__tuple_indices<2ul>)+0x3e (src/libcamera/base/libcamera-base.so [0x00007f9de5d1590e])
void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void (libcamera::Thread::*)(), libcamera::Thread*> >(void*)+0x62 (src/libcamera/base/libcamera-base.so [0x00007f9de5d152c2])
start_thread+0xde (/var/tmp/portage/sys-libs/glibc-2.33-r1/work/glibc-2.33/nptl/pthread_create.c:482)
__clone+0x3f (../sysdeps/unix/sysv/linux/x86_64/clone.S:97)
The stack entries related to libcamera are missing source file name and
line information, which will be investigated separately, but this is
still an improvement.
Use libdw when available, falling back to backtrace_symbols() otherwise,
or if libdw fails for any reason.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Create a new class to abstract generation and access to call stack
backtraces. The current implementation depends on the glibc backtrace()
implementation and is copied from the logger. Future development will
bring support for libunwind, transparently for the users of the class.
The logger backtrace implementation is dropped, replaced by usage of the
new Backtrace class.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The Size class has new helpers that can simplify the code in the IPU3
pipeline handler. Use them.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
Add four new member functions to the Size class (two in-place and two
const) to grow and shrink a Size by given margins.
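Assuming the helpers follow libcamera's usual in-place/const naming pattern
(growBy()/shrinkBy() and grownBy()/shrunkBy() taking a margin Size), usage
would look roughly like this:

#include <libcamera/geometry.h>

using libcamera::Size;

void sizeExample()
{
        Size crop(1920, 1080);

        /* Const variants return a new Size enlarged or reduced by the margins. */
        Size padded = crop.grownBy(Size(16, 8));   /* 1936x1088 */
        Size trimmed = crop.shrunkBy(Size(16, 8)); /* 1904x1072 */

        /* In-place variants modify the Size directly. */
        crop.growBy(Size(16, 8));
        crop.shrinkBy(Size(16, 8));

        (void)padded;
        (void)trimmed;
}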
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
When building the documentation with Doxygen, an error will be
reported if LaTeX cannot find the extra packages provided by
texlive-latex-extra.
While this package is optional, document this requirement for building
the documentation.
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The libcamera Android Camera HAL generates camera configurations for the
StillCapture, Raw and ViewFinder stream roles, but only checks whether
the configuration generation failed for the StillCapture stream role.
This could lead to a NULL pointer dereference if a pipeline handler fails
to generate a default configuration for one of the other two stream roles.
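A hedged sketch of the kind of check needed, assuming the StreamRoles-based
generateConfiguration() signature of the time (illustrative only, not the
actual HAL code):

#include <memory>

#include <libcamera/camera.h>
#include <libcamera/stream.h>

using namespace libcamera;

/* Check every generated configuration, not only the StillCapture one. */
std::unique_ptr<CameraConfiguration> generateSafely(Camera *camera,
                                                    const StreamRoles &roles)
{
        std::unique_ptr<CameraConfiguration> config =
                camera->generateConfiguration(roles);
        if (!config || config->empty()) {
                /* Bail out instead of dereferencing a null configuration. */
                return nullptr;
        }

        return config;
}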
Signed-off-by: Javier Martinez Canillas <javierm@redhat.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Problems with libcamerasrc are reported quite frequently. To help users
diagnose them, document how to obtain libcamerasrc debug messages
through GST_DEBUG.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The problem occurs because a buffer associated with a CameraStream
(depending on the CameraStream::Type) is added to the Request in
CameraDevice::processCaptureRequest().
However, when the camera stops, all the current buffers are marked with
FrameMetadata::FrameCancelled and proceed to completion. But the buffer
associated with the CameraStream (that was previously added to the
request) has by then been cleared out as part of streams_.clear(), even
before the camera stop() has been invoked. Any access to those request
buffers after they have been cleared will result in a crash.
Signed-off-by: Hirokazu Honda <hiroh@chromium.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Creating a CameraBuffer instance doesn't map memory. Fix the error
message accordingly.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
|
|
When the pipeline handler start() method is supplied with a NULL list
of controls, we send an empty control list to the IPA. When the IPA is
running in isolated mode, the control list goes through the data
serializer, for which it must be correctly marked as a list of
"controls::controls", otherwise the IPA process will abort.
The IPA has a similar problem returning a control list in its
configure() method. We must be careful to initialise it properly even
when empty.
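For example, an empty list intended for the IPA can be constructed with the
libcamera controls id map so the serializer can identify it (a minimal
sketch, not the Raspberry Pi pipeline handler code):

#include <libcamera/controls.h>
#include <libcamera/control_ids.h>

using namespace libcamera;

ControlList makeEmptyIpaControls()
{
        /*
         * Even when the list stays empty, constructing it with the
         * controls::controls id map lets the data serializer mark it
         * correctly when the IPA runs isolated.
         */
        ControlList ctrls(controls::controls);
        return ctrls;
}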
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
libevent-dev and libgtest-dev are required for lc-compliance.
Add them to the lc-compliance dependencies in README.rst.
Signed-off-by: Hirokazu Honda <hiroh@chromium.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The intel-ipu3.h public interface from the kernel does not define how to
parse the statistics for a cell. This had to be identified by a process
of reverse engineering, and later by identifying the structures from [0],
leading to our custom definition of struct Ipu3AwbCell.
[0]
https://chromium.googlesource.com/chromiumos/platform/arc-camera/+/refs/heads/master/hal/intel/include/ia_imaging/awb_public.h
To improve the kernel interface, a proposal has been made to the Linux
kernel [1] to incorporate the memory layout for each cell into the
intel-ipu3 header directly.
[1]
https://lore.kernel.org/linux-media/20211005202019.253353-1-jeanmichel.hautbois@ideasonboard.com/
Update our local copy of the intel-ipu3.h to match the proposal and
change the AGC and AWB algorithms to reference that structure directly,
allowing us to remove the deprecated custom Ipu3AwbCell definition.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Now that we know how the AWB statistics are formatted, use a simplified
loop in processBrightness() to parse the green values and get the
histogram.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The pixels output by the camera normally include a black level, because
sensors do not always report a signal level of '0' for black. Pixels at
or below this level should be considered black and, to achieve that, we
need to subtract an offset from all the pixels. The offset can be
determined by reading the lowest value of a special region on the sensor
which is not exposed to light. This provides a subtraction factor used
to adjust the expected black levels in the resulting images.
For a camera outputting 10-bit pixel values (in the range 0 to 1023) a
typical black level might be 64. It is a fixed value, obtained by
capturing a raw frame with minimum exposure and gain fixed to 1.0 while
covering the sensor (the darker the better). We consider it good enough
as a very first approximation, until we measure it during a tuning
process and include it in a configuration file.
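As a rough numeric illustration (not the actual IPA code), subtracting a
black level of 64 from 10-bit pixel values and clamping at zero:

#include <cstdint>

constexpr uint16_t kBlackLevel = 64; /* typical for a 10-bit sensor */

uint16_t subtractBlackLevel(uint16_t raw)
{
        /* Pixels at or below the black level are considered black. */
        return raw > kBlackLevel ? raw - kBlackLevel : 0;
}

/* Examples: 700 -> 636, 64 -> 0, 60 -> 0. */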
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The statistics buffer 'ipu3_uapi_awb_raw_buffer' stores the ImgU
calculation results in a buffer whose lines are aligned horizontally to
a multiple of 4 cells. The AWB loop must take this into account and add
the proper offset between lines to avoid any staircase effect.
It is no longer required to pass the grid configuration context to the
private functions called from process(), which simplifies the code flow.
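A simplified sketch of the indexing, with an illustrative constant for the
4-cell alignment (the names do not match the real code):

#include <cstdint>

/* Statistics lines are padded to a multiple of 4 cells. */
constexpr uint32_t kCellAlignment = 4;

uint32_t cellIndex(uint32_t x, uint32_t y, uint32_t gridWidth)
{
        /* Round the grid width up to the padded line stride. */
        uint32_t stride = (gridWidth + kCellAlignment - 1) / kCellAlignment * kCellAlignment;

        /*
         * Index cells with the padded stride rather than gridWidth,
         * otherwise every line starts at a slightly wrong offset and the
         * zones drift sideways (the staircase effect).
         */
        return y * stride + x;
}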
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The gains are expressed in u3.13 fixed-point format, with range [0, 8),
which means that a gain multiplier of 1.0 is represented by the value
8192 in the ImgU. Correct the gains, as this was misunderstood in the
first place.
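A small arithmetic sketch of the conversion (hypothetical helper, not the
actual IPA code):

#include <algorithm>
#include <cstdint>

/* u3.13: 3 integer bits, 13 fractional bits, so 1.0 maps to 8192. */
constexpr double kGainOne = 8192.0;

uint16_t gainToU313(double gain)
{
        /* Clamp to the representable range [0, 8). */
        double fixed = std::clamp(gain, 0.0, 8.0 - 1.0 / kGainOne) * kGainOne;
        return static_cast<uint16_t>(fixed);
}

/* Examples: 1.0 -> 8192, 0.5 -> 4096, 2.5 -> 20480. */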
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The algorithm uses the statistics of a cell only if there are not too
many saturated pixels in it. The grey world algorithm works fine when
there is a limited number of outliers.
Consider a zone valid only if at least 80% of its cells are unsaturated.
This value could very well be made configurable, to make the algorithm
more or less tolerant.
While at it, implement this in a configure() call, as it will not change
during execution, and cache the cellsPerZone values, estimated with
std::round() as we are using cmath.
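A minimal sketch of the validity check, with illustrative field names:

#include <cstdint>

/* Require at least 80% of the cells in a zone to be unsaturated. */
constexpr double kMinUnsaturatedRatio = 0.8;

struct ZoneStats {
        uint32_t counted; /* unsaturated cells accumulated in the zone */
        uint32_t total;   /* total number of cells covered by the zone */
};

bool zoneIsValid(const ZoneStats &zone)
{
        return zone.counted >= kMinUnsaturatedRatio * zone.total;
}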
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The loops over the width and height of the image when calculating the
BDS grid parameters are nested, but they're actually independent. Split
them to reduce the complexity.
While at it, split out the grid size constants into documented constant
expressions.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Until now, the limits used to calculate the grid based on the Bayer Down
Scaler configuration were taken from the kernel documentation [0].
While testing and working to understand the format of the ImgU
statistics, it appeared that the limits defined in CrOS [1] are the
correct ones. Use those.
[0] https://www.kernel.org/doc/html/latest/userspace-api/media/v4l/pixfmt-meta-intel-ipu3.html?highlight=v4l2_meta_fmt_ipu3_params#intel-ipu3-imgu-uapi-data-types
[1] https://chromium.googlesource.com/chromiumos/platform/arc-camera/+/refs/heads/master/hal/intel/include/ia_imaging/awb_public.h
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The variables mix the terms cell, region and zone. This can confuse the
reader and make the algorithm more difficult to follow. Rename the
local variables to be consistent with their definitions:
- Cells are defined in pixels
- Zones are defined in cells
There is no "region" as such, so replace it with the correct term.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The pixel component sums for the Accumulator are inconsistent with other
similar structures such as IPAFrameContext::awb::gains. Group the
red, green and blue sums together in a struct and store them as
uint64_t to avoid potential differences between architectures.
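A sketch of the grouped sums (field names are illustrative); uint64_t keeps
the accumulation behaviour identical on 32-bit and 64-bit builds:

#include <cstdint>

struct RGB {
        uint64_t red = 0;
        uint64_t green = 0;
        uint64_t blue = 0;
};

struct Accumulator {
        RGB sum;                  /* per-channel pixel sums for the zone */
        unsigned int counted = 0; /* number of unsaturated cells counted */
};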
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The IspStatsRegion structure was introduced as an attempt to prepare for
a generic AWB algorithm structure. The structure name by itself is not
explicit, and it is too optimistic to try and make a generic one for now.
Its role is to accumulate the pixels in a given zone. Rename it to
Accumulator, and remove the uncounted field at the same time. It is
always possible to know how many pixels are not relevant for the
algorithm by computing total - counted. The uncounted field was only
declared and never used. Amend the documentation accordingly.
Signed-off-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|