In preparation for the addition of NV12 support in the SDL sink, rename
the sdl_texture_yuyv.{cpp,h} files to just "yuv". Separate
sdl_texture_nv12.{cpp,h} files could be added instead, but given how
short the implementation will be, grouping all YUV formats in a single
file is better.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Eric Curtin <ecurtin@redhat.com>
Tested-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The line stride of the texture is currently hardcoded based on the image
width, which may not match the real stride. Use the stride value from
the stream configuration instead.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Eric Curtin <ecurtin@redhat.com>
Tested-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
One typo is corrected, and this format is added to one further table
where it was missing entirely.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The python3-yaml package (containing the PyYAML Python package) shipped
by Debian stable is documented as a YAML 1.1 parser:
    "Python3-yaml is a complete YAML 1.1 parser and emitter for Python3."
PyYAML doesn't implement YAML 1.2 support, but ignores the minor number
of the YAML directive, and thus doesn't choke on the libcamera internal
files used to generate format- and control-related source code that
explicitly state conformance with YAML 1.2. Still, given that we don't
use any feature of YAML 1.2, and that the tuning data files now use YAML
1.1, switch the internal YAML files to version 1.1 as well for
consistency.
The main drawback of YAML 1.1 is that the unquoted literal strings Yes,
No, On and Off will be parsed as booleans. We need to be careful to
avoid those values in YAML files, until libcamera can switch to YAML 1.2
once more recent versions of libyaml get shipped by the distributions we
want to support. This is however not an issue introduced by this change,
as the existing YAML 1.2 files were parsed with the YAML 1.1 string
literal parsing rules anyway.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Florian Sylvestre <fsylvestre@baylibre.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
YAML 1.2 support has been added in libyaml in version 0.2.3. Debian
stable ships version 0.2.2 of libyaml, which causes parse errors in the
tuning data files due to the YAML 1.2 directive at the beginning.
As we don't use any feature of YAML 1.2, downgrade the data files to
YAML 1.1.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Florian Sylvestre <fsylvestre@baylibre.com>
Reviewed-by: Florian Sylvestre <fsylvestre@baylibre.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Currently the pipeline handler advertises controls handled by the IPA
from a ControlInfoMap it manually constructs. This is wrong, as the IPA
module is the component that knows what controls it supports. Fix this
by moving the ControlInfoMap construction to the IPA module, and pass it
to the pipeline handler as a return value from the IPA init() function.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Florian Sylvestre <fsylvestre@baylibre.com>
|
|
Several NXP i.MX8 SoCs (such as the i.MX8MN and i.MX8MP) contain a
camera pipeline made of sensor interfaces (with parallel and/or CSI-2
receivers) and an image processing engine named ISI. The ISI contains an
input crossbar switch and one or more processing pipelines capable of
format conversion and scaling.
This is a good candidate for the simple pipeline handler as a first
step. The i.MX8MP should eventually graduate to having its own pipeline
handler as it also contains two ISP instances (supported by the rkisp1
driver).
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Tested-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
If a subdev supports the internal routing API, pads unrelated to the
pipeline for a given camera sensor may carry streams for other cameras.
The link setup logic is updated to take this into account, by avoiding
disabling links to unrelated pads.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
When traversing the media graph to discover a pipeline from the camera
sensor to a video node, all sink-to-source paths inside subdevs are
considered. This can lead to invalid paths being followed, when a subdev
has restrictions on its internal routing.
The V4L2 API supports exposing subdev internal routing to userspace.
Make use of this feature, when implemented by a subdev, to restrict the
internal paths to the currently active routes. If a subdev doesn't
implement the internal routing operations, all source pads are
considered, as done today.
This change is needed to properly support multiple sensors with devices
such as the NXP i.MX8 ISI or the MediaTek i350 and i500 SENINF. Support
for changing routes dynamically will be added later when required.
Signed-off-by: Phi-Bang Nguyen <pnguyen@baylibre.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Tested-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
To set up links the pipeline handler iterates over all entities in the
pipeline and disables all links but the ones we want to enable. Some
entities don't allow multiple sink links to be enabled, so the iteration
needs to be based on sink entities, disabling all their links first and
then enabling the sink link that is part of the pipeline.
The loop implementation iterates over all Entity instances, and uses
their source link to locate the MediaEntity for the connected sink. The
sink entity is then processed in the context of the source's loop
iteration. This prevents the code from being able to access extra
information about the sink entity, as we only have access to the
MediaEntity, not the Entity.
To prepare for subdev routing support that will require accessing
additional entity information, refactor the loop to process the sink
entity in the context of its Entity instance.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Tested-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
Reset the routing table of subdevices supporting the V4L2 streams API to
its default state when initializing the pipeline handler. This avoids
issues caused by usage of leftover state.
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>
Tested-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
Extend the V4L2Subdevice class to support getting and setting routing
tables.
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>
Tested-by: Jacopo Mondi <jacopo@jmondi.org>
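As a rough illustration, the extended V4L2Subdevice interface could look
as follows; the method names, the Routing alias and the Whence argument
are assumptions based on the description above, not the exact libcamera
API:

    #include <vector>

    #include <linux/v4l2-subdev.h>

    class V4L2Subdevice : public V4L2Device
    {
    public:
            /* One entry per (sink pad/stream, source pad/stream) connection. */
            using Routing = std::vector<struct v4l2_subdev_route>;

            /* Read the routing table currently applied in the kernel. */
            int getRouting(Routing *routing, Whence whence = ActiveFormat);
            /* Apply a new routing table; the kernel may adjust the entries. */
            int setRouting(Routing *routing, Whence whence = ActiveFormat);
    };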
|
|
Collect subdev capabilities at open() time.
Model V4L2SubdevCapability after V4L2Capability from the video
device class.
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
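As a rough sketch of that modelling, following the V4L2Capability pattern
of a thin wrapper around the kernel structure (the class name and the
helper shown are assumptions):

    #include <linux/v4l2-subdev.h>

    struct V4L2SubdeviceCapability final : v4l2_subdev_capability {
            /* V4L2_SUBDEV_CAP_RO_SUBDEV marks subdevs whose configuration
             * userspace may read but not modify. */
            bool isReadOnly() const
            {
                    return capabilities & V4L2_SUBDEV_CAP_RO_SUBDEV;
            }
    };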
|
|
The V4L2Subdevice::Whence enumeration defines two values that should
correspond to the V4L2_SUBDEV_FORMAT_ACTIVE and V4L2_SUBDEV_FORMAT_TRY
definitions.
The V4L2 symbols are defined as:

    V4L2_SUBDEV_FORMAT_TRY = 0,
    V4L2_SUBDEV_FORMAT_ACTIVE = 1,

while the libcamera enumeration is defined as:

    enum Whence {
        ActiveFormat,
        TryFormat,
    };

As V4L2Subdevice::Whence values are used to populate data types defined
in v4l2-subdev.h and passed as arguments to VIDIOC_SUBDEV_* ioctls, the
V4L2Subdevice class has to adjust their values:

    subdevFmt.which = whence == ActiveFormat ? V4L2_SUBDEV_FORMAT_ACTIVE
                                             : V4L2_SUBDEV_FORMAT_TRY;

Drop the adjustment by defining:

    Whence::TryFormat = V4L2_SUBDEV_FORMAT_TRY;
    Whence::ActiveFormat = V4L2_SUBDEV_FORMAT_ACTIVE;
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The V4L2 subdev internal routing API is under development. Add it
manually to the v4l2-subdev.h kernel header for now.
The code corresponds to the "[PATCH v11 00/36] v4l: routing and streams
support" series as posted to the linux-media mailing list.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>
Tested-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
Add missing 32-bit packed YUV 4:4:4 formats. These formats are used by
the i.MX8 ISI driver.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Tested-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
These YUV 4:4:4 packed formats are used by the ISI driver. Add them to
videodev2.h.
This is merged in the upstream kernel as 00f6842ef41d ("media: v4l: Add
packed YUV 4:4:4 YUVA and YUVX pixel formats").
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
Add FourCCs for the YUV 4:4:4 packed formats AVUY8888 and XVUY8888.
This is merged in the upstream kernel as 53618649ca6d ("drm/fourcc: Add
formats for packed YUV 4:4:4 AVUY and XVUY permutations").
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
Update the kernel headers to v5.19 using utils/update-kernel-headers.sh,
reinstating the libcamera local modifications.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Tested-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
The YamlParser::getList<>() function returns an std::optional<> to allow
callers to distinguish cases where parsing the .yaml file failed from
cases where the parsed list is just empty.
The current implementation returns a default-constructed std::optional
in case of errors with:

    return {};

The returned value is thus equal to std::nullopt, but a reader can
easily misinterpret the code as returning an empty vector.
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
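A small, self-contained illustration of the distinction the message
relies on; the helper function is hypothetical, not libcamera code:

    #include <optional>
    #include <vector>

    std::optional<std::vector<int>> parseList(bool parseFailed)
    {
            if (parseFailed)
                    return std::nullopt;    /* explicit: no value, parsing failed */

            return std::vector<int>{};      /* a real, but empty, list */
    }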
|
|
The V4L2_PIX_FMT_JPEG and V4L2_PIX_FMT_MJPEG formats are under-specified
and are used interchangeably by kernel drivers.
Map both of them to formats::MJPEG and use the newly re-introduced
V4L2VideoDevice::toV4L2PixelFormat() to map to the one actually used by
the video device.
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Paul Elder <paul.elder@ideasonboard.com>
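A sketch of the resulting mapping; the v4l2Formats field name and the
initializer layout are assumptions about the format info table, not a
verbatim excerpt:

    /* formats::MJPEG lists both V4L2 FourCCs as candidates; the video
     * device later picks whichever one the driver actually enumerates. */
    { formats::MJPEG, {
            .v4l2Formats = {
                    V4L2PixelFormat(V4L2_PIX_FMT_MJPEG),
                    V4L2PixelFormat(V4L2_PIX_FMT_JPEG),
            },
    } },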
|
|
Now that V4L2PixelFormat::fromPixelFormat() returns a list of formats
to choose from, select the one supported by the video device by matching
against the list of supported pixel formats.
The first format that matches one of the formats supported by the device
is returned.
As the list of pixel formats supported by the video device does not
change at run-time, cache it at device open() time. Maximize the lookup
efficiency by storing the supported V4L2PixelFormat instances in an
std::unordered_set<>.
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Paul Elder <paul.elder@ideasonboard.com>
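The lookup described above, as a minimal sketch; the pixelFormats_
member caching the device's supported formats is an assumed name:

    V4L2PixelFormat V4L2VideoDevice::toV4L2PixelFormat(const PixelFormat &pixelFormat) const
    {
            const std::vector<V4L2PixelFormat> &candidates =
                    V4L2PixelFormat::fromPixelFormat(pixelFormat);

            /* pixelFormats_: std::unordered_set<V4L2PixelFormat> filled at open() time. */
            for (const V4L2PixelFormat &candidate : candidates) {
                    if (pixelFormats_.count(candidate))
                            return candidate;
            }

            return V4L2PixelFormat();
    }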
|
|
Inject a specialization of std::hash<> for the V4L2PixelFormat class in
the std namespace to make it possible to store instances of the class in
the std::unordered_map and std::unordered_set containers.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Tested-by: Paul Elder <paul.elder@ideasonboard.com>
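Such a specialization could look roughly as follows; hashing the FourCC
value through V4L2PixelFormat::fourcc() is an assumption about the
implementation:

    #include <cstdint>
    #include <functional>

    namespace std {

    template<>
    struct hash<libcamera::V4L2PixelFormat> {
            size_t operator()(const libcamera::V4L2PixelFormat &format) const noexcept
            {
                    /* A V4L2PixelFormat wraps a 32-bit FourCC, so reuse the integer hash. */
                    return hash<uint32_t>()(format.fourcc());
            }
    };

    } /* namespace std */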
|
|
Multiple V4L2 formats can be associated with a single PixelFormat.
Now that users of V4L2PixelFormat::fromPixelFormat() have been converted
to use V4L2VideoDevice::toV4L2PixelFormat(), return the full list of
V4L2 formats in order to prepare to match them against the ones
supported by the video device.
The V4L2 compatibility layer, not having any video device to interact
with, is converted to use the first returned format unconditionally.
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Paul Elder <paul.elder@ideasonboard.com>
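A sketch of the changed signature and of the unconditional first-entry
pick in the V4L2 compatibility layer; the exact declarations are
assumptions:

    /* All candidate V4L2 formats associated with a PixelFormat. */
    static const std::vector<V4L2PixelFormat> &fromPixelFormat(const PixelFormat &pixelFormat);

    /* V4L2 compatibility layer: no video device to query, pick the first candidate. */
    V4L2PixelFormat v4l2Format = V4L2PixelFormat::fromPixelFormat(pixelFormat)[0];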
|
|
This is a partial revert of commit 395d43d6d75b ("libcamera:
v4l2_videodevice: Drop toV4L2PixelFormat()").
The function was removed because it incorrectly maps non-contiguous V4L2
format variants (i.e. V4L2_PIX_FMT_YUV420M) to the API version supported
by the video device (single-planar or multi-planar API). It was decided
at the time to remove the function and let its users call
V4L2PixelFormat::fromPixelFormat() directly, which accepts a
'multiplanar' flag.
As we aim to associate multiple V4L2PixelFormats with a single libcamera
format, the next patches will verify which of them is actually supported
by the video device. For now, return the contiguous variant
unconditionally.
Re-introduce V4L2VideoDevice::toV4L2PixelFormat() and convert all
V4L2PixelFormat::fromPixelFormat() users to use it.
The V4L2 compatibility layer is the only outlier, as it doesn't have a
video device to poke, hence it still uses
V4L2PixelFormat::fromPixelFormat().
The next patches will implement the device format matching logic and
handle the non-contiguous plane issue in
V4L2VideoDevice::toV4L2PixelFormat().
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
Remove 4 rogue spaces from the pipeline developer guide.
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
Each libcamera PixelFormat is associated with two V4L2 formats, the
'single' and 'multi' planar variants.
The two versions list the plane-contiguous and non-contiguous format
variants, and an optional argument to V4L2PixelFormat::fromPixelFormat()
was used to select which one to pick.
In order to prepare for the removal of V4L2PixelFormat::fromPixelFormat(),
and considering that no caller in the codebase uses the non-contiguous
format variant, merge the two format vectors into a single one and
default the selection to the first available entry, which is
functionally equivalent to what is currently implemented.
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
The PixelFormatInfo::info(const V4L2PixelFormat &format) function
returns the PixelFormatInfo associated with a V4L2 pixel format.
As the pixelFormatInfo map is indexed by PixelFormat, and a V4L2
format can easily be converted to a PixelFormat, simply forward to the
info() overload that takes a PixelFormat instead of maintaining two
similar implementations of the same function.
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Paul Elder <paul.elder@ideasonboard.com>
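The resulting forwarding overload could look roughly like this, assuming
V4L2PixelFormat::toPixelFormat() is the conversion used:

    const PixelFormatInfo &PixelFormatInfo::info(const V4L2PixelFormat &format)
    {
            /* Convert to the libcamera PixelFormat and reuse the existing lookup. */
            return info(format.toPixelFormat());
    }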
|
|
The DNG specification is based on the TIFF file format and recommends
storing the raw image data in a SubIFD and the Exif tags in an Exif IFD.
Other options are allowed, even if not recommended, such as storing both
the raw image data and the Exif data in IFD0, as done by the TIFF/EP
specification.
libcamera-apps use pyexiv2 to produce DNG files, following the DNG
recommendation, while applications based on picamera2 use PiDNG, which
adopts the TIFF/EP structure. Why it does so is not currently clear (see
https://github.com/schoolpost/PiDNG/issues/65 for discussions on this
topic), but as files based on the DNG and TIFF/EP variants exist in the
wild, both need to be supported by ctt.
Add code to identify which tags are being used, and then load the
metadata from the correct tags.
Signed-off-by: William Vinnicombe <william.vinnicombe@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The ctt would not work if only passed alsc images.
Add alsc_only.py to run alsc calibration only, and modify check_imgs
to allow for no macbeth chart images.
Example usage:

    ./alsc_only.py -i tuning-images/ -o sensor.json

with the same optional arguments as the original ctt.
Signed-off-by: William Vinnicombe <william.vinnicombe@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
When we switch camera mode following a pipeline reconfiguration, the
embedded data parser should be "reset" to discard any data that it may
have cached and that might now be invalid.
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
When the YamlObject::get() function overload that returns a
std::optional was introduced, all tests were moved to it, leaving no
tests for the overload that takes a default value. Reintroduce those
tests.
Reported-by: Florian Sylvestre <fsylvestre@baylibre.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Building with gcc 8 on Debian 10 fails with:

    asyncThread_ = std::thread(std::bind(&Awb::asyncFunc, this));
    ../src/ipa/raspberrypi/controller/rpi/awb.cpp:177:34: note: ‘std::bind’
    is defined in header ‘<functional>’; did you forget to ‘#include
    <functional>’?

Fix that by including <functional> in awb.cpp.
Fixes: c1597f989654 ("ipa: raspberrypi: Use YamlParser to replace dependency on boost")
Reported-by: https://buildbot.libcamera.org/#/builders/6/builds/414
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
The Pwl class uses std::function, defined in <functional>, without
including the header directly. This results in a compilation error with
the Fedora 36 compiler toolchain (gcc 12.1.1):
In file included from ../src/ipa/raspberrypi/controller/rpi/awb.h:14,
from ../src/ipa/raspberrypi/controller/rpi/awb.cpp:14:
../src/ipa/raspberrypi/controller/rpi/../pwl.h:97:18: error: 'std::function' has not been declared
97 | void map(std::function<void(double x, double y)> f) const;
| ^~~
Fix it by including <functional>.
Fixes: c1597f989654 ("ipa: raspberrypi: Use YamlParser to replace dependency on boost")
Signed-off-by: Eric Curtin <ecurtin@redhat.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Usage of the std::enable_if_t alias doesn't need to be prefixed with
typename. Drop the unnecessary keyword.
Reported-by: Jacopo Mondi <jacopo@jmondi.org>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
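A small illustration of the cleanup; the surrounding function template
is hypothetical:

    #include <type_traits>

    /*
     * Before (redundant keyword, std::enable_if_t is already a type):
     * template<typename T, typename std::enable_if_t<std::is_integral_v<T>> * = nullptr>
     */
    template<typename T, std::enable_if_t<std::is_integral_v<T>> * = nullptr>
    T zero()
    {
            return T{0};
    }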
|
|
The Controller(char const *jsonFilename) constructor is not valid
anymore since Controller::read() can now return an error on failure.
Additionally, this constructor is not actually used, so remove it.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Add a ColorProcessing algorithm that is in charge of managing the
brightness, contrast and saturation controls. These controls are
currently based on user controls.
Signed-off-by: Florian Sylvestre <fsylvestre@baylibre.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Denoise and Sharpness filters will be applied by RkISP1 during the
demosaicing step. The denoise filter is responsible for removing noise
from the image, while the sharpness filter will enhance its acutance.
Add a filter algorithm with denoise and sharpness values based on user
controls.
Signed-off-by: Florian Sylvestre <fsylvestre@baylibre.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The Defect Pixel Cluster Correction algorithm is responsible for
minimizing the impact of defective pixels. The on-the-fly method is
currently used, based on coefficients provided by the tuning file.
Signed-off-by: Florian Sylvestre <fsylvestre@baylibre.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The Lens Shading Correction algorithm applies multipliers to all pixels
to compensate for the lens shading effect. The coefficients are
specified in a downscaled table in the YAML tuning file.
Signed-off-by: Florian Sylvestre <fsylvestre@baylibre.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The GammaSensorLinearization algorithm linearizes the sensor output to
compensate for the sensor non-linearities by applying piecewise linear
functions to the red, green and blue channels.
Signed-off-by: Florian Sylvestre <fsylvestre@baylibre.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
A proposal has been made on the linux-media mailing list [1] to improve
the kernel interface for configuration of the Defective Pixel Cluster
Correction (DPCC).
Update the local copy of rkisp1-config.h to match the proposal.
[1] https://lore.kernel.org/linux-media/20220616160456.21549-1-laurent.pinchart@ideasonboard.com/
Signed-off-by: Florian Sylvestre <fsylvestre@baylibre.com>
Acked-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The std::optional<T>::value_or(U &&default_value) function returns the
contained value if available, or default_value if the std::optional has
no value. If the desired default value is a default-constructed T, the
obvious option is to call std::optional<T>::value_or(T{}). This approach
has two drawbacks:
- The default_value T{} is constructed even if the std::optional
  instance has a value, which impacts efficiency.
- The T{} default constructor needs to be spelled out explicitly in the
  value_or() call, leading to long lines if the type is complex.
Introduce a defopt variable that solves these issues by providing a
value that can be passed to std::optional<T>::value_or() and gets
implicitly converted to a default-constructed T.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
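A minimal sketch of the idea; the helper type name and the call site are
assumptions, not the actual libcamera definition:

    #include <optional>
    #include <string>

    struct Defopt {
            /* Implicitly converts to a default-constructed T; with value_or()
             * the conversion only happens when the optional is empty. */
            template<typename T>
            operator T() const
            {
                    return T{};
            }
    };

    constexpr Defopt defopt;

    /* Usage: no need to spell out std::string{} at the call site. */
    std::string name(const std::optional<std::string> &opt)
    {
            return opt.value_or(defopt);
    }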
|
|
The SDLTexture::update() function isn't meant to modify the data it
receives. Make the Span element type const to ensure this at compile
time. While at it, pass the Span by value instead of by reference, as a
Span is only a pointer and a size, which fits in registers and avoids
pointer dereferences in the callee.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
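The kind of signature change described, as a sketch; the exact
declaration in the SDL texture class is assumed:

    /* Before: non-const elements, passed by reference. */
    virtual void update(const libcamera::Span<uint8_t> &data) = 0;

    /* After: const elements, and the Span (a pointer plus a size) passed by value. */
    virtual void update(libcamera::Span<const uint8_t> data) = 0;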
|
|
We were only using the libjpeg functionality of SDL2_image. Use libjpeg
directly instead to reduce our dependency count, as it is a more
commonly available library.
Signed-off-by: Eric Curtin <ecurtin@redhat.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
Replace the manual implementation of the readList() functions with
YamlObject::getList().
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Tested-by: Naushir Patuck <naush@raspberrypi.com>
|
|
Allow retrieving a YAML list of any already supported type into a
std::vector.
Signed-off-by: Florian Sylvestre <fsylvestre@baylibre.com>
Tested-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
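A hedged usage sketch; the "gains" key, the function and the header path
are hypothetical:

    #include <cerrno>
    #include <optional>
    #include <vector>

    #include "libcamera/internal/yaml_parser.h"

    int parseGains(const libcamera::YamlObject &yamlObject)
    {
            /* Parse a YAML list of doubles into a std::vector, failing
             * cleanly if the key is missing or malformed. */
            std::optional<std::vector<double>> gains =
                    yamlObject["gains"].getList<double>();
            if (!gains)
                    return -EINVAL;

            return static_cast<int>(gains->size());
    }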
|
|
Use the convert_tuning.py script to convert all existing tuning files to
version 2.0.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Naushir Patuck <naush@raspberrypi.com>
|
|
Add a script to convert the Raspberry Pi camera tuning file format from version
1.0 to 2.0. This script also adds a root-level version key, set to 2.0, to the
config file, allowing the controller to distinguish between the two formats.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Naushir Patuck <naush@raspberrypi.com>
|
|
Update the ctt_pretty_print_json.py script to generate the new version 2.0
format camera tuning file. This script can be called through the command line
to prettify an existing JSON file, or programmatically by the CTT to format a
new JSON config dictionary.
Update the CTT to produce a version 2.0 format JSON structure and use
ctt_pretty_print_json.pretty_print to prettify the output.
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Naushir Patuck <naush@raspberrypi.com>
|