|
The new Image class represents a multi-planar image with direct access
to pixel data. It currently duplicates the functionality of the
MappedFrameBuffer class, which is internal to libcamera, and will serve
as a design playground to improve the API until it is considered ready
to be made part of the libcamera public API.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
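A minimal sketch of what the Image interface could look like; the header
paths, method names and MapMode values below are assumptions for
illustration, not the final public API:

  #include <memory>
  #include <stdint.h>
  #include <vector>

  #include <libcamera/base/span.h>
  #include <libcamera/framebuffer.h>

  using namespace libcamera;

  class Image
  {
  public:
      enum class MapMode {
          ReadOnly,
          WriteOnly,
          ReadWrite,
      };

      /* Map all planes of the buffer and wrap them in an Image. */
      static std::unique_ptr<Image> fromFrameBuffer(FrameBuffer *buffer,
                                                    MapMode mode);

      /* Pixel data of the given plane, valid for the lifetime of the Image. */
      Span<uint8_t> data(unsigned int plane);

  private:
      std::vector<Span<uint8_t>> planes_;
  };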
|
|
The JPEG post-processor uses MappedFrameBuffer to access pixel data, but
only uses data from the first plane. Pass the vector of planes to the
encode() function to correctly handle multi-planar formats (currently
limited to NV12).
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Umang Jain <umang.jain@ideasonboard.com>
Tested-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
When calculating the luma line address, the image width is used instead
of the stride. Without padding at the end of the line, the values
should be identical, but this is conceptually incorrect in any case. Fix
it.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
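For reference, a minimal sketch of the intended computation; the
variable names are illustrative:

  /* Address of luma line y: the stride may exceed the width when lines are padded. */
  uint8_t *line = lumaBase + y * stride;  /* correct */
  uint8_t *bad = lumaBase + y * width;    /* only equivalent when stride == width */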
|
|
Now that libcamera correctly supports frame buffers with different
dmabuf for each plane, remove the assumption that a single dmabuf is
used.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The number of metadata planes should always match the number of frame
buffer planes. Enforce this by making the vector private and providing
accessor functions.
As this changes the public API, update all in-tree users accordingly.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
|
|
The metadata planes are allocated by V4L2VideoDevice when dequeuing a
buffer. This causes the metadata planes to only be allocated after a
buffer gets dequeued, and doesn't provide any strong guarantee that
their number matches the number of FrameBuffer planes. The lack of this
invariant makes the FrameBuffer class fragile.
As a first step towards fixing this, allocate the metadata planes when
the FrameBuffer is constructed. The FrameMetadata API should be further
improved by preventing a change in the number of planes.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
|
|
Replace a manual counter with the utils::enumerate() utility function.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
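An illustration of the replacement pattern; the buffer variable and the
process() call are hypothetical:

  #include <libcamera/base/utils.h>

  /* Before: a manual counter alongside a range-based for loop. */
  unsigned int i = 0;
  for (const FrameBuffer::Plane &plane : buffer->planes()) {
      process(i, plane);
      i++;
  }

  /* After: utils::enumerate() yields the index and the element together. */
  for (auto [index, plane] : utils::enumerate(buffer->planes())) {
      process(index, plane);
  }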
|
|
When dequeueing a buffer from a V4L2VideoDevice, the number of planes in
the FrameBuffer may not match the number of V4L2 buffer planes if the
PixelFormat is multi-planar (has multiple colour planes) and the V4L2
format is single-planar (has a single buffer plane). In this case, we
need to split the single V4L2 buffer plane into FrameBuffer planes. Do
so, and add checks to reject invalid V4L2 buffers in case of a driver
issue.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
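A simplified sketch of the splitting step, assuming the
PixelFormatInfo::planeSize() helpers added elsewhere in this series;
v4l2Plane, info, size and stride stand in for the dequeued V4L2 plane
and the cached format information:

  /*
   * Split the single contiguous V4L2 buffer plane into one FrameBuffer
   * plane per colour plane of the pixel format.
   */
  std::vector<FrameBuffer::Plane> planes;
  unsigned int offset = 0;

  for (unsigned int i = 0; i < info.numPlanes(); ++i) {
      unsigned int length = info.planeSize(size, i, stride);

      FrameBuffer::Plane plane;
      plane.fd = v4l2Plane.fd;
      plane.offset = offset;
      plane.length = length;

      planes.push_back(plane);
      offset += length;
  }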
|
|
When queueing a buffer to a V4L2VideoDevice, the number of planes in the
FrameBuffer may not match the number of V4L2 buffer planes if the
PixelFormat is multi-planar (has multiple colour planes) and the V4L2
format is single-planar (has a single buffer plane). In this case, we
need to coalesce all FrameBuffer planes into a single V4L2 buffer plane.
Do so, and add validity checks to reject frame buffers that can't be
described using a single V4L2 buffer plane.
This change prepares for proper multi-planar support, but isn't expected
to result in a change of behaviour with existing pipeline handlers, as
none of them queue an output buffer with multiple FrameBuffer planes or
use non-contiguous buffers for either capture or output.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
|
|
When creating FrameBuffer instances, the V4L2VideoDevice computes plane
offsets using the minimal stride for the format. This doesn't always produce
a valid result when the device requires padding at the end of lines. Fix
it by computing offsets using the stride reported by V4L2.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
The V4L2VideoDevice::createBuffer() function calculates offsets manually
when using a multi-planar pixel format and a single-planar V4L2 format.
The process isn't trivial; document it.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Cache the PixelFormatInfo instead of looking it up in every call to
createBuffer(). This prepares for usage of the info in queueBuffer(), to
avoid a lookup every time a buffer is queued.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
|
|
Multi-planar frame buffers can store their planes contiguously in
memory, or split them in discontiguous memory areas. Add a private
function to check which of these two categories a frame buffer belongs
to. This will be used to correctly handle the differences between
the V4L2 single and multi planar APIs.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
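A sketch of one way to implement the check, assuming planes are
contiguous when they are backed by the same dmabuf and each plane starts
where the previous one ends; comparing raw fd numbers is a
simplification, as the real check may need to compare inodes instead
(see the inode helper elsewhere in this log):

  #include <cstddef>
  #include <vector>

  #include <libcamera/framebuffer.h>

  using namespace libcamera;

  bool isContiguous(const std::vector<FrameBuffer::Plane> &planes)
  {
      for (size_t i = 1; i < planes.size(); ++i) {
          /* All planes must be backed by the same dmabuf... */
          if (planes[i].fd.fd() != planes[0].fd.fd())
              return false;

          /* ...and follow each other without gaps. */
          if (planes[i].offset != planes[i - 1].offset + planes[i - 1].length)
              return false;
      }

      return true;
  }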
|
|
The FrameBuffer::planes() function checks that planes are correctly
initialized with an offset. This can be done at construction time
instead, as the planes are constant. The backtrace generated by the
assertion will show where the faulty frame buffer is created instead of
where it is used, easing debugging.
As the runtime overhead is reduced, there's no longer a real need to
drop the assertion in the future; it can be useful to ensure that the
planes are correctly populated by the caller. Drop the comment that
calls for removing the check.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
|
|
V4L2 describes multi-planar formats with different 4CCs depending on
whether or not the planes are stored contiguously in memory. Support
this when translating between PixelFormat and V4L2PixelFormat.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
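For illustration, NV12 has two colour planes that may be laid out
contiguously or in separate memory areas; V4L2 uses a different 4CC for
each layout, so the translation takes a flag (the two-argument form of
fromPixelFormat() is assumed here):

  /* Contiguous layout: a single V4L2 buffer plane (V4L2_PIX_FMT_NV12). */
  V4L2PixelFormat contiguous =
      V4L2PixelFormat::fromPixelFormat(formats::NV12, false);

  /* Non-contiguous layout: one buffer plane per colour plane (V4L2_PIX_FMT_NV12M). */
  V4L2PixelFormat nonContiguous =
      V4L2PixelFormat::fromPixelFormat(formats::NV12, true);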
|
|
Add two helper functions to the PixelFormatInfo class to compute the
byte size of a given plane, taking the frame size, the stride, the
alignment constraints and the vertical subsampling into account.
Use the new functions through the code base to replace manual
implementations.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
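The computation the helpers encapsulate is roughly the following; this
is a sketch rather than the exact implementation, the member names are
assumed and alignment handling is omitted:

  /*
   * Byte size of the given plane for a frame of the given size: the
   * number of lines is the frame height divided by the plane's vertical
   * subsampling factor, with each line occupying 'stride' bytes.
   */
  unsigned int planeSize(const Size &size, unsigned int plane, unsigned int stride)
  {
      unsigned int vertSubSample = planes[plane].verticalSubSampling;
      unsigned int lines = (size.height + vertSubSample - 1) / vertSubSample;

      return stride * lines;
  }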
|
|
Move the PixelFormatPlaneInfo structure within the PixelFormatInfo class
definition and rename it to Plane, to align the naming scheme with other
parts of libcamera, such as FrameBuffer::Plane or FrameMetadata::Plane.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
Replace manual searches for V4L2 pixel format in the PixelFormatInfo
with the V4L2PixelFormat::fromPixelFormat() helper function. This
prepares for multi-planar support that will modify how V4L2 pixel
formats are stored in PixelFormatInfo.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
The V4L2VideoDevice::toV4L2PixelFormat() function is incorrectly
implemented, as it will pick a multi-planar format if the device
supports the multi-planar API, even if only single-planar formats are
supported. This currently works because the implementation calls
V4L2PixelFormat::fromPixelFormat(), which ignores the multiplanar
argument and always returns a single-planar format.
Fixing this isn't trivial. As we don't need to support multi-planar V4L2
formats at this point, drop the function instead of pretending
everything is fine, and call V4L2PixelFormat::fromPixelFormat() directly
from pipeline handlers. As the single-planar case is the most common,
set the multiplanar argument to false by default to avoid long lines.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
|
|
The inode is useful to check if two file descriptors refer to the same
file. Add a function to retrieve it.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
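A sketch of how such a helper can be built on fstat(); the function name
is illustrative and error handling is simplified:

  #include <sys/stat.h>
  #include <sys/types.h>

  ino_t fileInode(int fd)
  {
      struct stat st;

      if (fstat(fd, &st) < 0)
          return 0;  /* 0 denotes "unknown" in this sketch */

      /*
       * Strictly speaking two fds refer to the same file only if both
       * st_dev and st_ino match; the inode alone is usually enough for
       * dmabufs.
       */
      return st.st_ino;
  }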
|
|
Add camera sensor properties for the Hynix hi846 sensor. The part is
also called YACG4D0C9SHC and a datasheet can be found at
https://product.skhynix.com/products/cis/cis.go
This is the selfie camera in the Librem 5 phone.
Signed-off-by: Martin Kepplinger <martin.kepplinger@puri.sm>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
.cpp files are usually equipped with a using namespace libcamera;
directive, hence it is unnecessary to mention the libcamera namespace
explicitly in certain places.
While at it, fix a small typo noticed in a comment.
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
|
|
Replace the open-coded PixelFormat lookup with the
V4L2PixelFormat::toPixelFormat() helper function. This simplifies the
implementation.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
Many signals used in internal and public APIs carry the emitter pointer
as a signal argument. This was done to allow slots connected to multiple
signal instances to differentiate between emitters. While starting from
a good intention of facilitating the implementation of slots, it turned
out to be a bad API design as the signal isn't meant to know what it
will be connected to, and thus shouldn't carry parameters that are
solely meant to support a use case specific to the connected slot.
These pointers turn out to be unused in all slots but one. In the only
case where it is needed, it can be obtained by wrapping the slot in a
lambda function when connecting the signal. Do so, and drop the emitter
pointer from all signals.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
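Where a slot genuinely needs the emitter, the connection can capture it
explicitly instead; Device, Handler and frameDone below are hypothetical
names:

  /* Before: the signal carried the emitter as its first argument. */
  device->frameDone.connect(this, &Handler::frameDone);

  /*
   * After: the slot keeps its emitter argument, now provided by a lambda
   * that captures the emitter at connection time.
   */
  device->frameDone.connect(this, [this, device](FrameBuffer *buffer) {
      frameDone(device, buffer);
  });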
|
|
In many cases, the emitter object passed as a pointer from signals to
slots is also available as a class member. Use the class member when
this occurs, to prepare for removal of the emitter object pointer from
signals.
In test/event.cpp, this additionally requires moving the EventNotifier
to a class member.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
It can be useful to connect a signal to a functor, and in particular a
lambda function, while still operating in the context of a receiver
object (to support both object-based disconnection and queued
connections to Object instances).
Add a BoundMethodFunctor class to bind a functor, and a corresponding
Signal::connect() function. There is no corresponding disconnect()
function, as a lambda passed to connect() can't be later passed to
disconnect(). Disconnection typically uses disconnect(T *object), which
will cover the vast majority of use cases.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
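A minimal sketch of the new connection form, assuming a receiver derived
from Object; the class and header paths are illustrative:

  #include <libcamera/base/object.h>
  #include <libcamera/base/signal.h>

  using namespace libcamera;

  class Receiver : public Object
  {
  public:
      void watch(Signal<int> &signal)
      {
          /*
           * The lambda runs in the context of this Object, so queued
           * delivery across threads works as for member function slots,
           * and the connection is removed by signal.disconnect(this).
           */
          signal.connect(this, [this](int value) { last_ = value; });
      }

  private:
      int last_ = 0;
  };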
|
|
When disconnecting a signal from a receiver, it is usually not necessary
to specify the receiver's slot function explicitly, as the signal is
often connected to a single slot for a given receiver. We can thus use a
simpler version of Signal::disconnect() that takes a pointer to the
receiver object only. This reduces code size, as the disconnect()
function is a template function.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
|
|
libcamera isn't supposed to log messages after the logger is destroyed,
as the global logger instance is destroyed after the main() function
returns, and the camera manager is supposed to have been stopped and
destroyed before that.
This rule is difficult to enforce in the V4L2 compat implementation, as
there is no location where we can destroy the camera manager manually
before the logger is destroyed. This results in a use-after-free
condition when the camera manager gets stopped during destruction.
Fix it by not trying to print log messages when the global logger
instance has been destroyed.
This is a bit of a hack, but hopefully not too bad. There could be race
conditions when a CameraManager instance is destroyed as part of the
destruction of global variables (as the V4L2 compat layer does: it wraps
CameraManager in a singleton V4L2CompatManager class and destroys it
when V4L2CompatManager is destroyed), as the CameraManager thread will
still be running when the logger gets destroyed. This doesn't cause any
regression, however, as we destroy the logger without any safeguard
measure today anyway.
There are other options that could be considered. Forcing destruction of
the logger after the camera manager in the V4L2 compat layer is one of
them, but turned out to be difficult. For instance care would need to be
taken *not* to log any message in the mmap() wrapper if the fd doesn't
match a wrapped camera, as mmap() is called very early in the
initialization process, before libcamera and the logger get initialized.
The resulting implementation would likely be fairly complex.
Another option could be to wrap the logger with a shared pointer, and
keep a reference to it in CameraManager. That's more intrusive, and it's
not clear if it would be worth it.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
|
|
The SimplePipelineHandler activeCamera_ member pointer is set but never
used. Drop it.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Martin Kepplinger <martin.kepplinger@puri.sm>
|
|
To use multiple cameras at the same time, a per-camera buffer ready
handler is needed. Move the bufferReady() function connected to the
V4L2VideoDevice bufferReady signal from the SimplePipelineHandler class
to the SimpleCameraData class.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Martin Kepplinger <martin.kepplinger@puri.sm>
|
|
To use multiple cameras at the same time, each camera instance will need
its own converter. Store the converter in SimpleCameraData, and open it
in init().
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Martin Kepplinger <martin.kepplinger@puri.sm>
|
|
The cameras created by the same pipeline handler instance may share
hardware resources, prohibiting usage of multiple cameras concurrently.
Implement a heuristic to reserve resources and handle mutual exclusion
in a generic way.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Martin Kepplinger <martin.kepplinger@puri.sm>
|
|
Move opening of video devices to match() time, the same way as subdevs
are opened, to make the handling of V4L2 video devices and subdevices
more consistent.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Martin Kepplinger <martin.kepplinger@puri.sm>
|
|
Merge the SimplePipelineHandler videos_ and subdevs_ maps, which
respectively store V4L2 video devices and subdevices associated with
entities, into a single entities_ map that contains an EntityData
structure. This gathers all data about entities in a single place,
allowing for easy extension of entity data in the future.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Martin Kepplinger <martin.kepplinger@puri.sm>
|
|
Store the entity corresponding to the video node at the end of the
pipeline in the SimpleCameraData::entities_ list. This requires special
handling of the video node in the loops that iterate over all entities,
but will be useful to implement mutually exclusive access to entities
for concurrent camera usage.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Martin Kepplinger <martin.kepplinger@puri.sm>
|
|
The video device is currently opened in the SimpleCameraData
constructor. This requires opening the video devices on-demand the first
time they're accessed, which gets in the way of refactoring of
per-entity data storage in the pipeline handler.
Move opening of the video device to the SimpleCameraData::init()
function. The on-demand open mechanism isn't touched yet, it will be
refactored later.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Martin Kepplinger <martin.kepplinger@puri.sm>
|
|
Record the sink and source pads through which an entity is traversed in
the list of entities stored in the camera data. This prepares for
implementing mutually exclusive access to entities between cameras.
The debug message that displays detected pipelines now prints pads to
describe the pipeline more precisely:
[0:00:35.901275750] [260] DEBUG SimplePipeline simple.cpp:404 Found pipeline: [imx290 2-001a|0] -> [0|csis-32e40000.csi|1] -> [0|mxc_isi.0|1] -> [0|mxc_isi.0.capture]
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Tested-by: Martin Kepplinger <martin.kepplinger@puri.sm>
|
|
Add a new field to the MediaEntity class to identify the type of
interface it exposes to userspace. The MediaEntity constructor is
changed to take a media_v2_interface pointer instead of just the device
node major and minor to have access to the interface type.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Tested-by: Martin Kepplinger <martin.kepplinger@puri.sm>
|
|
"weighted", derived from the verb "to weight", comes from Middle English
weight, weiȝte, weght, wight, from Old English wiht, ġewiht, from
Proto-Germanic *wihtiz, from Proto-Indo-European *weǵʰ-. In none of
those does the t come before the h.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jean-Michel Hautbois <jeanmichel.hautbois@ideasonboard.com>
|
|
MappedFrameBuffer::maps() returns planes_. Rename the function to
planes() to match.
Signed-off-by: Hirokazu Honda <hiroh@chromium.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The bufferLength_ member variable is checked to have a positive value
before being used, to catch usage before the variable is set. The
variable is initialized to zero at construction time, which renders the
checks useless.
Fix this by initializing the variable to -1 at construction time.
Fixes: c5e2ed7806be ("android: camera_buffer: Map buffer in the first plane() call")
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
|
|
If the camera device does not support the MANUAL_SENSOR capability,
there is no point in generating a request template for the manual
capture use case.
This change fixes CTS tests
android.hardware.camera2.cts.CameraDeviceTest#testCameraDeviceManualTemplate
android.hardware.camera2.cts.NativeCameraDeviceTest#testCameraDeviceCreateCaptureRequest
for devices that do not support the MANUAL_SENSOR capability.
Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
Do not compare the ratios at full precision, as it might lead to an
absurd selection of the sensor size for a relatively low requested
resolution.
For example:
The imx258 driver supports the following sensor resolutions:
- 4208x3118 = 1.349583066
- 2104x1560 = 1.348717949
- 1048x780 = 1.343589744
It can be inferred that the aspect ratios differ from each other only
by a small fraction. It does not make sense to select 4208x3118 for a
requested size of, say, 640x480 or 1280x720, which is what is happening
currently.
($) cam -c1 -swidth=640,height=480,role=raw
- CIO2 configuration: 4208x3118-SGRBG10_IPU3 [*]
In order to address this constraint, only compare the ratio with single
precision to make a better decision on the sensor resolution policy
selection.
($) cam -c1 -srole=raw,width=640,height=480
- CIO2 configuration: 1048x780-SGRBG10_IPU3 [*]
[*] Please revert 0536a9aa7189 ("ipu3: Disallow raw only camera configuration")
if the configuration is reported as invalid.
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Tested-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
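A sketch of the reduced-precision comparison; the variable names and the
tolerance are assumptions, not the exact implementation:

  #include <cmath>

  /*
   * Compare aspect ratios in single precision so that tiny differences
   * (e.g. 1.3496 vs. 1.3487) don't outweigh the resolution criterion.
   */
  float desiredRatio = static_cast<float>(desired.width) / desired.height;
  float sensorRatio = static_cast<float>(sensorSize.width) / sensorSize.height;

  bool sameRatio = std::fabs(desiredRatio - sensorRatio) < 0.01f;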
|
|
The current implementation of getSensorFormat() prioritizes sensor
sizes that match the output size FOV ratio.
Modify the frame size selection procedure to prioritize resolutions
with the same FOV as the sensor's native one, to avoid cropping in the
sensor pixel array.
For example:
- on a Soraka device equipped with ov13858 as back sensor, with a
native resolution of 4224x3136 (4:3), when requested to select the
sensor output size to produce 1080p (16:9) a frame size of 2112x1188
(16:9) is selected causing the ImgU configuration procedure to fail.
If a resolution with the same FOV as the sensor's native size, such
as 2112x1568 (4:3), is selected the pipeline works correctly.
Suggested-by: Jacopo Mondi <jacopo@jmondi.org>
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Tested-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
This prepares a base to introduce custom selection of sensor format
based on platform (Soraka or Nautilus) constraints. The changes in
selection policy will be introduced in a subsequent patch.
The function is copied from CameraSensor::getFormat() into the IPU3
pipeline handler code, to be later changed on top.
The copy is not verbatim and has a minor change as follows:
CameraSensor::getFormat() has access to a V4L2Subdevice::Formats map
internally and uses it directly to iterate over the supported camera
sensor frame sizes. The copy is adapted to use
CameraSensor::sizes(mbusCode) instead, to enumerate the supported frame
sizes as per the mbusCodesToPixelFormat map.
No functional changes in this patch.
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Tested-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
In CameraSensor, the mbusCodes() and sizes() accessor functions
retrieve all the supported media bus codes and all the supported sizes
respectively. This is quite limiting, as the caller isn't in a position
to tell which range of sizes is supported for a particular mbusCode.
The caller is most likely interested in the sizes supported for a
particular media bus code. This patch transforms the existing
CameraSensor::sizes() into CameraSensor::sizes(mbusCode) to achieve
that goal.
The patch also transforms the existing CIO2Device::sizes() in the IPU3
pipeline handler to CIO2Device::sizes(PixelFormat) on a similar principle. The
function is then plumbed to CameraSensor::sizes(mbusCode) to enumerate
the per-format sizes as required in
PipelineHandlerIPU3::generateConfiguration().
Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
Tested-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
|
|
An offset variable is introduced in FrameBuffer::Plane. In order to
detect that a plane is used while its offset is not set, this adds an
assertion to FrameBuffer::planes(). It should be removed in the
future.
Signed-off-by: Hirokazu Honda <hiroh@chromium.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
CameraDevice::CreateFrameBuffer() fills each FrameBuffer::Plane::length
with the length of the whole buffer. It should rather be the length of
the plane. This also changes CreateFrameBuffer() to fill the offset of
each FrameBuffer::Plane.
Signed-off-by: Hirokazu Honda <hiroh@chromium.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
V4L2VideoDevice::createBuffer() creates the same number of
FrameBuffer::Planes as V4L2 format planes. Therefore, if the V4L2 format
is a single-planar format, the number of created FrameBuffer::Planes is
1. It should rather create the same number of FrameBuffer::Planes as the
colour format planes.
Signed-off-by: Hirokazu Honda <hiroh@chromium.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
|
|
The plane length is the size of the plane itself. The buffer length to
be allocated for a plane is the sum of the offset and the length of the
FrameBuffer::Plane.
Signed-off-by: Hirokazu Honda <hiroh@chromium.org>
Reviewed-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
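In other words, the dmabuf backing a plane must be large enough to also
hold the data that precedes it; a one-line sketch:

  /* Minimum size of the dmabuf backing this plane. */
  size_t minLength = plane.offset + plane.length;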
|