path: root/src/android/camera_stream.cpp
Age  Commit message  Author
2021-10-26  android: post_processor: Make post processing async  (Umang Jain)
Introduce a dedicated worker class derived from libcamera::Thread. The worker class maintains a queue of post-processing requests and waits for a request to become available. It processes requests in FIFO order, de-queuing each one only after it has been processed. The entire post-processing handling iteration is locked under streamsProcessMutex_, which lets us queue all the post-processing requests at once before any of the post-processing completion slots (streamProcessingComplete()) is allowed to run for requests completing in parallel. This helps us manage both synchronous and asynchronous errors encountered during the entire post-processing operation. Since a post-processing operation can even complete after CameraDevice::requestComplete() has returned, we need to check and complete the descriptor from streamProcessingComplete() running in the PostProcessorWorker's thread. This patch also implements a flush() for the PostProcessorWorker class, which is responsible for purging post-processing requests queued up while a camera is stopping/flushing. It is hooked up to CameraStream::flush(), which isn't used currently but will be used when we handle flush/stop scenarios in greater detail subsequently (in a different patchset). The libcamera request completion handler CameraDevice::requestComplete() assumes that the request that has just completed is at the front of the queue. Now that the post-processor runs asynchronously, this isn't true anymore: a request being post-processed will stay in the queue while a new libcamera request may complete. Remove that assumption, and use the request cookie to obtain the Camera3RequestDescriptor. Signed-off-by: Umang Jain <umang.jain@ideasonboard.com> Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Reviewed-by: Hirokazu Honda <hiroh@chromium.org> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
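A minimal sketch of the queued-worker pattern described above; member and type names beyond those mentioned in the commit message are illustrative, not the exact implementation:

#include <condition_variable>
#include <queue>

#include <libcamera/base/mutex.h>
#include <libcamera/base/thread.h>

class PostProcessorWorker : public libcamera::Thread
{
public:
	void queueRequest(Camera3RequestDescriptor::StreamBuffer *buffer)
	{
		libcamera::MutexLocker locker(mutex_);
		requests_.push(buffer);
		cv_.notify_one();
	}

	void flush()
	{
		/* Purge requests queued while the camera is stopping/flushing. */
		libcamera::MutexLocker locker(mutex_);
		while (!requests_.empty())
			requests_.pop();
	}

protected:
	void run() override
	{
		libcamera::MutexLocker locker(mutex_);
		while (running_) {
			cv_.wait(locker, [&] { return !running_ || !requests_.empty(); });
			if (!running_)
				break;

			/* Process in FIFO order, de-queue only once processed. */
			Camera3RequestDescriptor::StreamBuffer *buffer = requests_.front();
			locker.unlock();
			postProcessor_->process(buffer);
			locker.lock();
			requests_.pop();
		}
	}

private:
	PostProcessor *postProcessor_;
	libcamera::Mutex mutex_;
	std::condition_variable cv_;
	std::queue<Camera3RequestDescriptor::StreamBuffer *> requests_;
	bool running_ = true;
};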
2021-10-26  android: post_processor: Drop return value for process()  (Umang Jain)
PostProcessor::process() is invoked by the CameraStream class in case any post-processing is required for the camera stream, and failure or success used to be checked via the value returned by CameraStream::process(). Now that the post-processor notifies completion of the post-processing operation, we can drop the return value of PostProcessor::process(). The status of post-processing is passed to CameraDevice::streamProcessingComplete() by the slot of the PostProcessor::processComplete signal. Signed-off-by: Umang Jain <umang.jain@ideasonboard.com> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
2021-10-26  android: Track and notify post processing of streams  (Umang Jain)
Notify that the post processing for a request has been completed, via a signal. The signal is emitted with a context pointer along with the status of the buffer. The function CameraDevice::streamProcessingComplete() will finally set the status on the request descriptor and complete the descriptor if all the streams requiring post processing are completed. If the buffer status is an error, notify the framework and set the overall error status on the descriptor via setBufferStatus(). We need to track the number of streams requiring post-processing per Camera3RequestDescriptor (i.e. per capture request). Introduce a std::map to track the post-processing of streams. Entries are dropped from the map when post-processing of a particular stream completes (or on error paths). A std::map is selected for tracking post-processing requests since we will move post-processing to be asynchronous in subsequent commits; a vector or queue would not be suitable, as the sequential order of post-processing completion of the various requests won't be guaranteed then. A streamsProcessMutex_ has been introduced here as well, which will guard access to the descriptor's pendingStreamsToProcess_ when post-processing is made asynchronous in subsequent commits. Signed-off-by: Umang Jain <umang.jain@ideasonboard.com> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
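A rough sketch of the tracking described above; ProcessStatus and completeDescriptor() are illustrative names, while the map and mutex names follow the commit message:

/* In Camera3RequestDescriptor: streams not yet post-processed. */
std::map<CameraStream *, Camera3RequestDescriptor::StreamBuffer *> pendingStreamsToProcess_;

void CameraDevice::streamProcessingComplete(Camera3RequestDescriptor::StreamBuffer *buffer,
					    ProcessStatus status)
{
	/* On error, notify the framework and mark the descriptor as failed. */
	if (status == ProcessStatus::Error)
		setBufferStatus(buffer, CAMERA3_BUFFER_STATUS_ERROR);

	Camera3RequestDescriptor *descriptor = buffer->request;

	libcamera::MutexLocker locker(streamsProcessMutex_);
	descriptor->pendingStreamsToProcess_.erase(buffer->stream);
	if (descriptor->pendingStreamsToProcess_.empty())
		completeDescriptor(descriptor);
}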
2021-10-26  android: post_processor: Consolidate contextual information  (Umang Jain)
Save and provide the context for the post-processor of a camera stream via Camera3RequestDescriptor::StreamBuffer. We extend the structure to include source and destination buffers for the post processor, along with the CameraStream::Type::Internal buffer pointer (if any). In addition, a back pointer to the Camera3RequestDescriptor is convenient to access the overall descriptor (status, metadata, settings, etc.). Also, migrate the CameraStream::process() and PostProcessor::process() signatures to use Camera3RequestDescriptor::StreamBuffer only. This will be helpful when we move to async post-processing in subsequent commits. Signed-off-by: Umang Jain <umang.jain@ideasonboard.com> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
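The extended context structure then looks roughly like this (a sketch; exact member types are illustrative):

struct Camera3RequestDescriptor::StreamBuffer {
	CameraStream *stream;                    /* stream this buffer belongs to */
	FrameBuffer *srcBuffer;                  /* post-processor input */
	std::unique_ptr<CameraBuffer> dstBuffer; /* post-processor output */
	FrameBuffer *internalBuffer;             /* CameraStream::Type::Internal buffer, if any */
	Camera3RequestDescriptor *request;       /* back pointer to the overall descriptor */
};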
2021-10-26  android: camera_stream: Replace post-processor nullptr check  (Umang Jain)
Instead of checking postProcessor for nullptr, replace the check with an assertion that the camera stream's type is not Type::Direct, since it makes no sense to call CameraStream::process() on a Type::Direct camera stream. Signed-off-by: Umang Jain <umang.jain@ideasonboard.com> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Reviewed-by: Hirokazu Honda <hiroh@chromium.org> Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2021-10-19  android: camera_stream: Define explicit move constructor and destructors  (Laurent Pinchart)
There's no need for the move constructor and the destructor to be inline. Define them explicitly, with default implementations. This allows usage of the CameraStream class without a complete definition of the PostProcessor class. Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Reviewed-by: Umang Jain <umang.jain@ideasonboard.com> Reviewed-by: Jacopo Mondi <jacopo@jmondi.org> Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
2021-10-19  android: camera_stream: Don't close fence if wait fails  (Laurent Pinchart)
The camera HAL API requires that any acquire fence that hasn't been waited on be sent back to the framework as a release fence. The CameraDevice already copies the acquire fence to the release fence when signaling request completion, but the CameraStream incorrectly closes the fence when a wait fails and sets it to -1. Fix it. Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Reviewed-by: Umang Jain <umang.jain@ideasonboard.com> Reviewed-by: Jacopo Mondi <jacopo@jmondi.org> Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
2021-10-19  android: camera_request: Don't embed full camera3_stream_buffer_t  (Laurent Pinchart)
The camera3_stream_buffer_t structure is meant to communicate between the camera service and the HAL. These are short-lived structures that don't outlive the .process_capture_request() operation (when queuing requests) or the .process_capture_result() callback. We currently store copies of the camera3_stream_buffer_t passed to .process_capture_request() in Camera3RequestDescriptor::StreamBuffer, keeping the structure members that the HAL needs, and reuse them when calling the .process_capture_result() callback. This is conceptually not right, as the camera3_stream_buffer_t structures passed to the callback are not the same objects as the ones received in .process_capture_request(). Store the individual fields of the camera3_stream_buffer_t in StreamBuffer instead of copying the whole structure. This gives the HAL full control of how the data is stored, and properly decouples request queueing from result reporting. Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Umang Jain <umang.jain@ideasonboard.com> Reviewed-by: Umang Jain <umang.jain@ideasonboard.com> Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
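Conceptually, the change replaces the embedded structure with its individual fields, along these lines (a sketch; the exact field selection is illustrative):

/* Before: retains a copy of a short-lived framework structure. */
struct StreamBuffer {
	camera3_stream_buffer_t buffer;
};

/* After: only the fields the HAL actually needs. */
struct StreamBuffer {
	camera3_stream_t *stream;
	buffer_handle_t *camera3Buffer;
	int fence;
	camera3_buffer_status_t status;
};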
2021-10-19  android: camera_stream: Pass StreamBuffer to process()  (Laurent Pinchart)
Now that we have a proper structure to model a stream buffer, pass it to CameraStream::process() instead of the camera3_stream_buffer_t. This will allow accessing other members of StreamBuffer in subsequent commits. Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Reviewed-by: Jacopo Mondi <jacopo@jmondi.org> Reviewed-by: Hirokazu Honda <hiroh@chromium.org> Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
2021-10-19  android: camera_stream: Plumb process() with Camera3RequestDescriptor  (Umang Jain)
Data (or broader context) required for post processing of a camera request is saved via Camera3RequestDescriptor. Instead of passing individual arguments to CameraStream::process(), pass the Camera3RequestDescriptor pointer to it. All the arguments necessary to run the post-processor can be accessed from the descriptor. Signed-off-by: Umang Jain <umang.jain@ideasonboard.com> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Reviewed-by: Jacopo Mondi <jacopo@jmondi.org> Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
2021-10-10  android: camera_stream: Fix error message for buffer creation  (Laurent Pinchart)
Creating a CameraBuffer instance doesn't map memory. Fix the error message accordingly. Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Reviewed-by: Umang Jain <umang.jain@ideasonboard.com> Signed-off-by: Umang Jain <umang.jain@ideasonboard.com> Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
2021-10-06  android: camera_stream: Set right format for processor output buffer  (Hirokazu Honda)
CameraStream always sets the format of the processor output buffer to MJPEG, which is only correct for the JPEG post processor. Fix this by setting the format expected by the configured stream. Signed-off-by: Hirokazu Honda <hiroh@chromium.org> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Reviewed-by: Umang Jain <umang.jain@ideasonboard.com> Signed-off-by: Umang Jain <umang.jain@ideasonboard.com>
2021-09-29  android: Wait on fences in CameraStream::process()  (Jacopo Mondi)
Acquire fences for streams of type Mapped generated by post-processing are not correctly handled and are currently ignored by the camera HAL. Fix this by adding CameraStream::waitFence(), executed before starting the post-processing in CameraStream::process(). The change applies to all streams generated by post-processing (Mapped and Internal) but currently acquire fences of Internal streams are handled by the camera worker. Postpone that to post-processing time by passing -1 to the Worker for Internal streams. Also correct the release_fence handling for failed captures, as the framework requires the release fences to be set to the acquire fence value if the acquire fence has not been waited on. Signed-off-by: Jacopo Mondi <jacopo@jmondi.org> Reviewed-by: Hirokazu Honda <hiroh@chromium.org> Tested-by: Umang Jain <umang.jain@ideasonboard.com> Reviewed-by: Umang Jain <umang.jain@ideasonboard.com> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
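A sketch of what such a fence wait can look like: a sync file descriptor signals POLLIN once the fence is signaled, so poll(2) is sufficient (the timeout value is illustrative):

#include <errno.h>
#include <poll.h>

int CameraStream::waitFence(int fence)
{
	/* A negative value means no fence was provided; nothing to wait on. */
	if (fence < 0)
		return 0;

	struct pollfd fds = { fence, POLLIN, 0 };
	constexpr int kWaitFenceTimeoutMs = 300;

	int ret = poll(&fds, 1, kWaitFenceTimeoutMs);
	if (ret == 0)
		return -ETIME;
	if (ret < 0)
		return -errno;
	if (fds.revents & (POLLERR | POLLNVAL))
		return -EINVAL;

	return 0;
}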
2021-09-22  android: camera_stream: Support PostProcessorYuv in CameraStream  (Hirokazu Honda)
CameraStream creates PostProcessorYuv if the destination format is NV12. Signed-off-by: Hirokazu Honda <hiroh@chromium.org> Reviewed-by: Jacopo Mondi <jacopo@jmondi.org> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2021-09-22  android: camera_stream: Create post processor in configure()  (Hirokazu Honda)
CameraStream creates the PostProcessor and FrameBufferAllocator in its constructor, and assumes that the post processor in use is the JPEG post processor. Since we need to support various post processors, move the creation to configure() so that an error code can be returned if no suitable post processor is found. This also moves the FrameBufferAllocator and Mutex creation for consistency. Signed-off-by: Hirokazu Honda <hiroh@chromium.org> Reviewed-by: Jacopo Mondi <jacopo@jmondi.org> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
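A sketch of the format-based selection in configure(); the NV12 branch matches the PostProcessorYuv commit above, while the error-handling details are illustrative:

int CameraStream::configure()
{
	if (type_ == Type::Internal || type_ == Type::Mapped) {
		const PixelFormat outFormat = configuration().pixelFormat;

		if (outFormat == formats::MJPEG)
			postProcessor_ = std::make_unique<PostProcessorJpeg>(cameraDevice_);
		else if (outFormat == formats::NV12)
			postProcessor_ = std::make_unique<PostProcessorYuv>();
		else
			return -EINVAL; /* no suitable post processor found */
	}

	return 0;
}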
2021-09-06  android: Cleanup libcamera namespace usage  (Umang Jain)
Usually, .cpp files are equipped with 'using namespace libcamera'; hence, it is unnecessary to mention the explicit libcamera namespace in certain places. While at it, a small typo in a comment was noticed and fixed as part of this patch. Signed-off-by: Umang Jain <umang.jain@ideasonboard.com> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
2021-08-27  android: generic_camera_buffer: Correct buffer mapping  (Hirokazu Honda)
buffer_handle_t doesn't provide sufficient info to map a buffer properly. cros::CameraBufferManager enables handling the buffer on ChromeOS, but no equivalent is provided for other platforms. Therefore, assume that all planes are stored consecutively in the same buffer, and modify the mapping in generic_camera_buffer accordingly. Signed-off-by: Hirokazu Honda <hiroh@chromium.org> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Reviewed-by: Jacopo Mondi <jacopo@jmondi.org> Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
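A sketch of the mapping under that assumption: mmap the single dmabuf once and derive plane addresses from cumulative offsets. planeLength() stands in for the per-plane size calculation and is hypothetical:

/* Total length, with all planes assumed consecutive in one buffer. */
size_t length = 0;
for (unsigned int i = 0; i < numPlanes; ++i)
	length += planeLength(i);

/* native_handle_t stores the dmabuf fd(s) first in data[]; use the single fd. */
void *address = mmap(nullptr, length, PROT_READ | PROT_WRITE,
		     MAP_SHARED, handle->data[0], 0);
if (address == MAP_FAILED)
	return; /* handle the mapping error */

size_t offset = 0;
for (unsigned int i = 0; i < numPlanes; ++i) {
	planes_.emplace_back(static_cast<uint8_t *>(address) + offset,
			     planeLength(i));
	offset += planeLength(i);
}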
2021-08-10  libcamera: Give MappedFrameBuffer its own implementation  (Kieran Bingham)
The MappedFrameBuffer is a convenience feature which sits on top of the FrameBuffer and facilitates mapping it to CPU accessible memory with mmap. This implementation is internal and currently sits in the same internal files as the internal FrameBuffer, thus exposing those internals to users of the MappedFrameBuffer implementation. Move the MappedFrameBuffer and MappedBuffer implementations to their own files, and fix the sources throughout to use them accordingly. Reviewed-by: Hirokazu Honda <hiroh@chromium.org> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2021-06-28  android: camera_device: Fix null pointer dereference  (Laurent Pinchart)
Commit 7532caa2c77b ("android: camera_device: Reset config_ if Camera::configure() fails") reworked the configuration sequence to ensure that the CameraConfiguration pointers gets reset when configuration fails. This inadvertently causes a null pointer dereference, as the CameraStream constructor accesses the camera configuration through CameraDevice::cameraConfiguration() before the internal config_ pointer is set. Fix this by passing the configuration pointer explicitly to the CameraStream constructor. Fixes: 7532caa2c77b ("android: camera_device: Reset config_ if Camera::configure() fails") Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Reviewed-by: Paul Elder <paul.elder@ideasonboard.com> Tested-by: Paul Elder <paul.elder@ideasonboard.com> Reviewed-by: Umang Jain <umang.jain@ideasonboard.com> Tested-by: Umang Jain <umang.jain@ideasonboard.com> Reviewed-by: Hirokazu Honda <hiroh@chromium.org>
2021-03-03  android: Move buffer mapping to CameraStream  (Jacopo Mondi)
The destination buffer for the post-processing component is currently first mapped in the CameraDevice class and then passed to CameraStream which simply calls the post-processor interface. Move the mapping to CameraStream::process() to tie the buffer mapping to the lifetime of the CameraBuffer instance. Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
2021-02-02  android: post_processor: Change the type destination in process()  (Hirokazu Honda)
The type of the destination buffer in PostProcessor::process() is libcamera::Span. libcamera::Span is suited to one-dimensional buffers (e.g. a blob buffer), but the destination can be a multi-planar buffer (e.g. a YUV frame). Therefore, change the type of the destination buffer to MappedFrameBuffer. Signed-off-by: Hirokazu Honda <hiroh@chromium.org> Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Reviewed-by: Jacopo Mondi <jacopo@jmondi.org> Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
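In signature terms, the change amounts to roughly the following (a sketch; the metadata parameter is illustrative):

/* Before: flat, one-dimensional destination. */
int process(const FrameBuffer &source, Span<uint8_t> destination,
	    CameraMetadata *metadata);

/* After: mapped destination with per-plane access. */
int process(const FrameBuffer &source, MappedFrameBuffer *destination,
	    CameraMetadata *metadata);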
2021-01-27  android: Set result metadata and EXIF fields based on request metadata  (Paul Elder)
Set the following android result metadata: - ANDROID_LENS_FOCAL_LENGTH - ANDROID_LENS_APERTURE - ANDROID_JPEG_GPS_TIMESTAMP - ANDROID_JPEG_GPS_COORDINATES - ANDROID_JPEG_GPS_PROCESSING_METHOD And the following EXIF fields: - GPSDatestamp - GPSTimestamp - GPSLocation - GPSLatitudeRef - GPSLatitude - GPSLongitudeRef - GPSLongitude - GPSAltitudeRef - GPSAltitude - GPSProcessingMethod - FocalLength - ExposureTime - FNumber - ISO - Flash - WhiteBalance - SubsecTime - SubsecTimeOriginal - SubsecTimeDigitized Based on android request metadata. This allows the following CTS tests to pass: - android.hardware.camera2.cts.StillCaptureTest#testFocalLengths - android.hardware.camera2.cts.StillCaptureTest#testJpegExif Signed-off-by: Paul Elder <paul.elder@ideasonboard.com> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Reviewed-by: Jacopo Mondi <jacopo@jmondi.org>
2020-10-21  android: camera_stream: Make some member variables constant  (Hirokazu Honda)
CameraStream initializes several member variables in the initializer list, and some of them are never changed afterwards. Make those constant. In particular, making |cameraDevice_| const conveys that CameraStream doesn't have ownership of it. Signed-off-by: Hirokazu Honda <hiroh@chromium.org> Reviewed-by: Umang Jain <email@uajain.com> Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2020-10-21  android: Modify PostProcessor interface  (Hirokazu Honda)
In PostProcessor::process(), the |source| argument doesn't have to be a pointer. Replace its type, a const pointer, with a const reference, as the latter is preferred. libcamera::Span is cheap to construct/copy/move and should be treated as a pass-by-value parameter, so this also drops the const reference on the |destination| argument. Signed-off-by: Hirokazu Honda <hiroh@chromium.org> Reviewed-by: Umang Jain <email@uajain.com> Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
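The described change, as a signature sketch:

/* Before */
int process(const FrameBuffer *source, const Span<uint8_t> &destination);

/* After: const reference for |source|, Span by value for |destination|. */
int process(const FrameBuffer &source, Span<uint8_t> destination);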
2020-10-20  android: Omit extra semicolons  (Hirokazu Honda)
The trailing semicolons after LOG_DECLARE_CATEGORY and LOG_DEFINE_CATEGORY are unnecessary. Signed-off-by: Hirokazu Honda <hiroh@chromium.org> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Reviewed-by: Niklas Söderlund <niklas.soderlund@ragnatech.se> Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com> Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2020-10-16  android: jpeg: Port to PostProcessor interface  (Umang Jain)
Port the CameraStream's JPEG-encoding bits to PostProcessorJpeg. This encapsulates the encoder and EXIF generation code in the PostProcessorJpeg layer and removes these JPEG specifics from the CameraStream itself. Signed-off-by: Umang Jain <email@uajain.com> Tested-by: Kieran Bingham <kieran.bingham@ideasonboard.com> Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com> Change-Id: Id9e6e9b2bec83493a90e5e126298a2bb2ed2232a
2020-10-14  android: camera_stream: Add documentation  (Jacopo Mondi)
Add a brief documentation block to the CameraStream class. Reviewed-by: Niklas Söderlund <niklas.soderlund@ragnatech.se> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
2020-10-07  android: camera_stream: Create buffer pool  (Jacopo Mondi)
Add a FrameBufferAllocator class member to the CameraStream class. The allocator is constructed for CameraStream instances that need internal buffer allocation, and is automatically deleted. Allocate FrameBuffers using the allocator_ class member at CameraStream::configure() time, and add two methods to the CameraStream class to get and put FrameBuffer pointers from the pool of allocated buffers. As buffer allocation can take place only after the Camera has been configured, move the CameraStream configuration loop in the CameraDevice class after the camera_->configure() call. The newly created pool will be used to provide buffers to CameraStream instances that need to hand memory to libcamera for frame delivery. Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com> Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
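A sketch of the pool: allocation happens at configure() time, and getBuffer()/putBuffer() hand buffers out and back under the stream's mutex (details illustrative):

/* In CameraStream::configure(), for streams with internal allocation: */
allocator_ = std::make_unique<FrameBufferAllocator>(cameraDevice_->camera());
allocator_->allocate(stream());
for (const std::unique_ptr<FrameBuffer> &buffer : allocator_->buffers(stream()))
	buffers_.push_back(buffer.get());

FrameBuffer *CameraStream::getBuffer()
{
	std::lock_guard<std::mutex> locker(*mutex_);

	if (buffers_.empty())
		return nullptr;

	FrameBuffer *buffer = buffers_.back();
	buffers_.pop_back();

	return buffer;
}

void CameraStream::putBuffer(FrameBuffer *buffer)
{
	std::lock_guard<std::mutex> locker(*mutex_);
	buffers_.push_back(buffer);
}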
2020-10-07  android: camera_device: Make CameraStream configuration nicer  (Jacopo Mondi)
Loop over the CameraStream instances and use their interface to perform CameraStream configuration. Modify CameraStream::configure() to configure the android stream buffer count and to retrieve the StreamConfiguration by index instead of receiving it as a parameter. Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com> Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
2020-10-07  android: camera_stream: Fetch format and size from configuration  (Jacopo Mondi)
Fetch the format and size of the libcamera::StreamConfiguration associated with a CameraStream by accessing the configuration by index. This removes the need to store the libcamera stream format and size as class members, and avoids duplicating information that might get out of sync. It also allows removing the StreamConfiguration from the constructor parameter list, as it can be identified by its index. While at it, re-order the constructor parameters. Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
2020-10-07  android: camera_stream: Retrieve Stream and Configuration  (Jacopo Mondi)
It's a common pattern to access the libcamera::Stream and libcamera::StreamConfiguration using the CameraStream instance's index. Add two methods to the CameraStream to shorten access to the two fields. This allows removing the index() method from the class interface. Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
2020-10-07  android: camera_device: Move processing to CameraStream  (Jacopo Mondi)
Move the JPEG processing procedure to the individual CameraStream by augmenting the class with a CameraStream::process() method. This allows removing the CameraStream::encoder() method. Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
2020-10-07  android: camera_stream: Construct with Android stream  (Jacopo Mondi)
Pass the Android camera3_stream_t and a libcamera::StreamConfiguration to identify the source and destination parameters of this stream. Pass a CameraDevice pointer to the CameraStream constructor to allow retrieval of the StreamConfiguration associated with the CameraStream. Also change the format the CameraDevice inspects to decide whether post-processing is required: the libcamera-facing format is no longer meaningful there, so use the Android-requested format instead. Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com> Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
2020-10-07  android: camera_stream: Delegate Encoder construction  (Jacopo Mondi)
Delegate the construction of the encoder to the CameraStream class for streams that need post-processing. Reviewed-by: Umang Jain <email@uajain.com> Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
2020-10-07  android: camera_stream: Add CameraStream::Type  (Jacopo Mondi)
Define the CameraStream::Type enumeration and assign it to each CameraStream instance at construction time. The CameraStream type will be used to decide if memory needs to be allocated on its behalf or if the stream is backed by memory externally allocated by the Android framework. Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com> Reviewed-by: Hirokazu Honda <hiroh@chromium.org> Reviewed-by: Niklas Söderlund <niklas.soderlund@ragnatech.se> Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
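The three stream types referenced throughout this log (Type::Direct in the nullptr-check commit, Mapped and Internal in the fence commit) can be sketched as:

class CameraStream
{
public:
	enum class Type {
		Direct,   /* Android buffer passed to libcamera as-is */
		Internal, /* buffer allocated internally, output post-processed */
		Mapped,   /* maps onto another stream's buffer, output post-processed */
	};
};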
2020-10-07  android: camera_stream: Break out CameraStream  (Jacopo Mondi)
Break CameraStream out of the CameraDevice class. No functional changes, only the code is moved. Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Jacopo Mondi <jacopo@jmondi.org>
/* SPDX-License-Identifier: LGPL-2.1-or-later */
/*
 * Copyright (C) 2019, Google Inc.
 *
 * Pipeline handler for Intel IPU3
 */

#include <algorithm>
#include <memory>
#include <queue>
#include <vector>

#include <linux/intel-ipu3.h>

#include <libcamera/base/log.h>
#include <libcamera/base/utils.h>

#include <libcamera/camera.h>
#include <libcamera/control_ids.h>
#include <libcamera/formats.h>
#include <libcamera/ipa/ipu3_ipa_interface.h>
#include <libcamera/ipa/ipu3_ipa_proxy.h>
#include <libcamera/property_ids.h>
#include <libcamera/request.h>
#include <libcamera/stream.h>

#include "libcamera/internal/camera.h"
#include "libcamera/internal/camera_lens.h"
#include "libcamera/internal/camera_sensor.h"
#include "libcamera/internal/delayed_controls.h"
#include "libcamera/internal/device_enumerator.h"
#include "libcamera/internal/framebuffer.h"
#include "libcamera/internal/ipa_manager.h"
#include "libcamera/internal/media_device.h"
#include "libcamera/internal/pipeline_handler.h"

#include "cio2.h"
#include "frames.h"
#include "imgu.h"

namespace libcamera {

LOG_DEFINE_CATEGORY(IPU3)

static const ControlInfoMap::Map IPU3Controls = {
	{ &controls::draft::PipelineDepth, ControlInfo(2, 3) },
};

class IPU3CameraData : public Camera::Private
{
public:
	IPU3CameraData(PipelineHandler *pipe)
		: Camera::Private(pipe)
	{
	}

	int loadIPA();

	void imguOutputBufferReady(FrameBuffer *buffer);
	void cio2BufferReady(FrameBuffer *buffer);
	void paramBufferReady(FrameBuffer *buffer);
	void statBufferReady(FrameBuffer *buffer);
	void queuePendingRequests();
	void cancelPendingRequests();
	void frameStart(uint32_t sequence);

	CIO2Device cio2_;
	ImgUDevice *imgu_;

	Stream outStream_;
	Stream vfStream_;
	Stream rawStream_;

	Rectangle cropRegion_;

	std::unique_ptr<DelayedControls> delayedCtrls_;
	IPU3Frames frameInfos_;

	std::unique_ptr<ipa::ipu3::IPAProxyIPU3> ipa_;

	/* Requests for which no buffer has been queued to the CIO2 device yet. */
	std::queue<Request *> pendingRequests_;
	/* Requests queued to the CIO2 device but not yet processed by the ImgU. */
	std::queue<Request *> processingRequests_;

	ControlInfoMap ipaControls_;

private:
	void metadataReady(unsigned int id, const ControlList &metadata);
	void paramsBufferReady(unsigned int id);
	void setSensorControls(unsigned int id, const ControlList &sensorControls,
			       const ControlList &lensControls);
};

class IPU3CameraConfiguration : public CameraConfiguration
{
public:
	static constexpr unsigned int kBufferCount = 4;
	static constexpr unsigned int kMaxStreams = 3;

	IPU3CameraConfiguration(IPU3CameraData *data);

	Status validate() override;

	const StreamConfiguration &cio2Format() const { return cio2Configuration_; }
	const ImgUDevice::PipeConfig imguConfig() const { return pipeConfig_; }

	/* Cache the combinedTransform_ that will be applied to the sensor */
	Transform combinedTransform_;

private:
	/*
	 * The IPU3CameraData instance is guaranteed to be valid as long as the
	 * corresponding Camera instance is valid. In order to borrow a
	 * reference to the camera data, store a new reference to the camera.
	 */
	const IPU3CameraData *data_;

	StreamConfiguration cio2Configuration_;
	ImgUDevice::PipeConfig pipeConfig_;
};

class PipelineHandlerIPU3 : public PipelineHandler
{
public:
	static constexpr unsigned int V4L2_CID_IPU3_PIPE_MODE = 0x009819c1;
	static constexpr Size kViewfinderSize{ 1280, 720 };

	enum IPU3PipeModes {
		IPU3PipeModeVideo = 0,
		IPU3PipeModeStillCapture = 1,
	};

	PipelineHandlerIPU3(CameraManager *manager);

	std::unique_ptr<CameraConfiguration> generateConfiguration(Camera *camera,
								   Span<const StreamRole> roles) override;
	int configure(Camera *camera, CameraConfiguration *config) override;

	int exportFrameBuffers(Camera *camera, Stream *stream,
			       std::vector<std::unique_ptr<FrameBuffer>> *buffers) override;

	int start(Camera *camera, const ControlList *controls) override;
	void stopDevice(Camera *camera) override;

	int queueRequestDevice(Camera *camera, Request *request) override;

	bool match(DeviceEnumerator *enumerator) override;

private:
	IPU3CameraData *cameraData(Camera *camera)
	{
		return static_cast<IPU3CameraData *>(camera->_d());
	}

	int initControls(IPU3CameraData *data);
	int updateControls(IPU3CameraData *data);
	int registerCameras();

	int allocateBuffers(Camera *camera);
	int freeBuffers(Camera *camera);

	ImgUDevice imgu0_;
	ImgUDevice imgu1_;
	MediaDevice *cio2MediaDev_;
	MediaDevice *imguMediaDev_;

	std::vector<IPABuffer> ipaBuffers_;
};

IPU3CameraConfiguration::IPU3CameraConfiguration(IPU3CameraData *data)
	: CameraConfiguration()
{
	data_ = data;
}

CameraConfiguration::Status IPU3CameraConfiguration::validate()
{
	Status status = Valid;

	if (config_.empty())
		return Invalid;

	/*
	 * Validate the requested transform against the sensor capabilities and
	 * rotation and store the final combined transform that configure() will
	 * need to apply to the sensor to save us working it out again.
	 */
	Orientation requestedOrientation = orientation;
	combinedTransform_ = data_->cio2_.sensor()->computeTransform(&orientation);
	if (orientation != requestedOrientation)
		status = Adjusted;

	/* Cap the number of entries to the available streams. */
	if (config_.size() > kMaxStreams) {
		config_.resize(kMaxStreams);
		status = Adjusted;
	}

	/*
	 * Validate the requested stream configuration and select the sensor
	 * format by collecting the maximum RAW stream width and height and
	 * picking the closest larger match.
	 *
	 * If no RAW stream is requested, use the size of the largest YUV
	 * stream, plus margin pixels for the IF and BDS rectangles to
	 * downscale.
	 *
	 * \todo Clarify the IF and BDS margin requirements.
	 */
	unsigned int rawCount = 0;
	unsigned int yuvCount = 0;
	Size rawRequirement;
	Size maxYuvSize;
	Size rawSize;

	for (const StreamConfiguration &cfg : config_) {
		const PixelFormatInfo &info = PixelFormatInfo::info(cfg.pixelFormat);

		if (info.colourEncoding == PixelFormatInfo::ColourEncodingRAW) {
			rawCount++;
			rawSize = std::max(rawSize, cfg.size);
		} else {
			yuvCount++;
			maxYuvSize = std::max(maxYuvSize, cfg.size);
			rawRequirement.expandTo(cfg.size);
		}
	}

	if (rawCount > 1 || yuvCount > 2) {
		LOG(IPU3, Debug) << "Camera configuration not supported";
		return Invalid;
	} else if (rawCount && !yuvCount) {
		/*
		 * Disallow raw-only camera configurations. Currently, the ImgU
		 * does not get configured for raw-only streams and returns
		 * early in configure(). To support raw-only streams, the IPA
		 * needs to be configured, since it sets up the sensor controls
		 * for the capture.
		 *
		 * \todo Configure the ImgU with internal buffers which will enable
		 * the IPA to get configured for the raw-only camera configuration.
		 */
		LOG(IPU3, Debug)
			<< "Camera configuration cannot support raw-only streams";
		return Invalid;
	}

	/*
	 * Generate raw configuration from CIO2.
	 *
	 * The output YUV streams will be limited in size to the maximum frame
	 * size requested for the RAW stream, if present.
	 *
	 * If no raw stream is requested, generate a size from the largest YUV
	 * stream, aligned to the ImgU constraints and bound
	 * by the sensor's maximum resolution. See
	 * https://bugs.libcamera.org/show_bug.cgi?id=32
	 */
	if (rawSize.isNull())
		rawSize = rawRequirement.expandedTo({ ImgUDevice::kIFMaxCropWidth,
						      ImgUDevice::kIFMaxCropHeight })
				  .grownBy({ ImgUDevice::kOutputMarginWidth,
					     ImgUDevice::kOutputMarginHeight })
				  .boundedTo(data_->cio2_.sensor()->resolution());

	cio2Configuration_ = data_->cio2_.generateConfiguration(rawSize);
	if (!cio2Configuration_.pixelFormat.isValid())
		return Invalid;

	LOG(IPU3, Debug) << "CIO2 configuration: " << cio2Configuration_.toString();

	ImgUDevice::Pipe pipe{};
	pipe.input = cio2Configuration_.size;

	/*
	 * Adjust the configurations if needed and assign streams while
	 * iterating them.
	 */
	bool mainOutputAvailable = true;
	for (unsigned int i = 0; i < config_.size(); ++i) {
		const PixelFormatInfo &info = PixelFormatInfo::info(config_[i].pixelFormat);
		const StreamConfiguration originalCfg = config_[i];
		StreamConfiguration *cfg = &config_[i];

		LOG(IPU3, Debug) << "Validating stream: " << config_[i].toString();

		if (info.colourEncoding == PixelFormatInfo::ColourEncodingRAW) {
			/* Initialize the RAW stream with the CIO2 configuration. */
			cfg->size = cio2Configuration_.size;
			cfg->pixelFormat = cio2Configuration_.pixelFormat;
			cfg->bufferCount = cio2Configuration_.bufferCount;
			cfg->stride = info.stride(cfg->size.width, 0, 64);
			cfg->frameSize = info.frameSize(cfg->size, 64);
			cfg->setStream(const_cast<Stream *>(&data_->rawStream_));

			LOG(IPU3, Debug) << "Assigned " << cfg->toString()
					 << " to the raw stream";
		} else {
			/* Assign and configure the main and viewfinder outputs. */

			/*
			 * Clamp the size to match the ImgU size limits and the
			 * margins from the CIO2 output frame size.
			 *
			 * The ImgU outputs need to be strictly smaller than
			 * the CIO2 output frame and rounded down to 64 pixels
			 * in width and 32 pixels in height. This assumption
			 * comes from inspecting the pipe configuration script
			 * results and the available suggested configurations in
			 * the ChromeOS BSP .xml camera tuning files and shall
			 * be validated.
			 *
			 * \todo Clarify what hardware constraints require
			 * this alignment, if any. It might depend on the BDS
			 * scaling factor of 1/32, as the main output has no
			 * YUV scaler, unlike the viewfinder output.
			 */
			unsigned int limit;
			limit = utils::alignDown(cio2Configuration_.size.width - 1,
						 ImgUDevice::kOutputMarginWidth);
			cfg->size.width = std::clamp(cfg->size.width,
						     ImgUDevice::kOutputMinSize.width,
						     limit);

			limit = utils::alignDown(cio2Configuration_.size.height - 1,
						 ImgUDevice::kOutputMarginHeight);
			cfg->size.height = std::clamp(cfg->size.height,
						      ImgUDevice::kOutputMinSize.height,
						      limit);

			cfg->size.alignDownTo(ImgUDevice::kOutputAlignWidth,
					      ImgUDevice::kOutputAlignHeight);

			cfg->pixelFormat = formats::NV12;
			cfg->bufferCount = kBufferCount;
			cfg->stride = info.stride(cfg->size.width, 0, 1);
			cfg->frameSize = info.frameSize(cfg->size, 1);

			/*
			 * Use the main output stream in case only one stream is
			 * requested or if the current configuration is the one
			 * with the maximum YUV output size.
			 */
			if (mainOutputAvailable &&
			    (originalCfg.size == maxYuvSize || yuvCount == 1)) {
				cfg->setStream(const_cast<Stream *>(&data_->outStream_));
				mainOutputAvailable = false;

				pipe.main = cfg->size;
				if (yuvCount == 1)
					pipe.viewfinder = pipe.main;

				LOG(IPU3, Debug) << "Assigned " << cfg->toString()
						 << " to the main output";
			} else {
				cfg->setStream(const_cast<Stream *>(&data_->vfStream_));
				pipe.viewfinder = cfg->size;

				LOG(IPU3, Debug) << "Assigned " << cfg->toString()
						 << " to the viewfinder output";
			}
		}

		if (cfg->pixelFormat != originalCfg.pixelFormat ||
		    cfg->size != originalCfg.size) {
			LOG(IPU3, Debug)
				<< "Stream " << i << " configuration adjusted to "
				<< cfg->toString();
			status = Adjusted;
		}
	}

	/* Only compute the ImgU configuration if a YUV stream has been requested. */
	if (yuvCount) {
		pipeConfig_ = data_->imgu_->calculatePipeConfig(&pipe);
		if (pipeConfig_.isNull()) {
			LOG(IPU3, Error) << "Failed to calculate pipe configuration: "
					 << "unsupported resolutions.";
			return Invalid;
		}
	}

	return status;
}

PipelineHandlerIPU3::PipelineHandlerIPU3(CameraManager *manager)
	: PipelineHandler(manager), cio2MediaDev_(nullptr), imguMediaDev_(nullptr)
{
}

std::unique_ptr<CameraConfiguration>
PipelineHandlerIPU3::generateConfiguration(Camera *camera, Span<const StreamRole> roles)
{
	IPU3CameraData *data = cameraData(camera);
	std::unique_ptr<IPU3CameraConfiguration> config =
		std::make_unique<IPU3CameraConfiguration>(data);

	if (roles.empty())
		return config;

	Size sensorResolution = data->cio2_.sensor()->resolution();
	for (const StreamRole role : roles) {
		std::map<PixelFormat, std::vector<SizeRange>> streamFormats;
		unsigned int bufferCount;
		PixelFormat pixelFormat;
		Size size;

		switch (role) {
		case StreamRole::StillCapture:
			/*
			 * As the default full-frame configuration, use a size
			 * strictly smaller than the sensor resolution (limited
			 * to the ImgU maximum output size) and aligned down to
			 * the required frame margin.
			 *
			 * \todo Clarify the alignment constraints as explained
			 * in validate()
			 */
			size = sensorResolution.boundedTo(ImgUDevice::kOutputMaxSize)
					       .shrunkBy({ 1, 1 })
					       .alignedDownTo(ImgUDevice::kOutputMarginWidth,
							      ImgUDevice::kOutputMarginHeight);
			pixelFormat = formats::NV12;
			bufferCount = IPU3CameraConfiguration::kBufferCount;
			streamFormats[pixelFormat] = { { ImgUDevice::kOutputMinSize, size } };

			break;

		case StreamRole::Raw: {
			StreamConfiguration cio2Config =
				data->cio2_.generateConfiguration(sensorResolution);
			pixelFormat = cio2Config.pixelFormat;
			size = cio2Config.size;
			bufferCount = cio2Config.bufferCount;

			for (const PixelFormat &format : data->cio2_.formats())
				streamFormats[format] = data->cio2_.sizes(format);

			break;
		}

		case StreamRole::Viewfinder:
		case StreamRole::VideoRecording: {
			/*
			 * Default viewfinder and video recording to 1280x720,
			 * capped to the maximum sensor resolution and aligned
			 * to the ImgU output constraints.
			 */
			size = sensorResolution.boundedTo(kViewfinderSize)
					       .alignedDownTo(ImgUDevice::kOutputAlignWidth,
							      ImgUDevice::kOutputAlignHeight);
			pixelFormat = formats::NV12;
			bufferCount = IPU3CameraConfiguration::kBufferCount;
			streamFormats[pixelFormat] = { { ImgUDevice::kOutputMinSize, size } };

			break;
		}

		default:
			LOG(IPU3, Error)
				<< "Requested stream role not supported: " << role;
			return nullptr;
		}

		StreamFormats formats(streamFormats);
		StreamConfiguration cfg(formats);
		cfg.size = size;
		cfg.pixelFormat = pixelFormat;
		cfg.bufferCount = bufferCount;
		config->addConfiguration(cfg);
	}

	if (config->validate() == CameraConfiguration::Invalid)
		return {};

	return config;
}

int PipelineHandlerIPU3::configure(Camera *camera, CameraConfiguration *c)
{
	IPU3CameraConfiguration *config =
		static_cast<IPU3CameraConfiguration *>(c);
	IPU3CameraData *data = cameraData(camera);
	Stream *outStream = &data->outStream_;
	Stream *vfStream = &data->vfStream_;
	CIO2Device *cio2 = &data->cio2_;
	ImgUDevice *imgu = data->imgu_;
	V4L2DeviceFormat outputFormat;
	int ret;

	/*
	 * FIXME: enabled links in one ImgU pipe interfere with capture
	 * operations on the other one. This can be easily triggered by
	 * capturing from one camera and then trying to capture from the other
	 * one right after, without disabling media links on the first used
	 * pipe.
	 *
	 * The tricky part here is where to disable links on the ImgU instance
	 * which is currently not in use:
	 * 1) Link enable/disable cannot be done at start()/stop() time, as
	 * video devices need to be linked before formats can be configured on
	 * them.
	 * 2) As links have to be enabled in configure() at the latest, before
	 * formats are configured, the only place to disable links would be
	 * stop(), but the Camera class state machine allows start()<->stop()
	 * sequences without any configure() in between.
	 *
	 * As of now, disable all links in the ImgU media graph before
	 * configuring the device, to allow alternating the usage of the two
	 * ImgU pipes.
	 *
	 * As a consequence, a Camera using an ImgU shall be configured before
	 * any start()/stop() sequence. An application that wants to
	 * pre-configure all the cameras and then start/stop them alternately
	 * without going through any re-configuration (a sequence that is
	 * allowed by the Camera state machine) would now fail on the IPU3.
	 */
	ret = imguMediaDev_->disableLinks();
	if (ret)
		return ret;

	/*
	 * \todo Enable links selectively based on the requested streams.
	 * As of now, enable all links unconditionally.
	 * \todo Don't configure the ImgU at all if we only have a single
	 * stream which is for raw capture, in which case no buffers will
	 * ever be queued to the ImgU.
	 */
	ret = data->imgu_->enableLinks(true);
	if (ret)
		return ret;

	/*
	 * Pass the requested stream size to the CIO2 unit and get back the
	 * adjusted format to be propagated to the ImgU output devices.
	 */
	const Size &sensorSize = config->cio2Format().size;
	V4L2DeviceFormat cio2Format;
	ret = cio2->configure(sensorSize, config->combinedTransform_, &cio2Format);
	if (ret)
		return ret;

	IPACameraSensorInfo sensorInfo;
	cio2->sensor()->sensorInfo(&sensorInfo);
	data->cropRegion_ = sensorInfo.analogCrop;

	/*
	 * If the ImgU gets configured, its driver seems to expect that
	 * buffers will be queued to its outputs, as otherwise the next
	 * capture session that uses the ImgU fails when queueing
	 * buffers to its input.
	 *
	 * If no ImgU configuration has been computed, it means only a RAW
	 * stream has been requested: return here to skip the ImgU configuration
	 * part.
	 */
	ImgUDevice::PipeConfig imguConfig = config->imguConfig();
	if (imguConfig.isNull())
		return 0;

	ret = imgu->configure(imguConfig, &cio2Format);
	if (ret)
		return ret;

	/* Apply the format to the configured streams output devices. */
	StreamConfiguration *mainCfg = nullptr;
	StreamConfiguration *vfCfg = nullptr;

	for (unsigned int i = 0; i < config->size(); ++i) {
		StreamConfiguration &cfg = (*config)[i];
		Stream *stream = cfg.stream();

		if (stream == outStream) {
			mainCfg = &cfg;
			ret = imgu->configureOutput(cfg, &outputFormat);
			if (ret)
				return ret;
		} else if (stream == vfStream) {
			vfCfg = &cfg;
			ret = imgu->configureViewfinder(cfg, &outputFormat);
			if (ret)
				return ret;
		}
	}

	/*
	 * As we need to set the format also on non-active streams, use
	 * the configuration of the active one for that purpose (there should
	 * be at least one active stream in the configuration request).
	 */
	if (!vfCfg) {
		ret = imgu->configureViewfinder(*mainCfg, &outputFormat);
		if (ret)
			return ret;
	}

	/* Apply the "pipe_mode" control to the ImgU subdevice. */
	ControlList ctrls(imgu->imgu_->controls());
	/*
	 * Set the ImgU pipe mode to 'Video' unconditionally to have statistics
	 * generated.
	 *
	 * \todo Figure out what the 'Still Capture' mode is meant for, and use
	 * it accordingly.
	 */
	ctrls.set(V4L2_CID_IPU3_PIPE_MODE,
		  static_cast<int32_t>(IPU3PipeModeVideo));
	ret = imgu->imgu_->setControls(&ctrls);
	if (ret) {
		LOG(IPU3, Error) << "Unable to set pipe_mode control";
		return ret;
	}

	ipa::ipu3::IPAConfigInfo configInfo;
	configInfo.sensorControls = data->cio2_.sensor()->controls();

	CameraLens *lens = data->cio2_.sensor()->focusLens();
	if (lens)
		configInfo.lensControls = lens->controls();

	configInfo.sensorInfo = sensorInfo;
	configInfo.bdsOutputSize = config->imguConfig().bds;
	configInfo.iif = config->imguConfig().iif;

	ret = data->ipa_->configure(configInfo, &data->ipaControls_);
	if (ret) {
		LOG(IPU3, Error) << "Failed to configure IPA: "
				 << strerror(-ret);
		return ret;
	}

	return updateControls(data);
}

int PipelineHandlerIPU3::exportFrameBuffers(Camera *camera, Stream *stream,
					    std::vector<std::unique_ptr<FrameBuffer>> *buffers)
{
	IPU3CameraData *data = cameraData(camera);
	unsigned int count = stream->configuration().bufferCount;

	if (stream == &data->outStream_)
		return data->imgu_->output_->exportBuffers(count, buffers);
	else if (stream == &data->vfStream_)
		return data->imgu_->viewfinder_->exportBuffers(count, buffers);
	else if (stream == &data->rawStream_)
		return data->cio2_.exportBuffers(count, buffers);

	return -EINVAL;
}

/**
 * \todo Clarify if 'viewfinder' and 'stat' nodes have to be set up and
 * started even if not in use. As of now, if not properly configured and
 * enabled, the ImgU processing pipeline stalls.
 *
 * In order to be able to start the 'viewfinder' and 'stat' nodes, we need
 * memory to be reserved.
 */
int PipelineHandlerIPU3::allocateBuffers(Camera *camera)
{
	IPU3CameraData *data = cameraData(camera);
	ImgUDevice *imgu = data->imgu_;
	unsigned int bufferCount;
	int ret;

	bufferCount = std::max({
		data->outStream_.configuration().bufferCount,
		data->vfStream_.configuration().bufferCount,
		data->rawStream_.configuration().bufferCount,
	});

	ret = imgu->allocateBuffers(bufferCount);
	if (ret < 0)
		return ret;

	/* Map buffers to the IPA. */
	unsigned int ipaBufferId = 1;

	for (const std::unique_ptr<FrameBuffer> &buffer : imgu->paramBuffers_) {
		buffer->setCookie(ipaBufferId++);
		ipaBuffers_.emplace_back(buffer->cookie(), buffer->planes());
	}

	for (const std::unique_ptr<FrameBuffer> &buffer : imgu->statBuffers_) {
		buffer->setCookie(ipaBufferId++);
		ipaBuffers_.emplace_back(buffer->cookie(), buffer->planes());
	}

	data->ipa_->mapBuffers(ipaBuffers_);

	data->frameInfos_.init(imgu->paramBuffers_, imgu->statBuffers_);
	data->frameInfos_.bufferAvailable.connect(
		data, &IPU3CameraData::queuePendingRequests);

	return 0;
}

int PipelineHandlerIPU3::freeBuffers(Camera *camera)
{
	IPU3CameraData *data = cameraData(camera);

	data->frameInfos_.clear();

	std::vector<unsigned int> ids;
	for (IPABuffer &ipabuf : ipaBuffers_)
		ids.push_back(ipabuf.id);

	data->ipa_->unmapBuffers(ids);
	ipaBuffers_.clear();

	data->imgu_->freeBuffers();

	return 0;
}

int PipelineHandlerIPU3::start(Camera *camera, [[maybe_unused]] const ControlList *controls)
{
	IPU3CameraData *data = cameraData(camera);
	CIO2Device *cio2 = &data->cio2_;
	ImgUDevice *imgu = data->imgu_;
	int ret;

	/* Disable test pattern mode on the sensor, if any. */
	ret = cio2->sensor()->setTestPatternMode(
		controls::draft::TestPatternModeEnum::TestPatternModeOff);
	if (ret)
		return ret;

	/* Allocate buffers for internal pipeline usage. */
	ret = allocateBuffers(camera);
	if (ret)
		return ret;

	ret = data->ipa_->start();
	if (ret)
		goto error;

	data->delayedCtrls_->reset();

	/*
	 * Start the ImgU video devices. Buffers will be queued to the
	 * ImgU output and viewfinder when requests are queued.
	 */
	ret = cio2->start();
	if (ret)
		goto error;

	ret = imgu->start();
	if (ret)
		goto error;

	return 0;

error:
	imgu->stop();
	cio2->stop();
	data->ipa_->stop();
	freeBuffers(camera);
	LOG(IPU3, Error) << "Failed to start camera " << camera->id();

	return ret;
}

void PipelineHandlerIPU3::stopDevice(Camera *camera)
{
	IPU3CameraData *data = cameraData(camera);
	int ret = 0;

	data->cancelPendingRequests();

	data->ipa_->stop();

	ret |= data->imgu_->stop();
	ret |= data->cio2_.stop();
	if (ret)
		LOG(IPU3, Warning) << "Failed to stop camera " << camera->id();

	freeBuffers(camera);
}

void IPU3CameraData::cancelPendingRequests()
{
	processingRequests_ = {};

	while (!pendingRequests_.empty()) {
		Request *request = pendingRequests_.front();

		for (auto it : request->buffers()) {
			FrameBuffer *buffer = it.second;
			buffer->_d()->cancel();
			pipe()->completeBuffer(request, buffer);
		}

		pipe()->completeRequest(request);
		pendingRequests_.pop();
	}
}

void IPU3CameraData::queuePendingRequests()
{
	while (!pendingRequests_.empty()) {
		Request *request = pendingRequests_.front();

		IPU3Frames::Info *info = frameInfos_.create(request);
		if (!info)
			break;

		/*
		 * Queue a buffer on the CIO2, using the raw stream buffer
		 * provided in the request, if any, or a CIO2 internal buffer
		 * otherwise.
		 */
		FrameBuffer *reqRawBuffer = request->findBuffer(&rawStream_);
		FrameBuffer *rawBuffer = cio2_.queueBuffer(request, reqRawBuffer);
		/*
		 * \todo If queueBuffer fails in queuing a buffer to the device,
		 * report the request as error by cancelling the request and
		 * calling PipelineHandler::completeRequest().
		 */
		if (!rawBuffer) {
			frameInfos_.remove(info);
			break;
		}

		info->rawBuffer = rawBuffer;

		ipa_->queueRequest(info->id, request->controls());

		pendingRequests_.pop();
		processingRequests_.push(request);
	}
}

int PipelineHandlerIPU3::queueRequestDevice(Camera *camera, Request *request)
{
	IPU3CameraData *data = cameraData(camera);

	data->pendingRequests_.push(request);
	data->queuePendingRequests();

	return 0;
}

bool PipelineHandlerIPU3::match(DeviceEnumerator *enumerator)
{
	int ret;

	DeviceMatch cio2_dm("ipu3-cio2");
	cio2_dm.add("ipu3-csi2 0");
	cio2_dm.add("ipu3-cio2 0");
	cio2_dm.add("ipu3-csi2 1");
	cio2_dm.add("ipu3-cio2 1");
	cio2_dm.add("ipu3-csi2 2");
	cio2_dm.add("ipu3-cio2 2");
	cio2_dm.add("ipu3-csi2 3");
	cio2_dm.add("ipu3-cio2 3");

	DeviceMatch imgu_dm("ipu3-imgu");
	imgu_dm.add("ipu3-imgu 0");
	imgu_dm.add("ipu3-imgu 0 input");
	imgu_dm.add("ipu3-imgu 0 parameters");
	imgu_dm.add("ipu3-imgu 0 output");
	imgu_dm.add("ipu3-imgu 0 viewfinder");
	imgu_dm.add("ipu3-imgu 0 3a stat");
	imgu_dm.add("ipu3-imgu 1");
	imgu_dm.add("ipu3-imgu 1 input");
	imgu_dm.add("ipu3-imgu 1 parameters");
	imgu_dm.add("ipu3-imgu 1 output");
	imgu_dm.add("ipu3-imgu 1 viewfinder");
	imgu_dm.add("ipu3-imgu 1 3a stat");

	cio2MediaDev_ = acquireMediaDevice(enumerator, cio2_dm);
	if (!cio2MediaDev_)
		return false;

	imguMediaDev_ = acquireMediaDevice(enumerator, imgu_dm);
	if (!imguMediaDev_)
		return false;

	/*
	 * Disable all links that are enabled by default on CIO2, as camera
	 * creation enables all valid links it finds.
	 */
	if (cio2MediaDev_->disableLinks())
		return false;

	ret = imguMediaDev_->disableLinks();
	if (ret)
		return false;

	ret = registerCameras();

	return ret == 0;
}

/**
 * \brief Initialize the camera controls
 * \param[in] data The camera data
 *
 * Initialize the camera controls by calculating the controls that the
 * pipeline is responsible for and merging them with the controls computed
 * by the IPA.
 *
 * This function needs data->ipaControls_ to be initialized by the IPA init()
 * function at camera creation time. Always call this function after IPA init().
 *
 * \return 0 on success or a negative error code otherwise
 */
int PipelineHandlerIPU3::initControls(IPU3CameraData *data)
{
	/*
	 * \todo The controls initialized here depend on sensor configuration
	 * and their limits should be updated once the configuration gets
	 * changed.
	 *
	 * Initialize the sensor using its resolution and compute the control
	 * limits.
	 */
	CameraSensor *sensor = data->cio2_.sensor();
	V4L2SubdeviceFormat sensorFormat = {};
	sensorFormat.size = sensor->resolution();
	int ret = sensor->setFormat(&sensorFormat);
	if (ret)
		return ret;

	return updateControls(data);
}

/**
 * \brief Update the camera controls
 * \param[in] data The camera data
 *
 * Compute the camera controls by calculating the controls that the pipeline
 * is responsible for and merging them with the controls computed by the IPA.
 *
 * This function needs data->ipaControls_ to be refreshed when a new
 * configuration is applied to the camera by the IPA configure() function.
 *
 * Always call this function after IPA configure() to make sure to have a
 * properly refreshed IPA controls list.
 *
 * \return 0 on success or a negative error code otherwise
 */
int PipelineHandlerIPU3::updateControls(IPU3CameraData *data)
{
	CameraSensor *sensor = data->cio2_.sensor();
	IPACameraSensorInfo sensorInfo{};

	int ret = sensor->sensorInfo(&sensorInfo);
	if (ret)
		return ret;

	ControlInfoMap::Map controls = IPU3Controls;
	const std::vector<controls::draft::TestPatternModeEnum>
		&testPatternModes = sensor->testPatternModes();
	if (!testPatternModes.empty()) {
		std::vector<ControlValue> values;
		values.reserve(testPatternModes.size());

		for (auto pattern : testPatternModes)
			values.emplace_back(static_cast<int32_t>(pattern));

		controls[&controls::draft::TestPatternMode] = ControlInfo(values);
	}

	/*
	 * Compute the scaler crop limits.
	 *
	 * Initialize the control using the 'Viewfinder' configuration (1280x720)
	 * as the pipeline output resolution and the full sensor size as input
	 * frame (see the todo note in the validate() function about the usage
	 * of the sensor's full frame as ImgU input).
	 */

	/*
	 * The maximum scaler crop rectangle is the analogue crop used to
	 * produce the maximum frame size.
	 */
	const Rectangle &analogueCrop = sensorInfo.analogCrop;
	Rectangle maxCrop = analogueCrop;

	/*
	 * As the ImgU cannot up-scale, the minimum selection rectangle has to
	 * be as large as the pipeline output size. Use the default viewfinder
	 * configuration as the desired output size and calculate the minimum
	 * rectangle required to satisfy the ImgU processing margins, unless the
	 * sensor resolution is smaller.
	 *
	 * \todo This implementation is based on the same assumptions about the
	 * ImgU pipeline configuration described in the viewfinder and main
	 * output sizes calculation in the validate() function.
	 */

	/* A size strictly smaller than the sensor resolution, aligned down to margins. */
	Size minSize = sensor->resolution().shrunkBy({ 1, 1 })
					   .alignedDownTo(ImgUDevice::kOutputMarginWidth,
							  ImgUDevice::kOutputMarginHeight);

	/*
	 * Either the smallest margin-aligned size larger than the viewfinder
	 * size or the adjusted sensor resolution.
	 */
	minSize = kViewfinderSize.grownBy({ 1, 1 })
				 .alignedUpTo(ImgUDevice::kOutputMarginWidth,
					      ImgUDevice::kOutputMarginHeight)
				 .boundedTo(minSize);

	/*
	 * Re-scale in the sensor's native coordinates. Report (0,0) as
	 * top-left corner as we allow application to freely pan the crop area.
	 */
	Rectangle minCrop = Rectangle(minSize).scaledBy(analogueCrop.size(),
							sensorInfo.outputSize);

	controls[&controls::ScalerCrop] = ControlInfo(minCrop, maxCrop, maxCrop);

	/* Add the IPA registered controls to list of camera controls. */
	for (const auto &ipaControl : data->ipaControls_)
		controls[ipaControl.first] = ipaControl.second;

	data->controlInfo_ = ControlInfoMap(std::move(controls),
					    controls::controls);

	return 0;
}

/**
 * \brief Initialise ImgU and CIO2 devices associated with cameras
 *
 * Initialise the two ImgU instances and create cameras, each with an
 * associated CIO2 device instance.
 *
 * \return 0 on success or a negative error code otherwise
 * \retval -ENODEV No camera has been created
 */
int PipelineHandlerIPU3::registerCameras()
{
	int ret;

	ret = imgu0_.init(imguMediaDev_, 0);
	if (ret)
		return ret;

	ret = imgu1_.init(imguMediaDev_, 1);
	if (ret)
		return ret;

	/*
	 * For each CSI-2 receiver on the IPU3, create a Camera if an
	 * image sensor is connected to it and the sensor can produce images
	 * in a compatible format.
	 */
	unsigned int numCameras = 0;
	for (unsigned int id = 0; id < 4 && numCameras < 2; ++id) {
		std::unique_ptr<IPU3CameraData> data =
			std::make_unique<IPU3CameraData>(this);
		std::set<Stream *> streams = {
			&data->outStream_,
			&data->vfStream_,
			&data->rawStream_,
		};
		CIO2Device *cio2 = &data->cio2_;

		ret = cio2->init(cio2MediaDev_, id);
		if (ret)
			continue;

		ret = data->loadIPA();
		if (ret)
			continue;

		/* Initialize the camera properties. */
		data->properties_ = cio2->sensor()->properties();

		ret = initControls(data.get());
		if (ret)
			continue;

		/*
		 * \todo Read the delay values from the sensor itself or from
		 * a sensor database. For now use generic values taken from
		 * the Raspberry Pi pipeline handler.
		 */
		std::unordered_map<uint32_t, DelayedControls::ControlParams> params = {
			{ V4L2_CID_ANALOGUE_GAIN, { 1, false } },
			{ V4L2_CID_EXPOSURE, { 2, false } },
		};
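
		/*
		 * With these values, an exposure control pushed at frame N
		 * takes effect in the frame exposed at N + 2, and an
		 * analogue gain control at N + 1. The boolean is assumed to
		 * select whether the control requires a priority write.
		 */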

		data->delayedCtrls_ =
			std::make_unique<DelayedControls>(cio2->sensor()->device(),
							  params);
		data->cio2_.frameStart().connect(data.get(),
						 &IPU3CameraData::frameStart);

		/* Convert the sensor rotation to a transformation */
		const auto &rotation = data->properties_.get(properties::Rotation);
		if (!rotation)
			LOG(IPU3, Warning) << "Rotation control not exposed by "
					   << cio2->sensor()->id()
					   << ". Assume rotation 0";

		/*
		 * \todo Dynamically assign ImgU and output devices to each
		 * stream and camera; as of now, limit support to two cameras
		 * only, and assign imgu0 to the first one and imgu1 to the
		 * second.
		 */
		data->imgu_ = numCameras ? &imgu1_ : &imgu0_;

		/*
		 * Connect the video devices' 'bufferReady' signals to their
		 * slots to implement the image processing pipeline.
		 *
		 * Frames produced by the CIO2 unit are passed to the
		 * associated ImgU input where they get processed and
		 * returned through the ImgU main and secondary outputs.
		 */
		data->cio2_.bufferReady().connect(data.get(),
					&IPU3CameraData::cio2BufferReady);
		data->cio2_.bufferAvailable.connect(
			data.get(), &IPU3CameraData::queuePendingRequests);
		data->imgu_->input_->bufferReady.connect(&data->cio2_,
					&CIO2Device::tryReturnBuffer);
		data->imgu_->output_->bufferReady.connect(data.get(),
					&IPU3CameraData::imguOutputBufferReady);
		data->imgu_->viewfinder_->bufferReady.connect(data.get(),
					&IPU3CameraData::imguOutputBufferReady);
		data->imgu_->param_->bufferReady.connect(data.get(),
					&IPU3CameraData::paramBufferReady);
		data->imgu_->stat_->bufferReady.connect(data.get(),
					&IPU3CameraData::statBufferReady);

		/* Create and register the Camera instance. */
		const std::string &cameraId = cio2->sensor()->id();
		std::shared_ptr<Camera> camera =
			Camera::create(std::move(data), cameraId, streams);

		registerCamera(std::move(camera));

		LOG(IPU3, Info)
			<< "Registered Camera[" << numCameras << "] \""
			<< cameraId << "\""
			<< " connected to CSI-2 receiver " << id;

		numCameras++;
	}

	return numCameras ? 0 : -ENODEV;
}

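/**
 * \brief Load and initialise the IPA module for the camera
 *
 * Create the IPA proxy, connect its signals to their IPU3CameraData slots,
 * and initialise the IPA with sensor information and controls corresponding
 * to the full sensor resolution.
 *
 * \return 0 on success or a negative error code otherwise
 */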
int IPU3CameraData::loadIPA()
{
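	/*
	 * Create the IPA proxy. The two integer arguments are assumed to
	 * select the accepted IPA interface version range (1 to 1).
	 */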
	ipa_ = IPAManager::createIPA<ipa::ipu3::IPAProxyIPU3>(pipe(), 1, 1);
	if (!ipa_)
		return -ENOENT;

	ipa_->setSensorControls.connect(this, &IPU3CameraData::setSensorControls);
	ipa_->paramsBufferReady.connect(this, &IPU3CameraData::paramsBufferReady);
	ipa_->metadataReady.connect(this, &IPU3CameraData::metadataReady);

	/*
	 * Pass the sensor info to the IPA to initialize controls.
	 *
	 * \todo Find a way to initialize IPA controls without basing their
	 * limits on a particular sensor mode. We currently pass sensor
	 * information corresponding to the largest sensor resolution, and the
	 * IPA uses this to compute limits for supported controls. There's a
	 * discrepancy between the need to compute IPA control limits at init
	 * time, and the fact that those limits may depend on the sensor mode.
	 * Research is required to find out how to handle this issue.
	 */
	CameraSensor *sensor = cio2_.sensor();
	V4L2SubdeviceFormat sensorFormat = {};
	sensorFormat.size = sensor->resolution();
	int ret = sensor->setFormat(&sensorFormat);
	if (ret)
		return ret;

	IPACameraSensorInfo sensorInfo{};
	ret = sensor->sensorInfo(&sensorInfo);
	if (ret)
		return ret;

	/*
	 * The IPA tuning file is named from the sensor model. If the tuning file
	 * isn't found, fall back to the 'uncalibrated' file.
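	 * For example, a sensor whose model() reports 'imx355' would look up
	 * 'imx355.yaml' in the IPA configuration directories (hypothetical
	 * model name, for illustration).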
	 */
	std::string ipaTuningFile =
		ipa_->configurationFile(sensor->model() + ".yaml", "uncalibrated.yaml");

	ret = ipa_->init(IPASettings{ ipaTuningFile, sensor->model() },
			 sensorInfo, sensor->controls(), &ipaControls_);
	if (ret) {
		LOG(IPU3, Error) << "Failed to initialise the IPU3 IPA";
		return ret;
	}

	return 0;
}

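/**
 * \brief Apply sensor and lens controls computed by the IPA
 * \param[in] id The IPA frame identifier (unused)
 * \param[in] sensorControls The sensor controls, queued through DelayedControls
 * \param[in] lensControls The lens controls, applied immediately
 *
 * Push the sensor controls to the DelayedControls instance so they take
 * effect on the correct frame. If the sensor has a focus lens and the IPA
 * provided a V4L2_CID_FOCUS_ABSOLUTE value, apply it to the lens directly.
 */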
void IPU3CameraData::setSensorControls([[maybe_unused]] unsigned int id,
				       const ControlList &sensorControls,
				       const ControlList &lensControls)
{
	delayedCtrls_->push(sensorControls);

	CameraLens *focusLens = cio2_.sensor()->focusLens();
	if (!focusLens)
		return;

	if (!lensControls.contains(V4L2_CID_FOCUS_ABSOLUTE))
		return;

	const ControlValue &focusValue = lensControls.get(V4L2_CID_FOCUS_ABSOLUTE);

	focusLens->setFocusPosition(focusValue.get<int32_t>());
}

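/**
 * \brief Handle the IPA completion of a parameters buffer
 * \param[in] id The frame identifier
 *
 * Once the IPA has filled the ImgU parameters buffer, queue the request's
 * output buffers to the ImgU outputs, then queue the parameters, statistics
 * and raw buffers to start processing the frame.
 */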
void IPU3CameraData::paramsBufferReady(unsigned int id)
{
	IPU3Frames::Info *info = frameInfos_.find(id);
	if (!info)
		return;

	/* Queue all buffers from the request destined for the ImgU. */
	for (auto it : info->request->buffers()) {
		const Stream *stream = it.first;
		FrameBuffer *outbuffer = it.second;

		if (stream == &outStream_)
			imgu_->output_->queueBuffer(outbuffer);
		else if (stream == &vfStream_)
			imgu_->viewfinder_->queueBuffer(outbuffer);
	}

	info->paramBuffer->_d()->metadata().planes()[0].bytesused =
		sizeof(struct ipu3_uapi_params);
	imgu_->param_->queueBuffer(info->paramBuffer);
	imgu_->stat_->queueBuffer(info->statBuffer);
	imgu_->input_->queueBuffer(info->rawBuffer);
}

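/**
 * \brief Handle metadata computed by the IPA for a frame
 * \param[in] id The frame identifier
 * \param[in] metadata The metadata computed by the IPA
 *
 * Merge the IPA metadata into the request, and complete the request if all
 * of its buffers and metadata have been processed.
 */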
void IPU3CameraData::metadataReady(unsigned int id, const ControlList &metadata)
{
	IPU3Frames::Info *info = frameInfos_.find(id);
	if (!info)
		return;

	Request *request = info->request;
	request->metadata().merge(metadata);

	info->metadataProcessed = true;
	if (frameInfos_.tryComplete(info))
		pipe()->completeRequest(request);
}

/* -----------------------------------------------------------------------------
 * Buffer Ready slots
 */

/**
 * \brief Handle buffers completion at the ImgU output
 * \param[in] buffer The completed buffer
 *
 * Buffers completed from the ImgU output are directed to the application.
 */
void IPU3CameraData::imguOutputBufferReady(FrameBuffer *buffer)
{
	IPU3Frames::Info *info = frameInfos_.find(buffer);
	if (!info)
		return;

	Request *request = info->request;

	pipe()->completeBuffer(request, buffer);

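	/*
	 * The fixed pipeline depth of 3 is assumed to cover the sensor,
	 * CIO2 and ImgU stages.
	 */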
	request->metadata().set(controls::draft::PipelineDepth, 3);
	/* \todo Actually apply the scaler crop region to the ImgU. */
	const auto &scalerCrop = request->controls().get(controls::ScalerCrop);
	if (scalerCrop)
		cropRegion_ = *scalerCrop;
	request->metadata().set(controls::ScalerCrop, cropRegion_);

	if (frameInfos_.tryComplete(info))
		pipe()->completeRequest(request);
}

/**
 * \brief Handle buffers completion at the CIO2 output
 * \param[in] buffer The completed buffer
 *
 * Buffers completed from the CIO2 are immediately queued to the ImgU unit
 * for further processing.
 */
void IPU3CameraData::cio2BufferReady(FrameBuffer *buffer)
{
	IPU3Frames::Info *info = frameInfos_.find(buffer);
	if (!info)
		return;

	Request *request = info->request;

	/* If the buffer is cancelled, force completion of the whole request. */
	if (buffer->metadata().status == FrameMetadata::FrameCancelled) {
		for (auto it : request->buffers()) {
			FrameBuffer *b = it.second;
			b->_d()->cancel();
			pipe()->completeBuffer(request, b);
		}

		frameInfos_.remove(info);
		pipe()->completeRequest(request);
		return;
	}

	/*
	 * Record the sensor's timestamp in the request metadata.
	 *
	 * \todo The sensor timestamp should be better estimated by connecting
	 * to the V4L2Device::frameStart signal.
	 */
	request->metadata().set(controls::SensorTimestamp,
				buffer->metadata().timestamp);

	info->effectiveSensorControls = delayedCtrls_->get(buffer->metadata().sequence);

	if (request->findBuffer(&rawStream_))
		pipe()->completeBuffer(request, buffer);

	ipa_->fillParamsBuffer(info->id, info->paramBuffer->cookie());
}

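/**
 * \brief Handle a parameters buffer dequeued from the ImgU
 * \param[in] buffer The completed parameters buffer
 *
 * Mark the parameters buffer as dequeued, and complete the request if all
 * other items for the frame have already completed.
 */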
void IPU3CameraData::paramBufferReady(FrameBuffer *buffer)
{
	IPU3Frames::Info *info = frameInfos_.find(buffer);
	if (!info)
		return;

	info->paramDequeued = true;

	/*
	 * tryComplete() will delete info if it completes the IPU3Frame.
	 * In that event, we must have obtained the Request beforehand.
	 *
	 * \todo Improve the FrameInfo API to avoid this type of issue
	 */
	Request *request = info->request;

	if (frameInfos_.tryComplete(info))
		pipe()->completeRequest(request);
}

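/**
 * \brief Handle a statistics buffer dequeued from the ImgU
 * \param[in] buffer The completed statistics buffer
 *
 * Forward the statistics to the IPA for processing, unless the buffer was
 * cancelled, in which case mark the metadata as processed and try to
 * complete the request directly.
 */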
void IPU3CameraData::statBufferReady(FrameBuffer *buffer)
{
	IPU3Frames::Info *info = frameInfos_.find(buffer);
	if (!info)
		return;

	Request *request = info->request;

	if (buffer->metadata().status == FrameMetadata::FrameCancelled) {
		info->metadataProcessed = true;

		/*
		 * tryComplete() will delete info if it completes the IPU3Frame.
		 * In that event, we must have obtained the Request beforehand.
		 */
		if (frameInfos_.tryComplete(info))
			pipe()->completeRequest(request);

		return;
	}

	int64_t timestamp =
		request->metadata().get(controls::SensorTimestamp).value_or(0);
	ipa_->processStatsBuffer(info->id, timestamp,
				 info->statBuffer->cookie(),
				 info->effectiveSensorControls);
}

/**
 * \brief Handle the start of frame exposure signal
 * \param[in] sequence The sequence number of the frame
 *
 * Inspect the list of pending requests waiting for a RAW frame to be
 * produced and apply controls for the 'next' one.
 *
 * Some controls need to be applied immediately, such as the
 * TestPatternMode one. Other controls are handled through the delayed
 * controls class.
 */
void IPU3CameraData::frameStart(uint32_t sequence)
{
	delayedCtrls_->applyControls(sequence);

	if (processingRequests_.empty())
		return;

	/*
	 * Handle controls to be set immediately on the next frame.
	 * This currently only handles the TestPatternMode control.
	 *
	 * \todo Synchronize with the sequence number
	 */
	Request *request = processingRequests_.front();
	processingRequests_.pop();

	const auto &testPatternMode = request->controls().get(controls::draft::TestPatternMode);
	if (!testPatternMode)
		return;

	int ret = cio2_.sensor()->setTestPatternMode(
		static_cast<controls::draft::TestPatternModeEnum>(*testPatternMode));
	if (ret) {
		LOG(IPU3, Error) << "Failed to set test pattern mode: "
				 << ret;
		return;
	}

	request->metadata().set(controls::draft::TestPatternMode,
				*testPatternMode);
}

REGISTER_PIPELINE_HANDLER(PipelineHandlerIPU3, "ipu3")

} /* namespace libcamera */