.. SPDX-License-Identifier: CC-BY-SA-4.0

Using libcamera in a C++ application
====================================

This tutorial shows how to create a C++ application that uses libcamera to
interface with a camera on a system, capture frames from it for 3 seconds, and
write metadata about the frames to standard out.

Application skeleton
--------------------

Most of the code in this tutorial runs in the ``int main()`` function
with a separate global function to handle events. The two functions need
to share data, which is stored in global variables for simplicity. A
production-ready application would organize the various objects it creates
into classes, and the event handler would be a class member function to
provide context data without requiring global variables.

Use the following code snippets as the initial application skeleton.
It already lists all the necessary include directives and instructs the
compiler to use the libcamera namespace, which gives access to the names and
types defined by libcamera without the need to prefix them.

.. code:: cpp

   #include <iomanip>
   #include <iostream>
   #include <memory>

   #include <libcamera/libcamera.h>

   using namespace libcamera;

   int main()
   {
       // Code to follow

       return 0;
   }

Camera Manager
--------------

Every libcamera-based application needs an instance of a `CameraManager`_ that
runs for the life of the application. When the Camera Manager starts, it
enumerates all the cameras detected in the system. Behind the scenes, libcamera
abstracts and manages the complex pipelines that kernel drivers expose through
the `Linux Media Controller`_ and `Video for Linux`_ (V4L2) APIs, meaning that
an application doesn’t need to handle device or driver specific details.

.. _CameraManager: http://libcamera.org/api-html/classlibcamera_1_1CameraManager.html
.. _Linux Media Controller: https://www.kernel.org/doc/html/latest/media/uapi/mediactl/media-controller-intro.html
.. _Video for Linux: https://www.linuxtv.org/docs.php

Before the ``int main()`` function, create a global shared pointer
variable for the camera to support the event callback later:

.. code:: cpp

   std::shared_ptr<Camera> camera;

Create a Camera Manager instance at the beginning of the main function, and then
start it. An application must only create a single Camera Manager instance.

The CameraManager can be stored in a unique_ptr to automate deleting the
instance when it is no longer used, but care must be taken to ensure all
cameras are released explicitly before this happens.

.. code:: cpp

   std::unique_ptr<CameraManager> cm = std::make_unique<CameraManager>();
   cm->start();

During the application initialization, the Camera Manager is started to
enumerate all the supported devices and create cameras that the application can
interact with.

Once the camera manager is started, we can use it to iterate the available
cameras in the system:

.. code:: cpp

   for (auto const &camera : cm->cameras())
       std::cout << camera->id() << std::endl;

Printing the camera id lists the machine-readable unique identifiers, so for
example, the output on a Linux machine with a connected USB webcam is
``\_SB_.PCI0.XHC_.RHUB.HS08-8:1.0-5986:2115``.

What libcamera considers a camera
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The libcamera library considers any unique source of video frames, which
usually corresponds to a camera sensor, as a single camera device. Camera
devices expose streams, which are obtained by processing data from the single
image source and all share some basic properties, such as the frame duration
and the image exposure time, as these only depend on the image source
configuration.

Applications select one or multiple Camera devices they wish to operate on, and
require frames from at least one of their Streams.
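
Whether a camera provides more than one stream depends on the device and its
pipeline handler, but as a rough illustration (a sketch only, using the
``camera`` pointer and the ``Camera::generateConfiguration()`` call introduced
later in this guide), a single camera can be asked for several streams by
requesting multiple roles:

.. code:: cpp

   /* Sketch: request two streams from one camera by passing two roles. */
   std::unique_ptr<CameraConfiguration> config =
       camera->generateConfiguration({ StreamRole::Viewfinder, StreamRole::StillCapture });

   /* One StreamConfiguration is generated per requested role. */
   for (const StreamConfiguration &cfg : *config)
       std::cout << cfg.toString() << std::endl;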

Create and acquire a camera
---------------------------

This example application uses a single camera (the first enumerated one) that
the Camera Manager reports as available to applications.

Camera devices are stored by the CameraManager in a list accessible by index, or
can be retrieved by name through the ``CameraManager::get()`` function. The
code below retrieves the name of the first available camera and gets the camera
by name from the Camera Manager.

.. code:: cpp

   std::string cameraId = cm->cameras()[0]->id();
   camera = cm->get(cameraId);

   /*
    * Note that this is equivalent to:
    * camera = cm->cameras()[0];
    */
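
Note that ``cm->cameras()[0]`` assumes at least one camera has been enumerated.
A minimal guard, not part of the original example, avoids dereferencing an
empty list:

.. code:: cpp

   if (cm->cameras().empty()) {
       std::cout << "No cameras were identified on the system." << std::endl;
       cm->stop();
       return 1;
   }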

Once a camera has been selected, an application needs to acquire an exclusive
lock on it so no other application can use it.

.. code:: cpp

   camera->acquire();
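
``Camera::acquire()`` returns 0 on success and a negative error code otherwise,
for example when another process already holds the camera. A minimal sketch of
checking the result, not part of the original example:

.. code:: cpp

   if (camera->acquire()) {
       std::cerr << "Failed to acquire camera" << std::endl;
       return 1;
   }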

Configure the camera
--------------------

Before the application can do anything with the camera, it needs to configure
the image format and sizes of the streams it wants to capture frames from.

Stream configurations are represented by instances of the
``StreamConfiguration`` class, which are grouped together in a
``CameraConfiguration`` object. Before an application can start setting its
desired configuration, a ``CameraConfiguration`` instance needs to be generated
from the ``Camera`` device using the ``Camera::generateConfiguration()``
function.

The libcamera library uses the ``StreamRole`` enumeration to define predefined
ways an application intends to use a camera. The
``Camera::generateConfiguration()`` function accepts a list of desired roles and
generates a ``CameraConfiguration`` with the best stream parameters for each of
the requested roles. If the camera can handle the requested roles, it returns
an initialized ``CameraConfiguration``; if it can't, it returns a null pointer.

It is possible for applications to generate an empty ``CameraConfiguration``
instance by not providing any roles. The desired configuration will then have
to be filled in and validated manually.

In the example application, create a new configuration variable and use the
``Camera::generateConfiguration`` function to produce a ``CameraConfiguration``
for the single ``StreamRole::Viewfinder`` role.

.. code:: cpp

   std::unique_ptr<CameraConfiguration> config = camera->generateConfiguration( { StreamRole::Viewfinder } );
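
As noted above, ``generateConfiguration()`` returns a null pointer when the
requested roles cannot be satisfied, so a defensive variant (a sketch, not part
of this example) checks the result before using it:

.. code:: cpp

   if (!config) {
       std::cerr << "Failed to generate camera configuration" << std::endl;
       return 1;
   }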

The generated ``CameraConfiguration`` has a ``StreamConfiguration`` instance for
each ``StreamRole`` the application requested. Each of these has a default size
and format that the camera assigned, and a list of supported pixel formats and
sizes.

The code below accesses the first and only ``StreamConfiguration`` item in the
``CameraConfiguration`` and outputs its parameters to standard output.

.. code:: cpp

   StreamConfiguration &streamConfig = config->at(0);
   std::cout << "Default viewfinder configuration is: " << streamConfig.toString() << std::endl;

This is expected to output something like:

   ``Default viewfinder configuration is: 1280x720-MJPEG``

Change and validate the configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

With an initialized ``CameraConfiguration``, an application can make changes
to the parameters it contains. For example, to change the width and height,
use the following code:

.. code:: cpp

   streamConfig.size.width = 640;
   streamConfig.size.height = 480;

If an application changes any parameters, it must validate the configuration
before applying it to the camera using the ``CameraConfiguration::validate()``
function. If the new values are not supported by the ``Camera`` device, the
validation process adjusts the parameters to what it considers to be the closest
supported values.

The ``validate`` function returns a `Status`_ which applications shall check to
see if the Pipeline Handler adjusted the configuration.

.. _Status: http://libcamera.org/api-html/classlibcamera_1_1CameraConfiguration.html#a64163f21db2fe1ce0a6af5a6f6847744

For example, the code above sets the width and height to 640x480, but if the
camera cannot produce an image that large, it might adjust the configuration
to a supported size of 320x240 and return ``Adjusted`` as the validation
status.

If the configuration to validate cannot be adjusted to a set of supported
values, the validation procedure fails and returns the ``Invalid`` status.
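
A more defensive variant than the example below, shown here only as a sketch,
checks the returned status explicitly:

.. code:: cpp

   CameraConfiguration::Status status = config->validate();
   if (status == CameraConfiguration::Invalid) {
       std::cerr << "Camera configuration invalid" << std::endl;
       return 1;
   }

   if (status == CameraConfiguration::Adjusted)
       std::cout << "Camera configuration was adjusted" << std::endl;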

For this example application, the code below prints the adjusted values to
standard out.

.. code:: cpp

   config->validate();
   std::cout << "Validated viewfinder configuration is: " << streamConfig.toString() << std::endl;

For example, the output might be something like

   ``Validated viewfinder configuration is: 320x240-MJPEG``

A validated ``CameraConfiguration`` can be given to the ``Camera`` device to
be applied to the system.

.. code:: cpp

   camera->configure(config.get());

If an application doesn’t validate the configuration before calling
``Camera::configure()``, the call may fail if the given configuration would
have to be adjusted.
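
``Camera::configure()`` returns 0 on success and a negative error code on
failure, so a sketch of the call with error checking could be:

.. code:: cpp

   if (camera->configure(config.get()) < 0) {
       std::cerr << "Failed to configure camera" << std::endl;
       return 1;
   }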

Allocate FrameBuffers
---------------------

An application needs to reserve the memory that libcamera can write incoming
frames and data to, and that the application can then read. The libcamera
library uses ``FrameBuffer`` instances to represent these memory buffers. An
application should reserve enough memory for the frame size the streams need,
based on the configured image sizes and formats.

The libcamera library consumes buffers provided by applications as
``FrameBuffer`` instances, which makes libcamera a consumer of buffers exported
by other devices (such as displays or video encoders), or allocated from an
external allocator (such as ION on Android).

In some situations, applications do not have any means to allocate or get hold
of suitable buffers, for instance, when no other device is involved, or on Linux
platforms that lack a centralized allocator. The ``FrameBufferAllocator`` class
provides a buffer allocator an application can use in these situations.

An application doesn’t have to use the default ``FrameBufferAllocator`` that
libcamera provides. It can instead allocate memory manually and pass the buffers
in ``Request``\s (read more about ``Request`` in `the frame capture section
<#frame-capture>`_ of this guide). The example in this guide covers using the
``FrameBufferAllocator`` that libcamera provides.

Using the libcamera ``FrameBufferAllocator``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Applications create a ``FrameBufferAllocator`` for a Camera and use it
to allocate buffers for streams of a ``CameraConfiguration`` with the
``allocate()`` function.

The list of allocated buffers can be retrieved using the ``Stream`` instance
as the parameter of the ``FrameBufferAllocator::buffers()`` function.

.. code:: cpp

   FrameBufferAllocator *allocator = new FrameBufferAllocator(camera);

   for (StreamConfiguration &cfg : *config) {
       int ret = allocator->allocate(cfg.stream());
       if (ret < 0) {
           std::cerr << "Can't allocate buffers" << std::endl;
           return -ENOMEM;
       }

       unsigned int allocated = allocator->buffers(cfg.stream()).size();
       std::cout << "Allocated " << allocated << " buffers for stream" << std::endl;
   }

Frame Capture
~~~~~~~~~~~~~

The libcamera library implements a streaming model based on per-frame requests.
For each frame an application wants to capture, it must queue a request for it
to the camera. With libcamera, a ``Request`` associates at least one ``Stream``
with a ``FrameBuffer`` representing the memory location where frames have to be
stored.

First, by using the ``Stream`` instance associated with each
``StreamConfiguration``, retrieve the list of ``FrameBuffer``\s created for it
using the frame allocator. Then create a vector of requests to be submitted to
the camera.

.. code:: cpp

   Stream *stream = streamConfig.stream();
   const std::vector<std::unique_ptr<FrameBuffer>> &buffers = allocator->buffers(stream);
   std::vector<Request *> requests;

Proceed to fill the request vector by creating ``Request`` instances from the
camera device, and associate a buffer for each of them for the ``Stream``.

.. code:: cpp

   for (unsigned int i = 0; i < buffers.size(); ++i) {
       Request *request = camera->createRequest();
       if (!request) {
           std::cerr << "Can't create request" << std::endl;
           return -ENOMEM;
       }

       const std::unique_ptr<FrameBuffer> &buffer = buffers[i];
       int ret = request->addBuffer(stream, buffer.get());
       if (ret < 0) {
           std::cerr << "Can't set buffer for request" << std::endl;
           return ret;
       }

       requests.push_back(request);
   }

.. TODO: Controls

.. TODO: A request can also have controls or parameters that you can apply to the image.

Event handling and callbacks
----------------------------

The libcamera library uses the concept of `signals and slots`_ (similar to `Qt
Signals and Slots`_) to connect events with callbacks to handle them.

.. _signals and slots: http://libcamera.org/api-html/classlibcamera_1_1Signal.html#details
.. _Qt Signals and Slots: https://doc.qt.io/qt-5/signalsandslots.html

The ``Camera`` device emits two signals that applications can connect to in
order to execute callbacks on frame completion events.

The ``Camera::bufferCompleted`` signal notifies applications that a buffer with
image data is available. Receiving notifications about the single buffer
completion event allows applications to implement partial request completion
support, and to inspect the buffer content before the request it is part of has
fully completed.
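
This example does not use that signal, but a minimal slot for it could look
like the following sketch, assuming the signal carries the originating request
and the completed buffer (as in current libcamera):

.. code:: cpp

   static void bufferComplete(Request *request, FrameBuffer *buffer)
   {
       /* Inspect buffer->metadata() here, before the whole request completes. */
   }

   /* In main(), before starting the camera: */
   camera->bufferCompleted.connect(bufferComplete);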

The ``Camera::requestCompleted`` signal notifies applications that a request
has completed, which means all the buffers the request contains have now
completed. Request completion notifications are always emitted in the same order
as the requests have been queued to the camera.

To be notified when a signal is emitted, connect a slot function to the signal
in the application code.

.. code:: cpp

   camera->requestCompleted.connect(requestComplete);

For this example application, only the ``Camera::requestCompleted`` signal gets
handled and the matching ``requestComplete`` slot function outputs information
about the FrameBuffer to standard output. This callback is typically where an
application accesses the image data from the camera and does something with it.

Signals operate in the libcamera ``CameraManager`` thread context, so it is
important not to block the thread for a long time, as this blocks internal
processing of the camera pipelines, and can affect realtime performance.
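
One common way to honour this constraint, shown here only as a sketch and as an
alternative to the simpler ``requestComplete`` function used in the rest of
this guide, is to have the slot merely hand the completed request over to the
application thread, for example through a mutex-protected queue:

.. code:: cpp

   #include <mutex>
   #include <queue>

   /* Completed requests handed over from the CameraManager thread. */
   static std::mutex completedMutex;
   static std::queue<Request *> completedRequests;

   static void requestComplete(Request *request)
   {
       /* Keep the slot short: record the request and return immediately. */
       std::lock_guard<std::mutex> lock(completedMutex);
       completedRequests.push(request);
   }

   /*
    * The application's own loop later pops completedRequests and performs the
    * heavy processing (image conversion, disk I/O, ...) there.
    */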

Handle request completion events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Create the ``requestComplete`` function by matching the slot signature:

.. code:: cpp

   static void requestComplete(Request *request)
   {
       // Code to follow
   }

Request completion events can be emitted for requests which have been canceled,
for example, by unexpected application shutdown. To avoid an application
processing invalid image data, it’s worth checking that the request has
completed successfully. The list of request completion statuses is available in
the `Request::Status`_ class enum documentation.

.. _Request::Status: https://www.libcamera.org/api-html/classlibcamera_1_1Request.html#a2209ba8d51af8167b25f6e3e94d5c45b

.. code:: cpp

   if (request->status() == Request::RequestCancelled)
      return;

If the ``Request`` has completed successfully, applications can access the
completed buffers using the ``Request::buffers()`` function, which returns a map
of ``FrameBuffer`` instances associated with the ``Stream`` that produced the
images.

.. code:: cpp

   const std::map<Stream *, FrameBuffer *> &buffers = request->buffers();

Iterating through the map allows applications to inspect each completed buffer
in this request, and access the metadata associated with each frame.

The metadata buffer contains information such as the capture status, a
timestamp, and the number of bytes used, as described in the `FrameMetadata`_
documentation.

.. _FrameMetadata: http://libcamera.org/api-html/structlibcamera_1_1FrameMetadata.html

.. code:: cpp

   for (auto bufferPair : buffers) {
       FrameBuffer *buffer = bufferPair.second;
       const FrameMetadata &metadata = buffer->metadata();
   }

For this example application, inside the ``for`` loop from above, we can print
the frame sequence number and details of the planes.

.. code:: cpp

   std::cout << " seq: " << std::setw(6) << std::setfill('0') << metadata.sequence << " bytesused: ";

   unsigned int nplane = 0;
   for (const FrameMetadata::Plane &plane : metadata.planes) {
       std::cout << plane.bytesused;
       if (++nplane < metadata.planes.size())
           std::cout << "/";
   }

   std::cout << std::endl;

The expected output shows each monotonically increasing frame sequence number
and the bytes used by planes.

.. code:: text

   seq: 000000 bytesused: 1843200
   seq: 000002 bytesused: 1843200
   seq: 000004 bytesused: 1843200
   seq: 000006 bytesused: 1843200
   seq: 000008 bytesused: 1843200
   seq: 000010 bytesused: 1843200
   seq: 000012 bytesused: 1843200
   seq: 000014 bytesused: 1843200
   seq: 000016 bytesused: 1843200
   seq: 000018 bytesused: 1843200
   seq: 000020 bytesused: 1843200
   seq: 000022 bytesused: 1843200
   seq: 000024 bytesused: 1843200
   seq: 000026 bytesused: 1843200
   seq: 000028 bytesused: 1843200
   seq: 000030 bytesused: 1843200
   seq: 000032 bytesused: 1843200
   seq: 000034 bytesused: 1843200
   seq: 000036 bytesused: 1843200
   seq: 000038 bytesused: 1843200
   seq: 000040 bytesused: 1843200
   seq: 000042 bytesused: 1843200

A completed buffer of course contains image data, which can be accessed
through the per-plane dma-buf file descriptor transported by the
``FrameBuffer`` instance. An example of how to write image data to disk is
available in the `BufferWriter class`_, which is part of the ``cam`` utility
application in the libcamera repository.

.. _BufferWriter class: https://git.linuxtv.org/libcamera.git/tree/src/cam/buffer_writer.cpp
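
As a rough sketch only (the exact file descriptor accessor and the presence of
a per-plane offset vary between libcamera versions), each plane can be mapped
for read access with ``mmap()``:

.. code:: cpp

   #include <sys/mman.h>

   for (const FrameBuffer::Plane &plane : buffer->planes()) {
       /*
        * plane.fd wraps the dma-buf file descriptor; the accessor for the raw
        * fd differs between libcamera versions (FileDescriptor::fd() in older
        * releases, SharedFD::get() in newer ones).
        */
       int fd = plane.fd.get();

       void *memory = mmap(nullptr, plane.length, PROT_READ, MAP_SHARED, fd, 0);
       if (memory == MAP_FAILED)
           continue;

       /* plane.length bytes of image data are available at 'memory'. */

       munmap(memory, plane.length);
   }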

With the handling of this request completed, it is possible to re-use the
buffers by adding them to a new ``Request`` instance with their matching
streams, and finally, queue the new capture request to the camera device:

.. code:: cpp

   request = camera->createRequest();
   if (!request) {
       std::cerr << "Can't create request" << std::endl;
       return;
   }

   for (auto it = buffers.begin(); it != buffers.end(); ++it) {
       Stream *stream = it->first;
       FrameBuffer *buffer = it->second;

       request->addBuffer(stream, buffer);
   }

   camera->queueRequest(request);

Request queueing
----------------

The ``Camera`` device is now ready to receive frame capture requests and
actually start delivering frames. In order to prepare for that, an application
needs to first start the camera, and queue requests to it for them to be
processed.

In the main() function, just after having connected the
``Camera::requestCompleted`` signal to the callback handler, start the camera
and queue all the previously created requests.

.. code:: cpp

   camera->start();
   for (Request *request : requests)
       camera->queueRequest(request);
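
Both ``Camera::start()`` and ``Camera::queueRequest()`` return 0 on success and
a negative error code otherwise; a sketch of the same sequence with error
checking:

.. code:: cpp

   if (camera->start()) {
       std::cerr << "Failed to start camera" << std::endl;
       return 1;
   }

   for (Request *request : requests) {
       if (camera->queueRequest(request) < 0) {
           std::cerr << "Failed to queue request" << std::endl;
           return 1;
       }
   }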

Start an event loop
~~~~~~~~~~~~~~~~~~~

The libcamera library needs an event loop to monitor and dispatch events
generated by the video devices that are part of the capture pipeline. libcamera
provides its own ``EventDispatcher`` class (inspired by the `Qt event system`_)
to process and deliver events generated by ``EventNotifiers``.

.. _Qt event system: https://doc.qt.io/qt-5/eventsandfilters.html

The libcamera library implements this by creating instances of the
``EventNotifier`` class, which models a file descriptor event source registered
to an ``EventDispatcher``. Whenever the ``EventDispatcher`` detects an event on
a notifier it is monitoring, it emits the notifier's
``EventNotifier::activated`` signal. The libcamera components connect to the
notifiers' signals and emit application-visible events, such as the
``Camera::bufferCompleted`` and ``Camera::requestCompleted`` signals.

The code below retrieves a pointer to the system-wide event dispatcher and,
for a fixed duration of 3 seconds, processes all the events detected in the
system.

.. code:: cpp

   EventDispatcher *dispatcher = cm->eventDispatcher();
   Timer timer;
   timer.start(3000);
   while (timer.isRunning())
       dispatcher->processEvents();

Clean up and stop the application
---------------------------------

The application is now finished with the camera and the resources the camera
uses, so it needs to do the following:

-  stop the camera
-  free the buffers in the FrameBufferAllocator and delete it
-  release the lock on the camera and reset the pointer to it
-  stop the camera manager

.. code:: cpp

   camera->stop();
   allocator->free(stream);
   delete allocator;
   camera->release();
   camera.reset();
   cm->stop();

   return 0;

In this instance the CameraManager will automatically be deleted by the
unique_ptr implementation when it goes out of scope.

Build and run instructions
--------------------------

To build the application, we recommend that you use the `Meson build system`_
which is also the official build system of the libcamera library.

Make sure both ``meson`` and ``libcamera`` are installed on your system. Please
refer to your distribution documentation to install meson, and install the most
recent version of libcamera from the git repository at `Linux TV`_. You also
need to install the ``pkg-config`` tool so that the libcamera.so object install
location on the system can be correctly identified.

.. _Meson build system: https://mesonbuild.com/
.. _Linux TV: https://git.linuxtv.org/libcamera.git/

Dependencies
~~~~~~~~~~~~

The test application presented here depends on the libcamera library being
available in a path that meson can identify. The libcamera install procedure
performed using the ``ninja install`` command may by default deploy the
libcamera components in the ``/usr/local/lib`` path, or a package manager may
install it to ``/usr/lib`` depending on your distribution. If meson is unable to
find the location of the libcamera installation, you may need to instruct meson
to look into a specific path when searching for ``libcamera.so`` by setting the
``PKG_CONFIG_PATH`` environment variable to the right location.

Adjust the following command to use the ``pkgconfig`` directory where libcamera
has been installed in your system.

.. code:: shell

   export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig/

Verify that ``pkg-config`` can identify the ``libcamera`` library with

.. code:: shell

   $ pkg-config --libs --cflags libcamera
     -I/usr/local/include/libcamera -L/usr/local/lib -lcamera -lcamera-base

``meson`` can alternatively use ``cmake`` to locate packages; please refer to
the ``meson`` documentation if you prefer to use it in place of ``pkgconfig``.

Build file
~~~~~~~~~~

With the dependencies correctly identified, prepare a ``meson.build`` build file
to be placed in the same directory where the application lives. You can
name your application as you like, but be sure to update the following snippet
accordingly. In this example, the application file has been named
``simple-cam.cpp``.

.. code::

   project('simple-cam', 'cpp')

   simple_cam = executable('simple-cam',
       'simple-cam.cpp',
       dependencies: dependency('libcamera', required : true))

The ``dependencies`` line instructs meson to ask ``pkgconfig`` (or ``cmake``)
to locate the ``libcamera`` library, which the test application will be
dynamically linked against.

With the build file in place, compile and run the application with:

.. code:: shell

   $ meson build
   $ cd build
   $ ninja
   $ ./simple-cam

It is possible to increase the library debug output by using environment
variables which control the library log filtering system:

.. code:: shell

   $ LIBCAMERA_LOG_LEVELS=0 ./simple-cam