Tutorial

Units of measurement

Many of the metadata interfaces provide methods to get or set values with an associated unit of measurement. This ensures that values are always associated with an appropriate unit, allows compile-time or run-time checks that the correct unit type is used, and ensures that any unit conversions performed are legal. The following terminology is used:

dimension
A measured property, for example length, pressure, temperature or time.
unit system
A system of units for a given dimension, for example the SI units for the length dimension are metre and its derived units. For the pressure, temperature and time dimensions, pascal, celsius and second are used respectively, along with any derived units. Multiple systems may be provided for a given dimension, such as Imperial length units, bar or Torr for pressure or Fahrenheit for temperature. Different unit systems for the same dimension will typically be inter-convertible, but this is not a requirement.
unit
A unit of measure within a given unit system, for example cm, µm and nm are all scaled units derived from m.
base unit
The primary unit for a given unit system; all other units are scaled relative to this unit. Automatic conversion between unit systems is defined in conversion of base units. For example, m is the base unit for the SI length unit system.
quantity
A measured value with an associated unit. For example, 3.5 mm.

Model units

The metadata interfaces make use of these unit types, which are based upon the unit enumerations defined by the OME-XML data model (such as UnitsLength, used in the examples below).

The Quantity class is the user-visible part of the units support. It trades compile-time correctness for run-time flexibility. It is templated, specialized for a given unit type enumeration, for example Quantity&lt;UnitsLength&gt; for length quantities. It may represent any valid unit from the enumeration.

A Quantity is constructed using a numerical value and a unit enumeration value. Arithmetic operations are not currently supported.

Model units are also used in the examples below, such as those for the metadata store.

Metadata

OME-Files supports several different classes of metadata, from very basic information about the image dimensions and pixel type to detailed information about the acquisition hardware and experimental parameters. From simplest to most complex, these are:

Core metadata
Basic information describing an individual 5D image (series), including dimension sizes, dimension order and pixel type
Original metadata
Key-value pairs describing metadata from the original file format for the image. Two forms exist: global metadata for an entire dataset (image collection) and series metadata for an individual 5D image
Metadata store
A container for all image metadata providing interfaces to get and set individual metadata values. This is a superset of the core and original metadata content (it can represent all values contained within the core and original metadata). It is an alternative representation of the OME-XML data model objects, and is used by the OME-Files reader and writer interfaces.
OME-XML data model objects
The abstract OME-XML data model is realized as a collection of model objects. Classes are generated from the elements of the OME-XML data model schema, and a tree of the model objects acts as a representation of the OME data model which may be modified and manipulated. The model objects may be created from an OME-XML text document, and vice versa.

For the simplest cases of reading and writing image data, the core metadata interface will likely be sufficient. If specific individual parameters from the original file format are needed, then original metadata may also be useful. For more advanced processing and rendering, the metadata store should be the next source of information, for example to get information about the image scale, stage position and instrument setup (including light sources, light paths, detectors etc.), and to access plate/well information, regions of interest and so on. Direct access to the OME-XML data model objects is an alternative to the metadata store, but is more difficult to use. Certain modifications to the data model may only be made via direct access to the model objects; otherwise, the higher-level metadata store interface should be preferred.

The header file ome/files/MetadataTools.h provides several convenience functions to work with and manipulate the various forms of metadata, including conversion of Core metadata to and from a metadata store.

Core metadata

Core metadata is accessible through the getter methods in the FormatReader interface. These operate on the current series, set using the setSeries() method. The CoreMetadata objects are also accessible directly using the getCoreMetadataList() method; however, the FormatReader interface should be preferred, since the objects themselves are more of an implementation detail at present.

  void
  readMetadata(const FormatReader& reader,
               std::ostream&       stream)
  {
    // Get total number of images (series)
    dimension_size_type ic = reader.getSeriesCount();
    stream << "Image count: " << ic << '\n';

    // Loop over images
    for (dimension_size_type i = 0 ; i < ic; ++i)
      {
        // Change the current series to this index
        reader.setSeries(i);

        // Print image dimensions (for this image index)
        stream << "Dimensions for Image " << i << ':'
               << "\n\tX = " << reader.getSizeX()
               << "\n\tY = " << reader.getSizeY()
               << "\n\tZ = " << reader.getSizeZ()
               << "\n\tT = " << reader.getSizeT()
               << "\n\tC = " << reader.getSizeC()
               << "\n\tEffectiveC = " << reader.getEffectiveSizeC();
        for (dimension_size_type channel = 0;
             channel < reader.getEffectiveSizeC();
             ++channel)
          {
            stream << "\n\tChannel " << channel << ':'
                   << "\n\t\tRGB = " << (reader.isRGB(channel) ? "true" : "false")
                   << "\n\t\tRGBC = " << reader.getRGBChannelCount(channel);
          }
        stream << '\n';

        // Get total number of planes (for this image index)
        dimension_size_type pc = reader.getImageCount();
        stream << "\tPlane count: " << pc << '\n';

        // Loop over planes (for this image index)
        for (dimension_size_type p = 0 ; p < pc; ++p)
          {
            // Print plane position (for this image index and plane
            // index)
            std::array<dimension_size_type, 3> coords =
              reader.getZCTCoords(p);
            stream << "\tPosition of Plane " << p << ':'
                   << "\n\t\tTheZ = " << coords[0]
                   << "\n\t\tTheT = " << coords[2]
                   << "\n\t\tTheC = " << coords[1]
                   << '\n';
          }
      }
  }

If implementing a reader, it is fairly typical to set the basic image metadata in CoreMetadata objects, then use the fillMetadata() function in ome/files/MetadataTools.h to fill the reader’s metadata store with this information, before adding any additional (non-core) metadata as required. When writing an image, a metadata store is required in order to provide the writer with all the metadata needed to write the image. If a metadata store was not already obtained from a reader, fillMetadata() may also be used here to create a suitable one:

    // OME-XML metadata store.
    auto meta = make_shared<OMEXMLMetadata>();

    // Create simple CoreMetadata and use this to set up the OME-XML
    // metadata.  This is purely for convenience in this example; a
    // real writer would typically set up the OME-XML metadata from an
    // existing MetadataRetrieve instance or by hand.
    std::vector<shared_ptr<CoreMetadata>> seriesList;
    shared_ptr<CoreMetadata> core(make_shared<CoreMetadata>());
    core->sizeX = 512U;
    core->sizeY = 512U;
    core->sizeC.clear(); // defaults to 1 channel with 1 sample; clear this
    core->sizeC.push_back(3U); // replace with single RGB channel
    core->pixelType = ome::xml::model::enums::PixelType::UINT16;
    core->interleaved = false;
    core->bitsPerPixel = 12U;
    core->dimensionOrder = DimensionOrder::XYZTC;
    seriesList.push_back(core);
    seriesList.push_back(core); // add two identical series

    fillMetadata(*meta, seriesList);

Full example source: metadata-formatreader.cpp

Original metadata

Original metadata is stored in two forms. The first is a MetadataMap, accessible through the FormatReader interface, which offers access to individual keys and to the whole map for both global and series metadata. The second is within the metadata store, where original metadata is stored as an XMLAnnotation. The following example demonstrates access to the global and series metadata maps using the FormatReader interface:

  void
  readOriginalMetadata(const FormatReader& reader,
                       std::ostream&       stream)
  {
    // Get total number of images (series)
    dimension_size_type ic = reader.getSeriesCount();
    stream << "Image count: " << ic << '\n';

    // Get global metadata
    const MetadataMap& global = reader.getGlobalMetadata();

    // Print global metadata
    stream << "Global metadata:\n" << global << '\n';

    // Loop over images
    for (dimension_size_type i = 0 ; i < ic; ++i)
      {
        // Change the current series to this index
        reader.setSeries(i);

        // Get series metadata
        const MetadataMap& series = reader.getSeriesMetadata();

        // Print series metadata (for this image index)
        stream << "Metadata for Image " << i << ":\n"
               << series
               << '\n';
      }
  }

It would also be possible to use getMetadataValue() and getSeriesMetadataValue() to obtain values for individual keys. Note that the MetadataMap values can be scalar values or lists of scalar values; call the flatten() method to split the lists into separate key-value pairs with a numbered suffix.

Full example source: metadata-formatreader.cpp

Metadata store

Access to metadata is provided via the MetadataStore and MetadataRetrieve interfaces. These provide setters and getters, respectively, to store and retrieve metadata to and from an underlying abstract metadata store. The primary store is the OMEXMLMetadata which stores the metadata in OME-XML data model objects (see below), and implements both interfaces. However, other storage classes are available, and may be used to filter the stored metadata, combine different stores, or do nothing at all. Additional storage backends could also be implemented, for example to allow metadata retrieval from a relational database, or JSON/YAML.

When using OMEXMLMetadata, the convenience function createOMEXMLMetadata() is the recommended way to create a new instance and fill it with the content of an OME-XML document. It is overloaded to allow the OME-XML to be obtained from various sources. For example, from a file:

    // Create metadata directly from file
    shared_ptr<meta::OMEXMLMetadata> filemeta(createOMEXMLMetadata(filename));

Alternatively from a DOM tree:

#ifdef OME_HAVE_XERCES_DOM
    // XML platform (required by Xerces)
    ome::common::xml::Platform xmlplat;
#endif
    // XML DOM tree containing parsed file content
    ome::xml::DOMDocument inputdoc(ome::xml::createDocument(filename));
    // Create metadata from DOM document
    shared_ptr<meta::OMEXMLMetadata> dommeta(createOMEXMLMetadata(inputdoc));

The convenience function getOMEXML() may be used to reverse the process, i.e. obtain an OME-XML document from the store. Note the use of convert(). Only the OMEXMLMetadata class can dump an OME-XML document, therefore if the source of the data is another class implementing the MetadataRetrieve interface, the stored data will need to be copied into an OMEXMLMetadata instance first.

    meta::OMEXMLMetadata *omexmlmeta = dynamic_cast<meta::OMEXMLMetadata *>(&meta);
    shared_ptr<meta::OMEXMLMetadata> convertmeta;
    if (!omexmlmeta)
      {
        convertmeta = make_shared<meta::OMEXMLMetadata>();
        meta::convert(meta, *convertmeta);
        omexmlmeta = &*convertmeta;
      }
    // Get OME-XML text from metadata store (and validate it)
    bool validate = true;
#ifdef OME_HAVE_QT6_DOM
    validate = false;
#endif
    std::string omexml(getOMEXML(*omexmlmeta, validate));

Conceptually, the metadata store contains lists of objects, accessed by index (insertion order). In the example below, the getImageCount() method is used to find the number of images, which is then used to safely loop through each of the available images. Each of the getPixelsSize methods (getPixelsSizeX(), getPixelsSizeY() and so on) takes the image index as its only argument. Internally, this is used to find the Image model object for the specified index, call the corresponding getSize method on that object, and return the result.

Since objects can contain other objects, some accessor methods require more than one index. For example, an Image object can contain multiple Plane objects. As above, there is a getPlaneCount() method; however, since a Plane is contained by an Image, it takes an additional image index argument to get the plane count for the specified image. Likewise its accessors, such as getPlaneTheZ(), take two arguments: the image index and the plane index. Internally, these indices are used to find the Image, then the Plane, and then call getTheZ(). When using the MetadataRetrieve interface with an OMEXMLMetadata store, the methods are simply a shorthand for navigating the tree of model objects.

  void
  queryMetadata(const meta::MetadataRetrieve& meta,
                const std::string&            state,
                std::ostream&                 stream)
  {
    // Get total number of images (series)
    index_type ic = meta.getImageCount();
    stream << "Image count: " << ic << '\n';

    // Loop over images
    for (index_type i = 0 ; i < ic; ++i)
      {
        // Print image dimensions (for this image index)
        stream << "Dimensions for Image " << i << ' ' << state << ':'
               << "\n\tX = " << meta.getPixelsSizeX(i)
               << "\n\tY = " << meta.getPixelsSizeY(i)
               << "\n\tZ = " << meta.getPixelsSizeZ(i)
               << "\n\tT = " << meta.getPixelsSizeT(i)
               << "\n\tC = " << meta.getPixelsSizeC(i);
        // These are optional, so handle failure gracefully.
        try
          {
            stream << "\n\tPhysicalX = " << meta.getPixelsPhysicalSizeX(i);
          }
        catch (const meta::MetadataException&)
          {
          }
        try
          {
            stream << "\n\tPhysicalY = " << meta.getPixelsPhysicalSizeY(i);
          }
        catch (const meta::MetadataException&)
          {
          }
        try
          {
            stream << "\n\tPhysicalZ = " << meta.getPixelsPhysicalSizeZ(i);
          }
        catch (const meta::MetadataException&)
          {
          }
        stream << '\n';

        // Get total number of planes (for this image index)
        index_type pc = meta.getPlaneCount(i);
        stream << "\tPlane count: " << pc << '\n';

        // Loop over planes (for this image index)
        for (index_type p = 0 ; p < pc; ++p)
          {
            // Print plane position (for this image index and plane
            // index)
            stream << "\tPosition of Plane " << p << ':'
                   << "\n\t\tTheZ = " << meta.getPlaneTheZ(i, p)
                   << "\n\t\tTheT = " << meta.getPlaneTheT(i, p)
                   << "\n\t\tTheC = " << meta.getPlaneTheC(i, p)
                   << '\n';
          }
      }
  }

The methods for storing data using the MetadataStore interface are similar. The set methods use the same indices as the get methods, with the value to set as an additional initial argument. The following example demonstrates how to update dimension sizes for images in the store:

  void
  updateMetadata(meta::Metadata& meta)
  {
    // Get total number of images (series)
    index_type ic = meta.getImageCount();

    // Loop over images
    for (index_type i = 0 ; i < ic; ++i)
      {
        // Change image dimensions (for this image index)
        meta.setPixelsSizeX(12, i);
        meta.setPixelsSizeY(24, i);
        meta.setPixelsSizeZ(6, i);
        meta.setPixelsSizeT(30, i);
        meta.setPixelsSizeC(4, i);
        meta.setPixelsPhysicalSizeX
          (PositiveLength(118.2, model::enums::UnitsLength::MICROMETER), i);
        meta.setPixelsPhysicalSizeY
          (PositiveLength(118.2, model::enums::UnitsLength::MICROMETER), i);
        meta.setPixelsPhysicalSizeZ
          (PositiveLength(26.5, model::enums::UnitsLength::MICROMETER), i);
      }
  }

When adding new objects to the store, as opposed to updating existing ones, some additional considerations apply. A new object is added to the store if the object corresponding to an index does not exist and the index is equal to the current object count (i.e. one past the last valid index). Note that for data model objects with a setID() method, this method alone triggers insertion and must be called first, before any other methods which modify the object. The following example demonstrates the addition of a new Image to the store, plus its contained Plane objects.

  void
  addMetadata(meta::Metadata& meta)
  {
    // Get total number of images (series)
    index_type i = meta.getImageCount();

    // Size of Z, T and C dimensions
    index_type nz = 3;
    index_type nt = 1;
    index_type nc = 4;

    // Create new image; the image index is the same as the image
    // count, i.e. one past the end of the current limit; createID
    // creates a unique identifier for the image
    meta.setImageID(createID("Image", i), i);
    // Set Pixels identifier using createID and the same image index
    meta.setPixelsID(createID("Pixels", i), i);
    // Now set the dimension order, pixel type and dimension sizes for
    // this image, using the same image index
    meta.setPixelsDimensionOrder(model::enums::DimensionOrder::XYZTC, i);
    meta.setPixelsType(model::enums::PixelType::UINT8, i);
    meta.setPixelsSizeX(256, i);
    meta.setPixelsSizeY(256, i);
    meta.setPixelsSizeZ(nz, i);
    meta.setPixelsSizeT(nt, i);
    meta.setPixelsSizeC(nc, i);

    // Plane count
    index_type pc = nz * nc * nt;

    // Loop over planes
    for(index_type p = 0; p < pc; ++p)
      {
        // Get the Z, T and C coordinate for this plane index
        array<dimension_size_type, 3> coord =
          getZCTCoords("XYZTC", nz, nc, nt, pc, p);

        // Set the plane position using the image index and plane
        // index to reference the correct plane
        meta.setPlaneTheZ(coord[0], i, p);
        meta.setPlaneTheT(coord[2], i, p);
        meta.setPlaneTheC(coord[1], i, p);
      }

    // Add MetadataOnly to Pixels since this is an example without
    // TiffData or BinData
    meta::OMEXMLMetadata *omexmlmeta = dynamic_cast<meta::OMEXMLMetadata *>(&meta);
    if (omexmlmeta)
      addMetadataOnly(*omexmlmeta, i);
  }

In addition to this basic metadata, it is possible to create and modify extended metadata elements. In the following example, we describe the setup of the microscope during acquisition, including its objective and detector parameters. Only a few parameters are set here; it is possible to completely describe the instrument configuration, including the settings on a per-image and per-channel basis if they vary during the course of acquisition.

    // There is one image with one channel in this image.
    MetadataStore::index_type image_idx = 0;
    MetadataStore::index_type channel_idx = 0;
    MetadataStore::index_type annotation_idx = 0;

    // Create an Instrument.
    MetadataStore::index_type instrument_idx = 0;
    std::string instrument_id = createID("Instrument", instrument_idx);
    store->setInstrumentID(instrument_id, instrument_idx);

    // Create an Objective for this Instrument.
    MetadataStore::index_type objective_idx = 0;
    std::string objective_id = createID("Objective",
                                        instrument_idx, objective_idx);
    store->setObjectiveID(objective_id, instrument_idx, objective_idx);
    store->setObjectiveManufacturer("InterFocal", instrument_idx, objective_idx);
    store->setObjectiveNominalMagnification(40, instrument_idx, objective_idx);
    store->setObjectiveLensNA(0.4, instrument_idx, objective_idx);
    store->setObjectiveImmersion(Immersion::OIL, instrument_idx, objective_idx);
    store->setObjectiveWorkingDistance({0.34, UnitsLength::MILLIMETER},
                                       instrument_idx, objective_idx);

    // Create a Detector for this Instrument.
    MetadataStore::index_type detector_idx = 0;
    std::string detector_id = createID("Detector", instrument_idx, detector_idx);
    store->setDetectorID(detector_id, instrument_idx, detector_idx);
    store->setDetectorManufacturer("MegaCapture", instrument_idx, detector_idx);
    store->setDetectorType(DetectorType::CCD, instrument_idx, detector_idx);

    // Create Settings for this Detector for the Channel on the Image.
    store->setDetectorSettingsID(detector_id, image_idx, channel_idx);
    store->setDetectorSettingsBinning(Binning::TWOBYTWO, image_idx, channel_idx);
    store->setDetectorSettingsGain(56.89, image_idx, channel_idx);

If the existing data model elements and attributes are insufficient for describing the complexity of your hardware or experimental setup, it is possible to extend it with custom annotations. These annotations exist globally, but may be referenced by a model element where needed, and may be referenced by multiple model elements if required. In the following example, we create and attach an annotation to the Detector element, and then create and attach two annotations to the first Image element.

    // Create a MapAnnotation.
    MetadataStore::index_type map_annotation_idx = 0;
    std::string annotation_id = createID("Annotation", annotation_idx);
    store->setMapAnnotationID(annotation_id, map_annotation_idx);
    store->setMapAnnotationNamespace
      ("https://microscopy.example.com/colour-balance", map_annotation_idx);
    ome::xml::model::primitives::StringPairList map;
    map.push_back({"white-balance", "5,15,8"});
    map.push_back({"black-balance", "112,140,126"});
    store->setMapAnnotationValue(map, map_annotation_idx);

    // Link MapAnnotation to Detector.
    MetadataStore::index_type detector_ref_idx = 0;
    store->setDetectorAnnotationRef(annotation_id, instrument_idx, detector_idx,
                                    detector_ref_idx);

    // Create a LongAnnotation.
    ++annotation_idx;
    MetadataStore::index_type long_annotation_idx = 0;
    annotation_id = createID("Annotation", annotation_idx);
    store->setLongAnnotationID(annotation_id, long_annotation_idx);
    store->setLongAnnotationValue(239423, long_annotation_idx);
    store->setLongAnnotationNamespace
      ("https://microscopy.example.com/trigger-delay", long_annotation_idx);

    // Link LongAnnotation to Image.
    MetadataStore::index_type image_ref_idx = 0;
    store->setImageAnnotationRef(annotation_id, image_idx, image_ref_idx);

    // Create a second LongAnnotation.
    ++annotation_idx;
    ++long_annotation_idx;
    annotation_id = createID("Annotation", annotation_idx);
    store->setLongAnnotationID(annotation_id, long_annotation_idx);
    store->setLongAnnotationValue(934223, long_annotation_idx);
    store->setLongAnnotationNamespace
      ("https://microscopy.example.com/sample-number", long_annotation_idx);

    // Link second LongAnnotation to Image.
    ++image_ref_idx;
    store->setImageAnnotationRef(annotation_id, image_idx, image_ref_idx);

    // Update all the annotation cross-references.
    store->resolveReferences();

Full example source: metadata-io.cpp and metadata-formatwriter.cpp

OME-XML data model objects

The data model objects are not typically used directly, but are created, modified and queried using the Metadata interfaces (above), so in practice these examples should not be needed.

To create a tree of OME-XML data model objects from OME-XML text:

    // XML DOM tree containing parsed file content
    ome::xml::DOMDocument inputdoc(ome::xml::createDocument(filename));
    // OME Model (needed only during parsing to track model object references)
    model::detail::OMEModel model;
    // OME Model root object
    shared_ptr<model::OME> modelroot(make_shared<model::OME>());
    // Fill OME model object tree from XML DOM tree
#ifdef OME_HAVE_XERCES_DOM
    modelroot->update(inputdoc.getDocumentElement(), model);
#elif defined(OME_HAVE_QT6_DOM) || defined(OME_HAVE_QT5_DOM)
    modelroot->update(inputdoc.documentElement(), model);
#endif

In this example, the OME-XML text is read from a file into a DOM tree. This could have been read directly from a string or stream if the source was not a file. The DOM tree is then processed using the OME root object’s update() method, which uses the data from the DOM tree elements to create a tree of corresponding model objects contained by the root object.

To reverse the process, taking a tree of OME-XML model objects and converting it back to OME-XML text:

    // Schema version to use
    const std::string schema("http://www.openmicroscopy.org/Schemas/OME/2016-06");
    // XML DOM tree (initially containing an empty OME root element)
#ifdef OME_HAVE_XERCES_DOM
    ome::xml::DOMDocument outputdoc(ome::common::xml::dom::createEmptyDocument(schema, "OME"));
#elif defined(OME_HAVE_QT6_DOM) || defined(OME_HAVE_QT5_DOM)
    ome::xml::DOMDocument outputdoc;
    ome::xml::DOMElement docroot = outputdoc.createElementNS(QString::fromUtf8(schema.c_str()),
                                                             QString::fromUtf8("OME"));
    outputdoc.appendChild(docroot);
#endif
    // Fill output DOM document from OME-XML model
    modelroot->asXMLElement(outputdoc);
    // Dump DOM tree as text to stream
#ifdef OME_HAVE_XERCES_DOM
    ome::common::xml::dom::writeDocument(outputdoc, stream);
#elif defined(OME_HAVE_QT6_DOM) || defined(OME_HAVE_QT5_DOM)
    std::string text;
    text = outputdoc.toString().toStdString();
    stream << text;
#endif

Here, the OME root object’s asXMLElement() method is used to copy the data from the OME root object and its children into an XML DOM tree. The DOM tree is then converted to text for output.

As shown previously for the MetadataStore API, it is also possible to create and modify extended metadata elements using the model objects directly. The following example demonstrates the setup of the microscope during acquisition, including its objective and detector parameters, to achieve the same effect as in the example above.

    // Get root OME object
    std::shared_ptr<ome::xml::meta::OMEXMLMetadataRoot>
      root(std::dynamic_pointer_cast<ome::xml::meta::OMEXMLMetadataRoot>
           (store->getRoot()));
    if (!root)
      return;

    MetadataStore::index_type annotation_idx = 0;

    // Create an Instrument.
    auto instrument = make_shared<ome::xml::model::Instrument>();
    instrument->setID(createID("Instrument", 0));
    root->addInstrument(instrument);

    // Create an Objective for this Instrument.
    auto objective = make_shared<ome::xml::model::Objective>();
    objective->setID(createID("Objective", 0));
    auto objective_manufacturer = std::make_shared<std::string>("InterFocal");
    objective->setManufacturer(objective_manufacturer);
    auto objective_nominal_mag = std::make_shared<double>(40.0);
    objective->setNominalMagnification(objective_nominal_mag);
    auto objective_na = std::make_shared<double>(0.4);
    objective->setLensNA(objective_na);
    auto objective_immersion = std::make_shared<Immersion>(Immersion::OIL);
    objective->setImmersion(objective_immersion);
    auto objective_wd = std::make_shared<Quantity<UnitsLength>>
      (0.34, UnitsLength::MILLIMETER);
    objective->setWorkingDistance(objective_wd);
    instrument->addObjective(objective);

    // Create a Detector for this Instrument.
    auto detector = make_shared<ome::xml::model::Detector>();
    std::string detector_id = createID("Detector", 0);
    detector->setID(detector_id);
    auto detector_manufacturer = std::make_shared<std::string>("MegaCapture");
    detector->setManufacturer(detector_manufacturer);
    auto detector_type = std::make_shared<DetectorType>(DetectorType::CCD);
    detector->setType(detector_type);
    instrument->addDetector(detector);

    // Get Image and Channel for future use.  Note for your own code,
    // you should check that the elements exist before accessing them;
    // here we know they are valid because we created them above.
    auto image = root->getImage(0);
    auto pixels = image->getPixels();
    auto channel0 = pixels->getChannel(0);
    auto channel1 = pixels->getChannel(1);
    auto channel2 = pixels->getChannel(2);

    // Create Settings for this Detector for each Channel on the Image.
    auto detector_settings0 = make_shared<ome::xml::model::DetectorSettings>();
    {
      detector_settings0->setID(detector_id);
      auto detector_binning = std::make_shared<Binning>(Binning::TWOBYTWO);
      detector_settings0->setBinning(detector_binning);
      auto detector_gain = std::make_shared<double>(83.81);
      detector_settings0->setGain(detector_gain);
      channel0->setDetectorSettings(detector_settings0);
    }

    auto detector_settings1 = make_shared<ome::xml::model::DetectorSettings>();
    {
      detector_settings1->setID(detector_id);
      auto detector_binning = std::make_shared<Binning>(Binning::TWOBYTWO);
      detector_settings1->setBinning(detector_binning);
      auto detector_gain = std::make_shared<double>(56.89);
      detector_settings1->setGain(detector_gain);
      channel1->setDetectorSettings(detector_settings1);
    }

    auto detector_settings2 = make_shared<ome::xml::model::DetectorSettings>();
    {
      detector_settings2->setID(detector_id);
      auto detector_binning = std::make_shared<Binning>(Binning::FOURBYFOUR);
      detector_settings2->setBinning(detector_binning);
      auto detector_gain = std::make_shared<double>(12.93);
      detector_settings2->setGain(detector_gain);
      channel2->setDetectorSettings(detector_settings2);
    }

Creating annotations and linking them to model objects is also possible using model objects directly:

    // Add Structured Annotations.
    auto sa = std::make_shared<ome::xml::model::StructuredAnnotations>();
    root->setStructuredAnnotations(sa);

    // Create a MapAnnotation.
    auto map_ann0 = std::make_shared<ome::xml::model::MapAnnotation>();
    std::string annotation_id = createID("Annotation", annotation_idx);
    map_ann0->setID(annotation_id);
    auto map_ann0_ns = std::make_shared<std::string>
      ("https://microscopy.example.com/colour-balance");
    map_ann0->setNamespace(map_ann0_ns);
    ome::xml::model::primitives::StringPairList map;
    map.push_back({"white-balance", "5,15,8"});
    map.push_back({"black-balance", "112,140,126"});
    map_ann0->setValue(map);
    sa->addMapAnnotation(map_ann0);

    // Link MapAnnotation to Detector.
    detector->linkAnnotation(map_ann0);

    // Create a LongAnnotation.
    auto long_ann0 = std::make_shared<ome::xml::model::LongAnnotation>();
    ++annotation_idx;
    annotation_id = createID("Annotation", annotation_idx);
    long_ann0->setID(annotation_id);
    auto long_ann0_ns = std::make_shared<std::string>
      ("https://microscopy.example.com/trigger-delay");
    long_ann0->setNamespace(long_ann0_ns);
    long_ann0->setValue(239423);
    sa->addLongAnnotation(long_ann0);

    // Link LongAnnotation to Image.
    image->linkAnnotation(long_ann0);

    // Create a second LongAnnotation.
    auto long_ann1 = std::make_shared<ome::xml::model::LongAnnotation>();
    ++annotation_idx;
    annotation_id = createID("Annotation", annotation_idx);
    long_ann1->setID(annotation_id);
    auto long_ann1_ns = std::make_shared<std::string>
      ("https://microscopy.example.com/sample-number");
    long_ann1->setNamespace(long_ann1_ns);
    long_ann1->setValue(934223);
    sa->addLongAnnotation(long_ann1);

    // Link second LongAnnotation to Image.
    image->linkAnnotation(long_ann1);

Full example source: model-io.cpp and metadata-formatwriter2.cpp

Pixel data

The Bio-Formats Java implementation stores and passes pixel values in a raw byte array. Due to limitations with C++ array passing, this was not possible for the OME-Files C++ implementation. While a vector or other container could have been used, several problems would remain: the type and endianness of the data in the raw bytes are not known, and the dimension ordering and dimension extents are also unknown, which imposes a significant burden on the programmer to correctly process the data. The C++ implementation provides two types to solve these problems.

The PixelBuffer class is a container of pixel data. It is a template class, templated on the pixel type in use. The class contains the order of the dimensions, and the size of each dimension, making it possible to process pixel data without need for externally-provided metadata to describe its structure. This class may be used to contain and process pixel data of a specific pixel type. Internally, the pixel data is contained within a boost::multi_array as a 9D hyper-volume, though its usage in this release of OME-Files is limited to 5D. The class can either contain its own memory allocation for pixel data, or it can reference memory allocated or mapped externally, allowing use with memory-mapped data, for example.

In many situations, it is desirable to work with arbitrary pixel types, or at least the set of pixel types defined in the OME data model in its PixelType enumeration. The VariantPixelBuffer fulfills this need, using ome::compat::variant to allow it to contain a PixelBuffer specialized for any of the pixel types in the OME data model. This is used to allow transfer and processing of any supported pixel type, for example by the FormatReader class’ getLookupTable() and openBytes() methods, and the corresponding FormatWriter class’ setLookupTable() and saveBytes() methods.

An additional problem with supporting many different pixel types is that each operation upon the pixel data, for example for display or analysis, may need to be implemented separately for each pixel type. This imposes a significant testing and maintenance burden. VariantPixelBuffer solves this problem through use of ome::compat::visit() and static visitor classes, which allow algorithms to be defined in a template and compiled for each pixel type. They also allow algorithms to be specialized for different classes of pixel type, for example signed vs. unsigned, integer vs. floating point, or simple vs. complex, or special-cased per type, e.g. for bitmasks. When ome::compat::visit() is called with a specified algorithm and a VariantPixelBuffer object, it will select the matching algorithm for the pixel type contained within the buffer, and then invoke it on the buffer. This permits the programmer to support arbitrary pixel types without unnecessary code duplication, and without creating a maintenance nightmare.

The 9D pixel buffer makes a distinction between the logical dimension order (used by the API) and the storage order (the layout of the pixel data in memory). The logical order is defined by the values in the Dimensions enum. The storage order is specified by the programmer when creating a pixel buffer.

The following example shows creation of a pixel buffer with a defined size, and default storage order:

    // Language type for FLOAT pixel data
    typedef PixelProperties<PixelType::FLOAT>::std_type float_pixel_type;
    // Create PixelBuffer for floating point data
    // X=512 Y=512 Z=16 S=1
    PixelBuffer<float_pixel_type> buffer
      (boost::extents[512][512][16][1], PixelType::FLOAT);

The storage order may be set explicitly. The order may be created by hand, or with a helper function. While the helper function is limited to supporting the ordering defined by the data model, specifying the order by hand allows additional flexibility. Manual ordering may be used to allow the indexing for individual dimensions to run backward rather than forward, which is useful if the Y-axis requires inverting, for example. The following example shows creation of two pixel buffers with defined storage order using the helper function:

    // Language type for UINT16 pixel data
    typedef PixelProperties<PixelType::UINT16>::std_type uint16_pixel_type;
    // Storage order is XYZS; samples are not interleaved
    // ("planar") after XYZ
    PixelBufferBase::storage_order_type order1
      (PixelBufferBase::make_storage_order(false));
    // Create PixelBuffer for unsigned 16-bit data with specified
    // storage order
    // X=512 Y=512 Z=16 S=1
    PixelBuffer<uint16_pixel_type> buffer1
      (boost::extents[512][512][16][1],
       PixelType::UINT16,
       order1);

    // Language type for INT8 pixel data
    typedef PixelProperties<PixelType::INT8>::std_type int8_pixel_type;
    // Storage order is SXYZ; samples are interleaved
    // ("chunky") before XYZ
    PixelBufferBase::storage_order_type order2
      (PixelBufferBase::make_storage_order(true));
    // Create PixelBuffer for signed 8-bit RGB data with specified storage
    // order
    // X=1024 Y=1024 Z=1 S=3
    PixelBuffer<int8_pixel_type> buffer2
      (boost::extents[1024][1024][1][3],
       PixelType::INT8,
       order2);

Note that the logical order of the dimension extents is unchanged.

Sometimes it may be necessary to change the storage order of data, for example to give it the appropriate structure to pass to another library with specific ordering requirements. This can be done by a simple assignment between two buffers having a different storage order; the dimension extents must be of the same size for the buffers to be compatible. The following example demonstrates conversion of planar data to contiguous:

    // Language type for FLOAT pixel data
    typedef PixelProperties<PixelType::FLOAT>::std_type float_pixel_type;
    // Storage order is XYZS; samples are not interleaved
    // ("planar") after XYZ
    PixelBufferBase::storage_order_type planar_order
      (PixelBufferBase::make_storage_order(false));
    // Storage order is SXYZ; samples are interleaved ("chunky" or
    // "contiguous") before XYZ
    PixelBufferBase::storage_order_type contiguous_order
      (PixelBufferBase::make_storage_order(true));

    // Create PixelBuffer for float data with planar order
    // X=512 Y=512 Z=16 S=1
    PixelBuffer<float_pixel_type> planar_buffer
      (boost::extents[512][512][16][1],
       PixelType::FLOAT,
       planar_order);

    // Create PixelBuffer for float data with contiguous order
    // X=512 Y=512 Z=16 S=1
    PixelBuffer<float_pixel_type> contiguous_buffer
      (boost::extents[512][512][16][1],
       PixelType::FLOAT,
       contiguous_order);

    // Transfer the pixel data from the planar buffer to the
    // contiguous buffer; the pixel data will be reordered
    // appropriately during the transfer.
    contiguous_buffer = planar_buffer;

In-place conversion is not yet supported.

In practice, it is unlikely that you will need to create any PixelBuffer objects directly. The FormatReader and FormatWriter interfaces use VariantPixelBuffer objects, and in the case of the reader interface the getLookupTable() and openBytes() methods can be passed a default-constructed VariantPixelBuffer, which will be set up automatically: the image dimensions, dimension order and pixel type are changed to match the data being fetched if they do not already match. For example, to read all pixel data in an image using openBytes():

  void
  readPixelData(const FormatReader& reader,
                std::ostream&       stream)
  {
    // Get total number of images (series)
    dimension_size_type ic = reader.getSeriesCount();
    stream << "Image count: " << ic << '\n';

    // Loop over images
    for (dimension_size_type i = 0 ; i < ic; ++i)
      {
        // Change the current series to this index
        reader.setSeries(i);

        // Get total number of planes (for this image index)
        dimension_size_type pc = reader.getImageCount();
        stream << "\tPlane count: " << pc << '\n';

        // Pixel buffer
        VariantPixelBuffer buf;

        // Loop over planes (for this image index)
        for (dimension_size_type p = 0 ; p < pc; ++p)
          {
            // Read the entire plane into the pixel buffer.
            reader.openBytes(p, buf);

            // If this wasn't an example, we would do something
            // exciting with the pixel data here.
            stream << "Pixel data for Image " << i
                   << " Plane " << p << " contains "
                   << buf.num_elements() << " pixels\n";
          }
      }
  }

To perform the reverse process, writing pixel data with saveBytes():

    // Total number of images (series)
    dimension_size_type ic = writer.getMetadataRetrieve()->getImageCount();
    stream << "Image count: " << ic << '\n';

    // Loop over images
    for (dimension_size_type i = 0 ; i < ic; ++i)
      {
        // Change the current series to this index
        writer.setSeries(i);

        // Total number of planes.
        dimension_size_type pc = 1U;
        pc *= writer.getMetadataRetrieve()->getPixelsSizeZ(i);
        pc *= writer.getMetadataRetrieve()->getPixelsSizeT(i);
        pc *= writer.getMetadataRetrieve()->getChannelCount(i);
        stream << "\tPlane count: " << pc << '\n';

        // Loop over planes (for this image index)
        for (dimension_size_type p = 0 ; p < pc; ++p)
          {
            // Pixel buffer; size 512 × 512 with 3 samples of type
            // uint16_t.  It has a storage order of XYZS without
            // interleaving (samples are planar).
            shared_ptr<PixelBuffer<PixelProperties<PixelType::UINT16>::std_type>>
              buffer(make_shared<PixelBuffer<PixelProperties<PixelType::UINT16>::std_type>>
                     (boost::extents[512][512][1][3],
                      PixelType::UINT16,
                      PixelBufferBase::make_storage_order(false)));

            // Fill each R, G or B sample with a different intensity
            // ramp in the 12-bit range.  In a real program, the pixel
            // data would typically be obtained from data acquisition
            // or another image.
            for (dimension_size_type x = 0; x < 512; ++x)
              for (dimension_size_type y = 0; y < 512; ++y)
                {
                  PixelBufferBase::indices_type idx;
                  std::fill(idx.begin(), idx.end(), 0);
                  idx[DIM_SPATIAL_X] = x;
                  idx[DIM_SPATIAL_Y] = y;

                  idx[DIM_SAMPLE] = 0;
                  buffer->at(idx) = (static_cast<float>(x) / 512.0f) * 4096.0f;
                  idx[DIM_SAMPLE] = 1;
                  buffer->at(idx) = (static_cast<float>(y) / 512.0f) * 4096.0f;
                  idx[DIM_SAMPLE] = 2;
                  buffer->at(idx) = (static_cast<float>(x+y) / 1024.0f) * 4096.0f;
                }

            VariantPixelBuffer vbuffer(buffer);
            stream << "PixelBuffer PixelType is " << buffer->pixelType() << '\n';
            stream << "VariantPixelBuffer PixelType is " << vbuffer.pixelType() << '\n';
            stream << std::flush;

            // Write the entire pixel buffer to the plane.
            writer.saveBytes(p, vbuffer);

            stream << "Wrote " << buffer->num_elements() << ' '
                   << buffer->pixelType() << " pixels\n";
          }
      }

Both buffer classes provide access to the underlying pixel data so that it may be read, manipulated and passed elsewhere. The PixelBuffer class provides an at method, which allows access to individual pixel values using a 9D coordinate:

    // Set all pixel values for Z=2 and S=1 to 0.5
    // 9D index; explicitly zero the unused dimensions
    PixelBuffer<float_pixel_type>::indices_type idx;
    std::fill(idx.begin(), idx.end(), 0);
    // Set Z and S indices
    idx[ome::files::DIM_SPATIAL_Z] = 2;
    idx[ome::files::DIM_SAMPLE] = 1;

    for (uint16_t x = 0; x < 512; ++x)
      {
        idx[ome::files::DIM_SPATIAL_X] = x;
        for (uint16_t y = 0; y < 512; ++y)
          {
            idx[ome::files::DIM_SPATIAL_Y] = y;
            buffer.at(idx) = 0.5f;
          }
      }

Conceptually, this is the same as using an index for a normal 1D array, but extended to use an array of nine indices for each of the nine dimensions, in the logical storage order. The VariantPixelBuffer does not provide an at method for efficiency reasons. Instead, visitors should be used for the processing of bulk pixel data. For example, this is one way the minimum and maximum pixel values could be obtained:

  // Visitor to compute min and max pixel value for pixel buffer of
  // any pixel type
  // result_type is the return type required of the operator()
  // methods, and is the type returned by the visit call
  struct MinMaxVisitor
  {
    // The min and max values will be returned in a pair.  double is
    // used since it can contain the value for any pixel type
    typedef std::pair<double, double> result_type;

    // Get min and max for any non-complex pixel type
    template<typename T>
    result_type
    operator() (const T& v)
    {
      typedef typename T::element_type::value_type value_type;

      value_type *min = std::min_element(v->data(),
                                         v->data() + v->num_elements());
      value_type *max = std::max_element(v->data(),
                                         v->data() + v->num_elements());

      return result_type(static_cast<double>(*min),
                         static_cast<double>(*max));
    }

    // Less than comparison for real part of complex numbers
    template <typename T>
    static bool
    complex_real_less(const T& lhs, const T& rhs)
    {
      return std::real(lhs) < std::real(rhs);
    }

    // Greater than comparison for real part of complex numbers
    template <typename T>
    static bool
    complex_real_greater(const T& lhs, const T& rhs)
    {
      return std::real(lhs) > std::real(rhs);
    }

    // Get min and max for complex pixel types (COMPLEXFLOAT and
    // COMPLEXDOUBLE)
    // This is the same as for simple pixel types, except for the
    // addition of custom comparison functions and conversion of the
    // result to the real part.
    template <typename T>
    typename boost::enable_if_c<
      boost::is_complex<T>::value, result_type
      >::type
    operator() (const std::shared_ptr<PixelBuffer<T>>& v)
    {
      typedef T value_type;

      value_type *min = std::min_element(v->data(),
                                         v->data() + v->num_elements(),
                                         complex_real_less<T>);
      value_type *max = std::max_element(v->data(),
                                         v->data() + v->num_elements(),
                                         complex_real_greater<T>);

      return result_type(static_cast<double>(std::real(*min)),
                         static_cast<double>(std::real(*max)));
    }
  };

  void
  applyVariant()
  {
    // Make variant buffer (int32, 16×16 single plane)
    VariantPixelBuffer variant(boost::extents[16][16][1][1],
                               PixelType::INT32);
    // Get buffer size
    VariantPixelBuffer::size_type size = variant.num_elements();
    // Create sample random-ish data
    std::vector<int32_t> vec;
    for (VariantPixelBuffer::size_type i = 0; i < size; ++i)
      {
        int32_t val = static_cast<int32_t>(i + 42);
        vec.push_back(val);
      }
    std::random_device rd;
    std::mt19937 g(rd());
    std::shuffle(vec.begin(), vec.end(), g);
    // Assign sample data to buffer.
    variant.assign(vec.begin(), vec.end());

    // Create and apply visitor
    MinMaxVisitor visitor;
    MinMaxVisitor::result_type result = std::visit(visitor, variant.vbuffer());

    std::cout << "Min is " << result.first
              << ", max is " << result.second << '\n';
  }

This example demonstrates several features:

  • The visitor operators can return values to the caller (for more complex algorithms, the visitor class could use member variables and additional methods)
  • The operator is expanded once for each pixel type
  • The operators can be special-cased for individual pixel types; here we use the SFINAE rule to implement a specialization for an entire category of pixel types (complex numbers), but standard function overloading and templates will also work for more common cases
  • Pixel data can be assigned to the buffer with a single assign() call.

The OME-Files source uses pixel buffer visitors for several purposes, for example to load pixel data into OpenGL textures, which automatically handles pixel format conversion and repacking of pixel data as needed.

While the pixel buffers may appear complex, they permit the OME-Files library to support all pixel types with relative ease, and writing your own visitors will allow your applications to handle multiple pixel types as well. Assignment of one buffer to another will also repack the pixel data if the buffers use different storage orderings (i.e. the logical ordering is used for the copy), which can be useful if you need the pixel data in a defined ordering.

If all you want is access to the raw data, as in the Java API, you are not required to use the above features. Simply use the data() method on the buffer to get a pointer to the raw data. Note that you will need to multiply the buffer size obtained with num_elements() by the size of the pixel type (use bytesPerPixel() or sizeof on the buffer value_type).

Alternatively, it is also possible to access the underlying boost::multi_array using the array() method, if you need access to functionality not wrapped by PixelBuffer.

Full example source: pixeldata.cpp

Reading images

Image reading is performed using the FormatReader interface. This is an abstract reader interface implemented by file-format-specific reader classes. Examples of readers include TIFFReader, which implements reading of Baseline TIFF (optionally with additional ImageJ metadata), and OMETIFFReader, which implements reading of OME-TIFF (TIFF with OME-XML metadata).

Using a reader involves these steps:

  1. Create a reader instance.
  2. Set options to control reader behavior.
  3. Call setId() to specify the image file to read.
  4. Retrieve desired metadata and pixel data.
  5. Close the reader.

These steps are illustrated in this example:

          // Create TIFF reader
          shared_ptr<FormatReader> reader(make_shared<TIFFReader>());

          // Set reader options before opening a file
          reader->setGroupFiles(true);

          // Open the file
          reader->setId(filename);

          // Display series core metadata
          readMetadata(*reader, std::cout);

          // Display global and series original metadata
          readOriginalMetadata(*reader, std::cout);

          // Read pixel data
          readPixelData(*reader, std::cout);

          // Explicitly close reader
          reader->close();

Here we create a reader to read TIFF files, set a reader option (file grouping), and then call setId(). At this point the reader has been set up and initialized, and we can read metadata and pixel data as covered in the preceding sections. You might like to combine this example with the MinMaxVisitor example to make it display the minimum and maximum values for each plane in an image; if you try running the example with TIFF images of different pixel types, it will transparently adapt to any supported pixel type.

Note

Reader option-setting methods may only be called before setId(). Reader state changing and querying methods such as setSeries() and getSeries(), metadata retrieval and pixel data retrieval methods may only be called after setId(). If these constraints are violated, a FormatException will be thrown.

Full example source: metadata-formatreader.cpp

Writing images

Image writing is performed using the FormatWriter interface. This is an abstract writer interface implemented by file-format-specific writer classes. Examples of writers include MinimalTIFFWriter, which implements writing of Baseline TIFF, and OMETIFFWriter, which implements writing of OME-TIFF (TIFF with OME-XML metadata).

Using a writer involves these steps:

  1. Create a writer instance.
  2. Set metadata store to use.
  3. Set options to control writer behavior.
  4. Call setId() to specify the image file to write.
  5. Store pixel data for each plane of each image in the specified dimension order.
  6. Close the writer.

These steps are illustrated in this example:

          // Create minimal metadata for the file to be written.
          auto meta = createMetadata();
          // Add extended metadata.
          addExtendedMetadata(meta);

          // Create TIFF writer
          auto writer = make_shared<OMETIFFWriter>();

          // Set writer options before opening a file
          auto retrieve = static_pointer_cast<MetadataRetrieve>(meta);
          writer->setMetadataRetrieve(retrieve);
          writer->setInterleaved(false);
          writer->setTileSizeX(256);
          writer->setTileSizeY(256);

          // Open the file
          writer->setId(filename);

          // Write pixel data
          writePixelData(*writer, std::cout);

          // Explicitly close writer
          writer->close();

Here we create a writer to write OME-TIFF files, set the metadata store using metadata we create, set writer options (sample interleaving and tile size), and then call setId(). At this point the writer has been set up and initialized, and we can write the pixel data as covered in the preceding sections. Finally we call close() to flush all data.

Note

Metadata store setting and writer option-setting methods may only be called before setId(). Writer state changing and querying methods such as setSeries() and getSeries(), and pixel data storage methods may only be called after setId(). If these constraints are violated, a FormatException will be thrown.

Note

close() should be called explicitly to catch any errors. While this will be called by the destructor, the destructor can’t throw exceptions and any errors will be silently ignored.

Full example source: metadata-formatwriter.cpp

Writing sub-resolution images

Very large images may be accompanied by reduced-resolution copies of the full-resolution image. These are also known as image “pyramids”, with the full-resolution image typically being reduced in size by successive power-of-two reductions. For example, if the full-resolution image measured 65536 × 65536 pixels, the reductions might be 32768 × 32768, 16384 × 16384, 8192 × 8192 and so on. While power-of-two reductions are conventional, power-of-three reductions, or reductions of arbitrary size, are possible. The file format and the writer API place no restrictions upon the possible sizes, except that each reduction must be smaller in at least one of the X or Y dimensions.

Note

Reductions in Z are not currently supported due to the OME data model being plane-based, with TiffData elements referencing planes, and the TIFF SubIFD field only permitting reduction in size of a single plane. This limitation may be lifted with future model and file format changes.

Writing is essentially the same as the Writing images example, above, with a few additional steps. The first step is to set the sub-resolution levels for each series which requires them, by adding them to the metadata store:

    // OME-XML metadata store.
    auto meta = make_shared<OMEXMLMetadata>();

    // Full image size is 2^order.
    constexpr dimension_size_type order = 12U;
    static_assert(order > 7U, "Image size too small to generate sub-resolutions");

    // Create simple CoreMetadata and use this to set up the OME-XML
    // metadata.  This is purely for convenience in this example; a
    // real writer would typically set up the OME-XML metadata from an
    // existing MetadataRetrieve instance or by hand.
    std::vector<shared_ptr<CoreMetadata>> seriesList;
    shared_ptr<CoreMetadata> core(make_shared<CoreMetadata>());
    core->sizeX = std::pow(2UL, order);
    core->sizeY = std::pow(2UL, order);
    core->sizeC.clear(); // Defaults to 1 channel with 1 sample; clear this.
    core->sizeC.push_back(3U); // Replace with single RGB channel (3 samples).
    core->pixelType = ome::xml::model::enums::PixelType::UINT8;
    core->interleaved = true;
    core->bitsPerPixel = 8U;
    core->dimensionOrder = DimensionOrder::XYZTC;
    seriesList.push_back(core);
    seriesList.push_back(core); // Add two identical series.

    fillMetadata(*meta, seriesList);

    // Add sub-resolution levels as power of two reductions.
    std::vector<std::array<dimension_size_type, 3>> levels;
    for (dimension_size_type i = order - 1; i > 7U; --i)
      levels.push_back({static_cast<dimension_size_type>(std::pow(2UL, i)), // X
                        static_cast<dimension_size_type>(std::pow(2UL, i)), // Y
                        1UL});                                              // Z (placeholder)
    for (dimension_size_type s = 0; s < seriesList.size(); ++s)
        ome::files::addResolutions(*meta, s, levels);

In the above example, we compute the list of resolution levels automatically. The addResolutions() helper function adds these to the metadata store as a custom annotation linked to the specified image series. If you need to, you can remove them with the corresponding removeResolutions() function, or retrieve them with the getResolutions() function. There are additional functions to add, remove and get all resolution levels for all series at once.

Note

The resolution annotations will be removed from the metadata store prior to generating the OME-XML to be embedded in the OME-TIFF file being written. This is because these annotations are only used to provide the needed resolution information to the writer, and are not needed for reading since the resolution information is stored directly in the TIFF format as SubIFD fields.

Next, we will add some writer options:

          writer->setInterleaved(true);
          writer->setTileSizeX(256);
          writer->setTileSizeY(256);
          writer->setCompression("Deflate");

These options include interleaving (so that the RGB samples are stored together rather than as separate planes), tiling (to improve random access to big image planes), and compression (to reduce the file size of such a large image). These are all optional, but will generally improve efficiency in both file size and read time.

Lastly, we can call setId() to initialise the writer with all the above options, and then write the pixel data:

          // Total number of images (series)
          dimension_size_type ic = writer->getSeriesCount();

          // Loop over images
          for (dimension_size_type i = 0 ; i < ic; ++i)
            {
              // Change the current series to this index
              writer->setSeries(i);

              // Total number of resolutions for this series
              dimension_size_type rc = writer->getResolutionCount();

              // Loop over resolutions
              for (dimension_size_type r = 0 ; r < rc; ++r)
                {
                  // Change the current resolution to this index
                  writer->setResolution(r);

                  std::cout << "Writing series " << i
                            << " resolution " << r << '\n';

                  // Write a fractal tile-by-tile for this resolution.
                  FractalType ft = (i % 2) ? FractalType::JULIA : FractalType::MANDELBROT;
                  write_fractal(*writer, ft, std::cout);
                }
            }

In the previous example, we used the methods getSeriesCount() and setSeries() to find out the total number of images to write, and to switch between them in ascending order to write out the pixel data associated with each image. In this example, we also make use of getResolutionCount() and setResolution() to write out the reduced-resolution copies of each image. As with setSeries() in the previous example, the resolution levels must also be written in strictly ascending order.

For the purposes of this example, the pixel data written here is a fractal from the Mandelbrot or Julia set, since they can be scaled infinitely and rendered as individual tiles. In this example, we use 16× multisampling to smooth the rendered image and also use multiple threads to generate and write the tiles concurrently, which may be of interest if you wish to write tiled images in parallel. The first image is the Mandelbrot set, the second is from the Julia set:

[Images: the rendered Mandelbrot set (mandelbrot.png) and Julia set (julia.png).]

As the image size is progressively reduced, there will be correspondingly less detail in the image. The lookup tables and constants may be adjusted to alter the images.

Full example source: subresolution.cpp, fractal.cpp, fractal.h