
Missing: Short explanation of the image capture pipeline #212

Open

@CaptainSifff

Hi all,
as a follow-up to today's Data Carpentry onboarding, I feel that an image processing workshop aimed at beginners should contain a short description of the image capture pipeline.
Lacking any pictures myself, I will put a couple of diagrams in here that I found through some Google searches, so licensing may vary...

First, we start with this overview diagram:
(diagram: overview of image capture)

To summarize (somewhat in reverse): a rasterized image can only be captured by an imaging device if the object is properly lit.
To highlight the first couple of issues:

  • Lighting affects the visual perception of an object: light of a particular color can only be captured if the object receives that color and actually reflects it towards the device.
  • All devices in use today rasterize the image into so-called pixels, which define the lowest available resolution. The light intensity captured by a pixel is therefore the integral of the incoming light across that pixel's surface area.
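The "integral over the pixel's surface area" can be sketched numerically: if we pretend a fine grid is the continuous light field, each sensor pixel simply averages the light that falls on it. This is a minimal illustration, assuming a hypothetical 64x64 "scene" and an 8x8 sensor; the names are made up for the example.

```python
import numpy as np

# Hypothetical fine-grained "scene": a smooth intensity gradient on a
# 64x64 grid, standing in for the continuous light field.
scene = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)

# Rasterize onto an 8x8 sensor: each pixel records the mean (a discrete
# stand-in for the integral) of the light falling on its surface area.
px = 64 // 8
pixels = scene.reshape(8, px, 8, px).mean(axis=(1, 3))

print(pixels.shape)  # (8, 8)
```

Note that the averaging destroys any detail finer than one pixel, which is exactly why pixel size defines the lowest available resolution.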

Moving on to the imaging device itself, we can use a diagram like this:
(diagram: in-camera capture)
To summarize: the incoming electromagnetic spectrum has to pass through the lens, and optionally through some filters, in order to reach the sensor, which converts photon counts into intensity levels.

This brings us to the issues that can occur in a device:

  • Assuming no filtering is present, the lens system obviously affects the picture.
  • Intensity levels: every device has a minimum sensitivity and a maximum intensity it can process. A quantization procedure therefore has to be applied to transform intensities into discrete values.
  • Color rasterization: the vast majority of consumer devices use a color filter array (the image shows the so-called Bayer pattern) to generate a color image from a single capture of photon intensities. This means that each pixel only has information about one color channel, and the other channels are obtained via interpolation (demosaicing).
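The quantization step above can be sketched in a few lines. This is a toy model, not any camera's actual processing: the sensitivity bounds `low` and `high` and the function name `quantize` are assumptions for illustration. Intensities outside the device's range saturate, and the remaining range is mapped onto 256 discrete levels.

```python
import numpy as np

# Toy quantization: a hypothetical sensor whose usable range is
# [low, high]; values outside are clipped (under-/over-exposure).
def quantize(intensity, low=0.1, high=0.9, levels=256):
    clipped = np.clip(intensity, low, high)
    scaled = (clipped - low) / (high - low)          # map to [0, 1]
    return np.round(scaled * (levels - 1)).astype(np.uint8)

samples = np.array([0.0, 0.1, 0.5, 0.9, 1.2])
print(quantize(samples))  # values below 0.1 and above 0.9 saturate
```

The same clipping explains why an over-exposed photo loses detail in bright areas: everything above `high` collapses onto the value 255.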

Another diagram showing the entire pipeline

Pictures from here:
[1]
[2]
And then we are at the point where a camera chooses a file format.
File formats are chosen to enhance transferability within a certain application. If you're looking at an image gathered by an expensive microscope using a program supplied by the vendor, chances are that the vendor has a file format for this particular application. If you're sharing camera photos with your friends, a portable format adapted to the human visual system (e.g. JPEG or JPEG 2000) is chosen. Diagrams are often transferred using PNG.
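To make "file format" concrete without any third-party library, here is a sketch that writes and reads binary PGM (Netpbm "P5"), one of the simplest portable raster formats: a small text header (magic number, width, height, maximum value) followed by raw 8-bit grayscale bytes. The helper names and the example file name are made up for the demo.

```python
def write_pgm(path, rows):
    """Write an 8-bit grayscale image as binary PGM (P5)."""
    h, w = len(rows), len(rows[0])
    with open(path, "wb") as f:
        f.write(b"P5\n%d %d\n255\n" % (w, h))  # header: magic, size, maxval
        for row in rows:
            f.write(bytes(row))                # raw pixel bytes, row by row

def read_pgm(path):
    """Read a binary PGM file back into a list of rows."""
    with open(path, "rb") as f:
        assert f.readline().strip() == b"P5"
        w, h = map(int, f.readline().split())
        int(f.readline())                      # maxval, assumed 255 here
        data = f.read()
    return [list(data[r * w:(r + 1) * w]) for r in range(h)]

img = [[0, 128, 255], [255, 128, 0]]
write_pgm("demo.pgm", img)
assert read_pgm("demo.pgm") == img             # lossless round-trip
```

Formats like JPEG differ from this precisely in that the round-trip is not lossless: they discard information the human visual system is unlikely to notice in exchange for much smaller files.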

Things we discussed today, but that I have not touched upon here:

  • The generation or destruction of information through resizing operations.
  • Color spaces
  • Psychological perception
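The first point above (information destruction by resizing) is easy to demonstrate: downscaling by averaging merges neighboring pixels, and upscaling back cannot recover what was merged away. A minimal sketch, using a made-up 4x4 checkerboard and NumPy block averaging:

```python
import numpy as np

# A 4x4 checkerboard: maximal fine-scale detail.
orig = np.array([[0, 255, 0, 255],
                 [255, 0, 255, 0],
                 [0, 255, 0, 255],
                 [255, 0, 255, 0]], dtype=float)

# Downscale 4x4 -> 2x2 by averaging each 2x2 block, then upscale back
# 2x2 -> 4x4 by pixel repetition.
small = orig.reshape(2, 2, 2, 2).mean(axis=(1, 3))
back = np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)

print(small)                      # every 2x2 block averages to 127.5
print(np.abs(back - orig).max())  # the checkerboard pattern is gone
```

Here the entire pattern vanishes into a uniform gray, the worst case; real photos lose detail more gradually, but the information is destroyed just the same.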

Thanks for all your work on this episode!
