Most of the sensors required for the heliotropic platter could be condensed into a single hyperspectral camera - if suitable components can be found.
## Sensors
There are two main options identified so far (from the cameras list):
- imec: ~600–1000 nm. Already in production and purported to be affordable (although I've yet to see a price list). Existing cameras based on the tech are available, including very small ones that would fit within the tight confines of the photonics module.
- Hamamatsu: 165–1100 nm. Easily the best option at the moment, but it poses many technical challenges (particularly temperature control).
Possible future contenders (not yet in production as far as I can tell), all targeting the mobile phone market with tiny, inexpensive, low-power, easy-to-integrate sensors:
More basic options include limiting imaging to multispectral, IR only, or very low resolution (often single-pixel):
- Melexis: FIR sensor that operates within the desired specifications (especially temperature).
- ULIS: thermal imaging cameras with much better resolution than the Melexis part.
- Espros: like an RGB sensor, but covers 400–900 nm (in the currently released product, as far as I can tell; earlier press releases stated there would be two variants: 387–903 nm and 776–1064 nm).
## Tech issues

Many of the existing hyperspectral cameras have very constrained operational temperature specifications, which would make them unsuitable for use in weather extremes.
They also draw a lot of power, on the order of hundreds of milliamps, which severely limits the number of images that can be captured per day under energy-harvesting constraints. We would likely need a supplementary external power source (a larger PV array, for example) to operate at full capability.
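To make that constraint concrete, here is a rough duty-cycle sketch. The current draw, supply voltage, capture time, and daily PV yield are all illustrative assumptions, not measured figures:

```python
# Back-of-envelope duty-cycle budget: how many captures per day could a
# hyperspectral sensor manage on harvested solar energy alone?
# All figures below are assumptions for illustration, not measured values.

SENSOR_CURRENT_A = 0.3      # assumed sensor draw: ~300 mA ("hundreds of milliamps")
SUPPLY_VOLTAGE_V = 3.3      # assumed supply rail
CAPTURE_TIME_S = 10         # assumed time powered per capture (warm-up + exposure + readout)
HARVEST_WH_PER_DAY = 1.0    # assumed daily yield from a small PV panel

capture_energy_wh = SENSOR_CURRENT_A * SUPPLY_VOLTAGE_V * CAPTURE_TIME_S / 3600
captures_per_day = HARVEST_WH_PER_DAY / capture_energy_wh

print(f"Energy per capture: {capture_energy_wh * 1000:.2f} mWh")
print(f"Captures per day:   {captures_per_day:.0f}")
# ~2.75 mWh per capture -> ~360 captures/day in theory, but the MCU, radio,
# temperature control and conversion losses will eat most of that budget,
# hence the suggestion of a larger PV array for full-rate operation.
```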
Processing the raw data would likely require either an FPGA for real-time processing or a remote server for post-processing.
Transmitting hypercube files would eat up bandwidth and power, especially over cellular connections.
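For a sense of scale, a quick sizing sketch; the resolution, band count, and sustained uplink rate are all assumed values:

```python
# Rough sizing of a raw hypercube and its transfer time over a modest
# cellular uplink. All parameters are assumptions for illustration.

WIDTH, HEIGHT = 640, 480    # assumed spatial resolution
BANDS = 100                 # assumed spectral layers per capture
BYTES_PER_SAMPLE = 2        # assumed 12-16 bit samples stored as 2 bytes
UPLINK_BPS = 100_000        # assumed sustained cellular uplink: ~100 kbit/s

cube_bytes = WIDTH * HEIGHT * BANDS * BYTES_PER_SAMPLE
transfer_s = cube_bytes * 8 / UPLINK_BPS

print(f"Raw hypercube: {cube_bytes / 1e6:.0f} MB")
print(f"Transfer time: {transfer_s / 60:.0f} min per capture")
# ~61 MB and ~82 min of continuous radio time per cube: the radio's power
# draw during that window could easily dwarf the capture itself.
```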
Ideally we'd want to merge spectra in real time, similar to Infragram, but with custom spectral combinations applied to produce an RGB image. This would greatly reduce telemetry bandwidth and provide images in a simple format.
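A minimal sketch of what that merge step might look like, assuming the camera driver hands us a reflectance hypercube plus its band wavelengths (both inputs are hypothetical stand-ins here); it collapses the cube to an NDVI-style 8-bit index image in the spirit of Infragram:

```python
import numpy as np

# Sketch of on-device spectral merging: collapse a hypercube into a
# single-band index image (an NDVI-style ratio, as Infragram produces from
# modified RGB cameras). The cube and wavelength table are assumed inputs.

def band_index(wavelengths_nm, target_nm):
    """Index of the spectral layer closest to the requested wavelength."""
    return int(np.argmin(np.abs(np.asarray(wavelengths_nm) - target_nm)))

def merge_to_ndvi(cube, wavelengths_nm, red_nm=660.0, nir_nm=850.0):
    """cube: float array of shape (height, width, bands), reflectance values."""
    red = cube[:, :, band_index(wavelengths_nm, red_nm)]
    nir = cube[:, :, band_index(wavelengths_nm, nir_nm)]
    ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)  # avoid divide-by-zero
    # Rescale [-1, 1] -> [0, 255] so it ships as an ordinary 8-bit image.
    return ((ndvi + 1.0) * 127.5).astype(np.uint8)

# Synthetic stand-in for a capture: 480x640 cube with 100 bands, 600-1000 nm.
wavelengths = np.linspace(600, 1000, 100)
cube = np.random.rand(480, 640, 100).astype(np.float32)
print(merge_to_ndvi(cube, wavelengths).shape)  # (480, 640): ~0.3 MB vs ~61 MB raw
```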
In cases where we do want to retain the hypercube, we would likely need either additional RAM for buffering or to limit the number of spectral layers per capture.
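How tight that gets depends on the buffer; a quick check under an assumed 8 MB RAM budget and the same assumed frame geometry as above:

```python
# Quick check of how many spectral layers fit in a given RAM buffer if the
# full hypercube has to be staged in memory before storage or compression.
# RAM budget and frame geometry are assumptions for illustration.

WIDTH, HEIGHT = 640, 480             # assumed spatial resolution
BYTES_PER_SAMPLE = 2                 # assumed 2 bytes per sample
RAM_BUDGET_BYTES = 8 * 1024 * 1024   # assumed 8 MB free for buffering

bytes_per_layer = WIDTH * HEIGHT * BYTES_PER_SAMPLE
max_layers = RAM_BUDGET_BYTES // bytes_per_layer

print(f"One spectral layer: {bytes_per_layer / 1024:.0f} KiB")
print(f"Layers that fit:    {max_layers}")
# 600 KiB per layer -> only ~13 layers in 8 MB, which is why either extra
# RAM or a cap on spectral layers per capture would be needed.
```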
A real-time camera mode (e.g. for drone use) would likely prove troublesome.