Homography.js is a lightweight, high-performance library for implementing homographies in JavaScript or Node.js. It is designed to be easy to use (even for developers who are not familiar with Computer Vision) and able to run in real-time applications (even on low-spec devices such as budget smartphones). It allows you to perform Affine, Projective or Piecewise Affine warpings over any `Image` or `HTMLElement` in your application by setting only a small set of reference points. Additionally, image warpings can be made persistent (independent of any CSS property), so they can be easily drawn on a canvas, mixed or downloaded. Homography.js is built in a way that frees the user from all the pain-in-the-ass details of homography operations, such as thinking about output dimensions, input coordinate ranges, dealing with unexpected shifts, pads, crops or unfilled pixels in the output image, or even knowing what a Transform Matrix is.
- Apply different warpings to any `Image` or `HTMLElement` by just setting two sets of reference points.
- Perform Affine, Projective or Piecewise Affine transforms, or just set Auto and let the library decide which transform to apply depending on the reference points you provide.
- Simplify how you deal with canvas drawings, or with subsequent Computer Vision problems, by making your `Image` transforms persistent and independent of any CSS property.
- Forget all the pain-in-the-ass details of homography operations, even if you only have a fuzzy idea of what a homography is.
- Avoid warping delays in real-time applications, thanks to a design focused on high performance.
- Support for running in the backend with Node.js.
To use as a module in the browser (Recommended):
<script type="module">
import { Homography } from "https://cdn.jsdelivr.net/gh/Eric-Canas/Homography.js@1.4/Homography.js";
</script>
If you don't need to perform Piecewise Affine Transforms, you can also use a very lightweight UMD build that exposes the `homography` global variable and loads faster:
<script src="https://cdn.jsdelivr.net/gh/Eric-Canas/Homography.js@1.4/HomographyLightweight.min.js"></script>
...
// And then in your script
const myHomography = new homography.Homography();
// Remember not to override the homography global variable by naming your own object "homography"
Via npm:
$ npm install homography
...
import { Homography } from "homography";
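In Node.js the workflow is the same, except that images are loaded from disk instead of the DOM. A minimal sketch is shown below; it assumes the package also exports the `loadImage` helper referenced in the API documentation further down (check the exports of your installed version), and that it runs in an ES module where top-level `await` is available:

```js
// Minimal Node.js sketch (assumes loadImage is exported by the package).
import { Homography, loadImage } from "homography";

const myHomography = new Homography("projective");
myHomography.setReferencePoints([[0, 0], [0, 1], [1, 0], [1, 1]],
                                [[0.1, 0.1], [0, 1], [0.9, 0.2], [1, 1]]);
// Images come from disk through await loadImage(path) instead of the DOM.
const image = await loadImage("./myImage.png");
const resultImageData = myHomography.warp(image); // ImageData, as in the browser
```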
Perform a basic Piecewise Affine Transform from four source points.
// Select the image you want to warp
const image = document.getElementById("myImage");
// Define the reference points. In this case using normalized coordinates (from 0.0 to 1.0).
const srcPoints = [[0, 0], [0, 1], [1, 0], [1, 1]];
const dstPoints = [[1/5, 1/5], [0, 1/2], [1, 0], [6/8, 6/8]];
// Create a Homography object for a "piecewiseaffine" transform (it could be reused later)
const myHomography = new Homography("piecewiseaffine");
// Set the reference points
myHomography.setReferencePoints(srcPoints, dstPoints);
// Warp your image
const resultImage = myHomography.warp(image);
...
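Since `warp()` returns an `ImageData` buffer, the result of the example above can be drawn straight onto a canvas. A minimal sketch, where `myCanvas` is a hypothetical `<canvas>` element large enough to hold the output:

```js
// Draw the persistent warping result on a canvas.
const canvas = document.getElementById("myCanvas"); // hypothetical canvas element
const ctx = canvas.getContext("2d");
ctx.putImageData(resultImage, 0, 0); // resultImage is the ImageData returned by warp()
```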
Perform a complex Piecewise Affine Transform from a large set of `pointsInY * pointsInX` reference points.
...
// Define a set of reference points that follows a sinusoidal form.
// In this case in image coordinates (x: from 0 to w, y: from 0 to h) for convenience.
let srcPoints = [], dstPoints = [];
for (let y = 0; y <= h; y += h/pointsInY){
    for (let x = 0; x <= w; x += w/pointsInX){
        srcPoints.push([x, y]); // Add (x, y) as a source point
        dstPoints.push([x, amplitude + y + Math.sin((x*n)/Math.PI)*amplitude]); // Apply the sine function on y
    }
}
// Set the reference points (reuse the previous Homography object)
myHomography.setReferencePoints(srcPoints, dstPoints);
// Warp your image. As no image is given, it will reuse the one from the previous example.
const resultImage = myHomography.warp();
...
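The elided setup above relies on a few variables. One plausible set of definitions (hypothetical, illustrative values that are not part of the original example):

```js
// Hypothetical values for the variables the snippet above relies on.
const w = 400, h = 400;               // input image dimensions
const pointsInX = 20, pointsInY = 20; // density of the reference-point grid
const amplitude = 20;                 // height of the sinusoidal wave, in pixels
const n = 10;                         // frequency factor of the sine wave
```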
Perform a simple Affine Transform and apply it on an `HTMLElement`.
...
// Set the reference points from which to estimate the transform
const srcPoints = [[0, 0], [0, 1], [1, 0]];
const dstPoints = [[0, 0], [1/2, 1], [1, 1/8]];
// Don't specify the type of transform to apply, so the library decides it by itself.
const myHomography = new Homography(); // Default transform value is "auto".
// Apply the transform over an HTMLElement from the DOM.
myHomography.transformHTMLElement(document.getElementById("inputText"), srcPoints, dstPoints);
...
Calculate 250 different Projective Transforms, apply them over the same input `Image` and draw them on a canvas.
const ctx = document.getElementById("exampleCanvas").getContext("2d");
// Build the initial reference points (in this case, in image coordinates just for convenience)
const srcPoints = [[0, 0], [0, h], [w, 0], [w, h]];
let dstPoints = [[0, 0], [0, h], [w, 0], [w, h]];
// Create the homography object (it is not necessary to set transform as "projective" as it will be automatically detected)
const myHomography = new Homography();
// Set the static parameters of all the transforms sequence (it will improve the performance of subsequent warpings)
myHomography.setSourcePoints(srcPoints);
myHomography.setImage(inputImg);
// Set the parameters for building the future dstPoints at each frame (5 movements of 50 frames each)
const framesPerMovement = 50;
const movements = [[[0, h/5], [0, -h/5], [0, 0], [0, 0]],
[[w, 0], [w, 0], [-w, 0], [-w, 0]],
[[0, -h/5], [0, h/5], [0, h/5], [0, -h/5]],
[[-w, 0], [-w, 0], [w, 0], [w, 0]],
[[0, 0], [0, 0], [0, -h/5], [0, h/5]]];
for (let movement = 0; movement < movements.length; movement++){
    for (let step = 0; step < framesPerMovement; step++){
        // Create the new dstPoints (in Computer Vision applications these points will usually come from webcam detections)
        for (let point = 0; point < srcPoints.length; point++){
            dstPoints[point][0] += movements[movement][point][0]/framesPerMovement;
            dstPoints[point][1] += movements[movement][point][1]/framesPerMovement;
        }
        // Update the destiny points and calculate the new warping
        myHomography.setDestinyPoints(dstPoints);
        const img = myHomography.warp(); // A warp with no parameters reuses the previously set image
        // Clear the canvas and draw the new image (using putImageData instead of drawImage for performance reasons)
        ctx.clearRect(0, 0, w, h);
        ctx.putImageData(img, Math.min(dstPoints[0][0], dstPoints[2][0]), Math.min(dstPoints[0][1], dstPoints[2][1]));
        await new Promise(resolve => setTimeout(resolve, 0.1)); // Just a trick to force the canvas to refresh
    }
}
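Note that the loop above uses `await`, so it must live inside an `async` function. A minimal wrapper sketch:

```js
// The animation loop uses await, so run it in an async context, e.g.:
(async () => {
    // ... the nested movement/step loops from the example above ...
})();
```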
*Just pay attention to the use of `setSourcePoints(srcPoints)`, `setImage(inputImg)`, `setDestinyPoints(dstPoints)` and `warp()`. The rest of the code just generates a coherent sequence of destiny points and draws the results.*
new Homography([transform = "auto", width, height])

Main class for performing geometrical transformations over images.

Homography is in charge of applying Affine, Projective or Piecewise Affine transformations over images, in a way that is as transparent and simple for the user as possible. It is especially intended for real-time applications. For this reason, this class keeps an internal state to avoid redundant operations when reused, so the best performance is obtained when multiple transformations are applied over the same image.
- [transform = "auto"] : String representing the transformation to apply. One of `"auto"`, `"affine"`, `"piecewiseaffine"` or `"projective"`:
    - `"auto"` : The transformation will be automatically selected depending on the inputs given. Just use `"auto"` if you don't know which kind of transform you need. This is the default value.
    - `"affine"` : A geometrical transformation that ensures that all parallel lines of the input image remain parallel in the output image. It needs exactly three source points (and three destiny points). An Affine transformation can only be composed of translations, rotations, scales, shears and reflections.
    - `"piecewiseaffine"` : A composition of several Affine transforms that allows more complex constructions. This transform generates a mesh of triangles from the source points and finds an independent Affine transformation for each of them. This way, it allows more complex transformations such as, for example, sinusoidal forms. It can take any number of reference points (greater than three). When `"piecewiseaffine"` mode is selected, only the parts of the input image that fall within a triangle will appear in the output image. If you want to ensure that the whole image appears in the output, set reference points on each corner of the image.
    - `"projective"` : A transformation that shows how an image changes when the observer's point of view is modified. It takes exactly four source points (and four destiny points). This is the transformation to use when looking for perspective modifications.
- [width] : Optional width of the input image. If given, the input image will be resized to that width. Lower widths imply faster transformations at the cost of a lower-resolution output image, while larger widths produce higher-resolution images at the cost of processing time. If not defined (or `null`), the original image width is used.
- [height] : Optional height of the input image. Same considerations as width.
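For instance, a short sketch of the constructor under these parameters (the concrete values are illustrative):

```js
// Downscale the input to 200x200 before warping: faster, at lower output resolution.
const fastHomography = new Homography("piecewiseaffine", 200, 200);
// Equivalent to the defaults: "auto" transform and original image dimensions.
const autoHomography = new Homography();
```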
Homography.setSourcePoints(points[, image, width, height, pointsAreNormalized])

Sets the source reference points (`[[x1, y1], [x2, y2], ..., [xn, yn]]`) of the transform and, optionally, the image that will be transformed.

Source reference points are a set of 2-D coordinates in the input image that will map exactly to the corresponding destiny point coordinates (set through `setDestinyPoints()`) in the output image. The rest of the image coordinates will be interpolated through the geometrical transform estimated from these correspondences.
- points : Source points of the transform, given as an `ArrayBuffer` or `Array` in the form `[x1, y1, x2, y2, ..., xn, yn]` or `[[x1, y1], [x2, y2], ..., [xn, yn]]`. For large sets of source points, performance improves when using `Float32Array`. These source points can be declared in image coordinates (x : [0, width], y : [0, height]) or in normalized coordinates (x : [0.0, 1.0], y : [0.0, 1.0]). In order to allow transforms with upscalings (from 0x to 8x), the normalized scale is automatically detected when the points `Array` does not contain any value larger than 8.0; coordinates with larger values are considered to be in image scale (x : [0, width], y : [0, height]). This automatic behaviour can be overridden through the pointsAreNormalized parameter. Please note that, if the width and height parameters are set and points are given in image coordinates, these coordinates should be declared in terms of the given width and height instead of the original image width/height.
- [image] : Optional source image that will be warped later. Given as an `HTMLImageElement` or `ImageData` in the browser version, or as the output of `await loadImage('path-to-image')` in the Node.js version. Setting this element here helps to advance some calculations, improving later warping performance, especially when multiple transformations (same source points but different destiny points) are planned for the same image. If width and/or height are given, the image will be internally rescaled before any transformation if it is given as an `HTMLImageElement` (if the image is given as `ImageData` these parameters will be ignored).
- [width] : Optional width to which to rescale the input image. Equivalent to the width parameter of the constructor.
- [height] : Optional height to which to rescale the input image. Equivalent to the height parameter of the constructor.
- [pointsAreNormalized] : Optional `boolean` determining whether the points parameter is in normalized or in image coordinates. If not given, it will be automatically inferred from the points array.
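A short sketch of both coordinate styles, assuming `image` is an `HTMLImageElement` already selected from the DOM:

```js
// Normalized coordinates: no value exceeds 8.0, so they are auto-detected as normalized.
myHomography.setSourcePoints([[0, 0], [0, 1], [1, 0], [1, 1]]);

// The same corners as a flat Float32Array in image coordinates (faster for large
// point sets), passing the image at the same time to advance some calculations.
const flatSrcPoints = new Float32Array([0, 0, 0, 400, 400, 0, 400, 400]);
myHomography.setSourcePoints(flatSrcPoints, image);
```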
Homography.setDestinyPoints(points[, pointsAreNormalized])

Sets the destiny reference points (`[[x1, y1], [x2, y2], ..., [xn, yn]]`) of the transform.

Destiny reference points are a set of 2-D coordinates determined for the output image. They must match the source points, as each source point of the input image will be transformed to go exactly to its corresponding destiny point in the output image. The rest of the image coordinates will be interpolated through the geometrical transform estimated from these correspondences.
- points : Destiny points of the transform, given as an `ArrayBuffer` or `Array` in the form `[x1, y1, x2, y2, ..., xn, yn]` or `[[x1, y1], [x2, y2], ..., [xn, yn]]`. The number of destiny points must match the number of source points previously set.
- [pointsAreNormalized] : Optional `boolean` determining whether the points parameter is in normalized or in image coordinates. If not given, it will be automatically inferred from the points `Array`.
Homography.setReferencePoints(srcPoints, dstPoints[, image, width, height, srcPointsAreNormalized, dstPointsAreNormalized])

This function just makes a call to `Homography.setSourcePoints(srcPoints[, image, width, height, srcPointsAreNormalized])` and then `Homography.setDestinyPoints(dstPoints[, dstPointsAreNormalized])`. It can be used for convenience when setting the reference points for the first time, but it should be replaced by `Homography.setSourcePoints()` or `Homography.setDestinyPoints()` when performing multiple transforms where one of srcPoints or dstPoints remains unchanged, as reusing it would decrease the overall performance.
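A sketch of the trade-off just described (the point arrays, `newDstPoints` and `image` are illustrative names):

```js
// Convenient one-shot setup when transforming once:
myHomography.setReferencePoints(srcPoints, dstPoints, image);

// For repeated warpings of the same image, set the static parts once...
myHomography.setSourcePoints(srcPoints);
myHomography.setImage(image);
// ...and only update the destiny points on each new frame:
myHomography.setDestinyPoints(newDstPoints);
const frame = myHomography.warp();
```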
Homography.setImage(image[, width, height])

Sets the image that will be transformed when warping.

Setting the image before the destiny points (call to `setDestinyPoints()`) and the warping (call to `warp()`) helps to advance some calculations, as well as to avoid future redundant operations when successive `setDestinyPoints()->warp()` calls occur later.
- image : Source image that will be warped later. Given as an `HTMLImageElement` or `ImageData` in the browser version (if given as `ImageData`, width and height will not be used). In the Node.js version it should be the output of `await loadImage('path-to-image')`.
- [width] : Optional width to which to rescale the given image. Equivalent to the width parameter of the constructor or `setSourcePoints()`.
- [height] : Optional height to which to rescale the given image. Equivalent to the height parameter of the constructor or `setSourcePoints()`.
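A minimal browser-side sketch (the 300x300 rescaling is illustrative):

```js
// Set the image once, internally rescaled to 300x300 before any warping.
myHomography.setImage(document.getElementById("myImage"), 300, 300);
// Note: if the image were given as ImageData, width/height would be ignored.
```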
Homography.warp([image, asHTMLPromise = false])

Applies the set transform to an image.

Applies the homography to the given image (or to the previously set one) and returns the result as `ImageData` or as a `Promise` of an `HTMLImageElement`. The output image will have enough width and height to enclose the whole input image without any crop or pad once transformed; any void section of the output image will be transparent. If an image is given, it will be internally set, so any future call to `warp()` receiving no image parameter will apply the transformation over this image again. Remember that the whole input image is transformed for `"affine"` and `"projective"` transforms, while `"piecewiseaffine"` transforms only warp the parts of the image that can be connected through the set source points. This happens because `"piecewiseaffine"` transforms define a different Affine transform for each section of the input image, so transforms cannot be calculated for undefined sections. If you want the whole output image in a Piecewise Affine transform, set a source reference point on each corner of the input image (`[[x1, y1], [x2, y2], ..., [0, 0], [0, height], [width, 0], [width, height]]`).
- [image] : Image that will be transformed, given as an `HTMLImageElement`. If the image was already set through `setImage(img)` or `setSourcePoints(points, img)`, this parameter does not need to be given again. If an image is given, it will be internally set, so any future call to `warp()` will reuse it. When possible, this reuse of the image will improve the overall performance.
- [asHTMLPromise = false] : If `true`, returns a `Promise` of an `HTMLImageElement` containing the output image, instead of an `ImageData` buffer. It can be convenient for some applications, but try to avoid it in performance-critical applications, as it decreases the overall performance. If you need to draw this image on a `canvas`, consider doing it directly through `context.putImageData(imgData, x, y)`.
This function returns the transformed image, without any pad or crop, in `ImageData` format, or as a `Promise` of an `HTMLImageElement` if asHTMLPromise was set to `true`.
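A sketch of both return modes (`ctx` is an assumed 2D canvas context, and the `await` form requires an async context):

```js
// Default: an ImageData buffer, the fastest path for canvas drawing.
const imgData = myHomography.warp(image);
ctx.putImageData(imgData, 0, 0);

// With asHTMLPromise = true: a Promise of an HTMLImageElement (convenient, but slower).
const imgElement = await myHomography.warp(image, true);
document.body.appendChild(imgElement);
```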
Homography.transformHTMLElement(element[, srcPoints, dstPoints])

Applies the current Affine or Projective transform over an `HTMLElement`. Applying the transform to any `HTMLElement` is extremely fast.

If srcPoints and dstPoints are given, a new transform will be estimated from them. Take into account that this function works by modifying the CSS `transform` property, so it will not work for the `"piecewiseaffine"` option, as CSS does not support Piecewise Affine transforms.
- element : The `HTMLElement` to which the transform is applied. It can also be an `HTMLImageElement`; in this case, the difference with `warp()` is that the transformation will not be persistent, as it is only applied over its current view (as a style) and not to its underlying image data. Usually this is enough if the image does not need to be drawn on a `canvas` or to undergo subsequent transformations.
- [srcPoints] : Source points of the transform, given as an `ArrayBuffer` or `Array` in the form `[x1, y1, x2, y2, ..., xn, yn]` or `[[x1, y1], [x2, y2], ..., [xn, yn]]`. If not given, they should have been set before through `setSourcePoints()`.
- [dstPoints] : Destiny points of the transform, also given as an `ArrayBuffer` or `Array` in the form `[x1, y1, x2, y2, ..., xn, yn]` or `[[x1, y1], [x2, y2], ..., [xn, yn]]`. If not given, they should have been set before through `setDestinyPoints()`.
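A short sketch ("myDiv" is a hypothetical element id):

```js
// Reuse the transform estimated from previously set reference points:
myHomography.transformHTMLElement(document.getElementById("myDiv"));
// Or estimate a new transform inline from two point sets:
myHomography.transformHTMLElement(document.getElementById("myDiv"),
                                  [[0, 0], [0, 1], [1, 0], [1, 1]],
                                  [[0, 0], [0, 1], [0.8, 0.2], [1, 1]]);
```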
Homography.HTMLImageElementFromImageData(imgData[, asPromise = true])

Transforms an `ImageData` object into an `HTMLImageElement`. Remember that `ImageData` is the output format of `warp()`.
- imgData : `ImageData` object to convert.
- [asPromise = true] : If `true`, returns a `Promise` of an `HTMLImageElement`; if `false`, returns an `HTMLImageElement` directly. In the latter case, you will have to wait for the `onload` event to trigger before using it.
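A sketch of both modes, assuming the method is called on a Homography instance like the other methods documented here, and that `imgData` comes from a previous `warp()` call:

```js
// As a Promise (default):
const imgElement = await myHomography.HTMLImageElementFromImageData(imgData);
// Directly: wait for the onload event before using the element.
const directElement = myHomography.HTMLImageElementFromImageData(imgData, false);
directElement.onload = () => document.body.appendChild(directElement);
```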
- The Image Data Warping section indicates the time for calculating the transformation matrix between a pair of source and destiny reference points and applying this transform over an image of size NxN. It generates a persistent `ImageData` object that can be directly drawn on any canvas at a negligible computational cost, through `context.putImageData(imgData, x, y)`.
- 400x400 ↦ NxN indicates the size of the input image and the size of the expected output image. The CSS Transform Calculation section does not include this information, since these sizes do not affect its performance.
- The First Frame column indicates the time for calculating a single image warping, while the Rest of Frames column indicates the time for calculating each one of multiple different warpings on the same input image. Frame Rate (1/Rest of Frames) indicates the number of frames that can be calculated per second.
- You can test the concrete performance of your target device by executing benchmark.html. Take into account that this execution can take some minutes, since it executes 2,000 frames for each image warping experiment and 200,000 for each CSS experiment.
Performance tests on an average desktop PC (Intel Core i5-7500 Quad-Core, Chrome 92.0.4515.107, Windows 10).

Image Data Warping (400x400 input image):

| Transform | First Frame (↦ 200x200) | Rest of Frames | Frame Rate | First Frame (↦ 400x400) | Rest of Frames | Frame Rate | First Frame (↦ 800x800) | Rest of Frames | Frame Rate |
|---|---|---|---|---|---|---|---|---|---|
| Affine | 5 ms | 0.7 ms | 1,439 fps | 14 ms | 2.7 ms | 366.7 fps | 13 ms | 10.8 ms | 92.6 fps |
| Projective | 6 ms | 1.9 ms | 527.4 fps | 21 ms | 7.2 ms | 139.7 fps | 30 ms | 27.5 ms | 36.3 fps |
| Piecewise Aff. (2 Triangles) | 7 ms | 1.1 ms | 892.9 fps | 19 ms | 4.4 ms | 227.9 fps | 40 ms | 16.5 ms | 60.6 fps |
| Piecewise Aff. (360 Tri.) | 26 ms | 2.1 ms | 487 fps | 21 ms | 4.6 ms | 216.1 fps | 41 ms | 22.4 ms | 44.6 fps |
| Piecewise Aff. (~23,000 Tri.) | 257 ms | 24.3 ms | 41.2 fps | 228 ms | 11.5 ms | 87.1 fps | 289 ms | 62 ms | 16.1 fps |

CSS Transform Calculation:

| Transform | First Frame | Rest of Frames | Frame Rate |
|---|---|---|---|
| Affine | 4 ms | 0.00014 ms | 1,696,136.44 fps |
| Projective | 4 ms | 0.016 ms | 61,650.38 fps |
Performance tests on a budget smartphone (a bit battered): Xiaomi Redmi Note 5, Chrome 92.0.4515.115, Android 8.1.0.

Image Data Warping (400x400 input image):

| Transform | First Frame (↦ 200x200) | Rest of Frames | Frame Rate | First Frame (↦ 400x400) | Rest of Frames | Frame Rate | First Frame (↦ 800x800) | Rest of Frames | Frame Rate |
|---|---|---|---|---|---|---|---|---|---|
| Affine | 25 ms | 4.5 ms | 221.5 fps | 84 ms | 16.9 ms | 59.11 fps | 127 ms | 64.7 ms | 15.46 fps |
| Projective | 38 ms | 15.5 ms | 64.4 fps | 150 ms | 56.8 ms | 17.6 fps | 232 ms | 216 ms | 4.62 fps |
| Piecewise Aff. (2 Triangles) | 35 ms | 8.8 ms | 113.9 fps | 316 ms | 31.7 ms | 31.6 fps | 138 ms | 118 ms | 8.5 fps |
| Piecewise Aff. (360 Tri.) | 151 ms | 14.3 ms | 70 fps | 138 ms | 30.2 ms | 33 fps | 274 ms | 149 ms | 6.7 fps |
| Piecewise Aff. (~23,000 Tri.) | 1.16 s | 162 ms | 6.15 fps | 1.16 s | 75 ms | 13.3 fps | 1.47 s | 435 ms | 2.3 fps |

CSS Transform Calculation:

| Transform | First Frame | Rest of Frames | Frame Rate |
|---|---|---|---|
| Affine | 21 ms | 0.0104 ms | 96,200.10 fps |
| Projective | 22 ms | 0.025 ms | 40,536.71 fps |