
Merge feature/virtual_background_cross_browser with develop. #1991

Merged · 12 commits into develop from feature/virtual_background_cross_browser · Mar 21, 2023
37 changes: 37 additions & 0 deletions CHANGELOG.md
@@ -2,6 +2,43 @@ The Twilio Programmable Video SDKs use [Semantic Versioning](http://www.semver.org)

**Version 1.x reached End of Life on September 8th, 2021.** See the changelog entry [here](https://www.twilio.com/changelog/end-of-life-complete-for-unsupported-versions-of-the-programmable-video-sdk). Support for the 1.x version ended on December 4th, 2020.

2.27.0 (March 21, 2023)
=======================

Changes
-------

`VideoTrack.addProcessor` now works on browsers that support `OffscreenCanvas` as well as `HTMLCanvasElement`. When used with
[@twilio/video-processors v2.0.0](https://github.com/twilio/twilio-video-processors.js/blob/2.0.0/CHANGELOG.md), the Virtual
Background feature will work on browsers that support [WebGL2](https://developer.mozilla.org/en-US/docs/Web/API/WebGL2RenderingContext).
See [VideoTrack.addProcessor](https://sdk.twilio.com/js/video/releases/2.27.0/docs/VideoTrack.html#addProcessor__anchor) and
[@twilio/video-processors v2.0.0](https://github.com/twilio/twilio-video-processors.js/blob/2.0.0/CHANGELOG.md) for details.

### Example

```ts
import { createLocalVideoTrack } from 'twilio-video';
import { Pipeline, VirtualBackgroundProcessor } from '@twilio/video-processors';

const virtualBackgroundProcessor = new VirtualBackgroundProcessor({
  pipeline: Pipeline.WebGL2,
  // ...otherOptions
});

await virtualBackgroundProcessor.loadModel();

const videoTrack = await createLocalVideoTrack({
  width: 640,
  height: 480,
  frameRate: 24
});

videoTrack.addProcessor(virtualBackgroundProcessor, {
  inputFrameBufferType: 'video',
  outputFrameBufferContextType: 'webgl2',
});
```
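If you also need a fallback for browsers without WebGL2 support, the pipeline and the matching `addProcessor` options can be chosen at runtime. A minimal sketch, assuming a `Pipeline.Canvas2D` fallback and a `supportsWebGL2` helper (both illustrative, not part of this release):

```ts
import { createLocalVideoTrack } from 'twilio-video';
import { Pipeline, VirtualBackgroundProcessor } from '@twilio/video-processors';

// Hypothetical helper: probe for WebGL2 before choosing the pipeline.
function supportsWebGL2(): boolean {
  return document.createElement('canvas').getContext('webgl2') !== null;
}

const useWebGL2 = supportsWebGL2();

const virtualBackgroundProcessor = new VirtualBackgroundProcessor({
  pipeline: useWebGL2 ? Pipeline.WebGL2 : Pipeline.Canvas2D,
  // ...otherOptions
});

await virtualBackgroundProcessor.loadModel();

const videoTrack = await createLocalVideoTrack({ width: 640, height: 480, frameRate: 24 });

// The addProcessor options must match the pipeline's rendering context.
videoTrack.addProcessor(virtualBackgroundProcessor, useWebGL2
  ? { inputFrameBufferType: 'video', outputFrameBufferContextType: 'webgl2' }
  : { inputFrameBufferType: 'canvas', outputFrameBufferContextType: '2d' });
```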

2.26.2 (February 21, 2023)
==========================

2 changes: 1 addition & 1 deletion README.md
@@ -74,7 +74,7 @@ Releases of twilio-video.js are hosted on a CDN, and you can include these
directly in your web app using a <script> tag.

```html
<script src="//sdk.twilio.com/js/video/releases/2.26.2/twilio-video.min.js"></script>
<script src="//sdk.twilio.com/js/video/releases/2.27.0/twilio-video.min.js"></script>
```

Using this method, twilio-video.js will set a browser global:
6 changes: 4 additions & 2 deletions lib/media/track/localmediatrack.js
@@ -308,8 +308,10 @@ function restartWhenInadvertentlyStopped(localMediaTrack) {
  }).catch(error => {
    log.warn('Failed to detect silence:', error);
  }).finally(() => {
-   // Pause the dummy element again.
-   el.pause();
+   // Pause the dummy element again, if there is no processed track.
+   if (!localMediaTrack.processedTrack) {
+     el.pause();
+   }
  });
}

7 changes: 4 additions & 3 deletions lib/media/track/localvideotrack.js
@@ -100,8 +100,8 @@ class LocalVideoTrack extends LocalMediaVideoTrack {

/**
* Add a {@link VideoProcessor} to allow for custom processing of video frames belonging to a VideoTrack.
- * Only Chrome supports this as of now. Calling this API from a non-supported browser will result in a log warning.
 * @param {VideoProcessor} processor - The {@link VideoProcessor} to use.
+ * @param {AddProcessorOptions} [options] - {@link AddProcessorOptions} to provide.
* @returns {this}
* @example
* class GrayScaleProcessor {
Expand Down Expand Up @@ -315,9 +315,10 @@ function workaroundSilentLocalVideo(localVideoTrack, doc) {
  }).catch(error => {
    log.warn('Failed to detect silence and restart:', error);
  }).finally(() => {
-   // If silent frames were not detected, then pause the dummy element again.
+   // If silent frames were not detected, then pause the dummy element again,
+   // if there is no processed track.
    el = localVideoTrack._dummyEl;
-   if (el && !el.paused) {
+   if (el && !el.paused && !localVideoTrack.processedTrack) {
      el.pause();
    }

2 changes: 1 addition & 1 deletion lib/media/track/remotevideotrack.js
@@ -261,8 +261,8 @@ class RemoteVideoTrack extends RemoteMediaVideoTrack {
* Add a {@link VideoProcessor} to allow for custom processing of video frames belonging to a VideoTrack.
* When a Participant un-publishes and re-publishes a VideoTrack, a new RemoteVideoTrack is created and
* any VideoProcessors attached to the previous RemoteVideoTrack would have to be re-added again.
- * Only Chrome supports this as of now. Calling this API from a non-supported browser will result in a log warning.
 * @param {VideoProcessor} processor - The {@link VideoProcessor} to use.
+ * @param {AddProcessorOptions} [options] - {@link AddProcessorOptions} to provide.
* @returns {this}
* @example
* class GrayScaleProcessor {
14 changes: 11 additions & 3 deletions lib/media/track/videoprocessoreventobserver.js
@@ -76,11 +76,19 @@ class VideoProcessorEventObserver extends EventEmitter {
      return {};
    }

-   const { processor, captureHeight, captureWidth, inputFrameRate, isRemoteVideoTrack } = this._processorInfo;
-   const data = { captureHeight, captureWidth, inputFrameRate, isRemoteVideoTrack };
+   const {
+     processor,
+     captureHeight,
+     captureWidth,
+     inputFrameRate,
+     isRemoteVideoTrack,
+     inputFrameBufferType,
+     outputFrameBufferContextType
+   } = this._processorInfo;
+   const data = { captureHeight, captureWidth, inputFrameRate, isRemoteVideoTrack, inputFrameBufferType, outputFrameBufferContextType };
    data.name = processor._name || 'VideoProcessor';

-   ['assetsPath', 'blurFilterRadius', 'fitType', 'isSimdEnabled', 'maskBlurRadius', 'version'].forEach(prop => {
+   ['assetsPath', 'blurFilterRadius', 'debounce', 'fitType', 'isSimdEnabled', 'maskBlurRadius', 'pipeline', 'version'].forEach(prop => {
      const val = processor[`_${prop}`];
      if (typeof val !== 'undefined') {
        data[prop] = val;
114 changes: 92 additions & 22 deletions lib/media/track/videotrack.js
@@ -54,6 +54,10 @@ class VideoTrack extends MediaTrack {
      value: null,
      writable: true,
    },
+   _processorOptions: {
+     value: {},
+     writable: true,
+   },
    _unmuteHandler: {
      value: null,
      writable: true
@@ -152,27 +156,31 @@
      const { width = 0, height = 0 } = this.mediaStreamTrack.getSettings();
      // Setting the canvas' dimension triggers a redraw.
      // Only set it if it has changed.
-     if (this._inputFrame.width !== width) {
-       this._inputFrame.width = width;
-       this._inputFrame.height = height;
-
-       if (this._outputFrame) {
-         this._outputFrame.width = width;
-         this._outputFrame.height = height;
-       }
+     if (this._outputFrame && this._outputFrame.width !== width) {
+       this._outputFrame.width = width;
+       this._outputFrame.height = height;
      }
-     this._inputFrame.getContext('2d').drawImage(this._dummyEl, 0, 0, width, height);
+     if (this._inputFrame) {
+       if (this._inputFrame.width !== width) {
+         this._inputFrame.width = width;
+         this._inputFrame.height = height;
+       }
+       this._inputFrame.getContext('2d').drawImage(this._dummyEl, 0, 0, width, height);
+     }

      let result = null;
      try {
-       result = this.processor.processFrame(this._inputFrame, this._outputFrame);
+       const input = this._processorOptions.inputFrameBufferType === 'video' ? this._dummyEl : this._inputFrame;
+       result = this.processor.processFrame(input, this._outputFrame);
      } catch (ex) {
        this._log.debug('Exception detected after calling processFrame.', ex);
      }
      ((result instanceof Promise) ? result : Promise.resolve(result))
        .then(() => {
          if (this._outputFrame) {
-           this.processedTrack.requestFrame();
+           if (typeof this.processedTrack.requestFrame === 'function') {
+             this.processedTrack.requestFrame();
+           }
            this._processorEventObserver.emit('stats');
          }
        })
@@ -216,8 +224,9 @@
  _restartProcessor() {
    const processor = this.processor;
    if (processor) {
+     const processorOptions = Object.assign({}, this._processorOptions);
      this.removeProcessor(processor);
-     this.addProcessor(processor);
+     this.addProcessor(processor, processorOptions);
    }
  }

@@ -235,8 +244,8 @@

/**
* Add a {@link VideoProcessor} to allow for custom processing of video frames belonging to a VideoTrack.
- * Only Chrome supports this as of now. Calling this API from a non-supported browser will result in a log warning.
 * @param {VideoProcessor} processor - The {@link VideoProcessor} to use.
+ * @param {AddProcessorOptions} [options] - {@link AddProcessorOptions} to provide.
* @returns {this}
* @example
* class GrayScaleProcessor {
@@ -254,10 +263,7 @@
* videoTrack.addProcessor(new GrayScaleProcessor(100));
* });
*/
-  addProcessor(processor) {
-    if (typeof OffscreenCanvas !== 'function') {
-      return this._log.warn('Adding a VideoProcessor is not supported in this browser.');
-    }
+  addProcessor(processor, options) {
    if (!processor || typeof processor.processFrame !== 'function') {
      throw new Error('Received an invalid VideoProcessor from addProcessor.');
    }
@@ -284,13 +290,51 @@
    this.mediaStreamTrack.addEventListener('unmute', this._unmuteHandler);
    }

+   this._processorOptions = options || {};
+   let { inputFrameBufferType, outputFrameBufferContextType } = this._processorOptions;
+   if (typeof OffscreenCanvas === 'undefined' && inputFrameBufferType === 'offscreencanvas') {
+     throw new Error('OffscreenCanvas is not supported by this browser.');
+   }
+   if (inputFrameBufferType && inputFrameBufferType !== 'video' && inputFrameBufferType !== 'canvas' && inputFrameBufferType !== 'offscreencanvas') {
+     throw new Error(`Invalid inputFrameBufferType of ${inputFrameBufferType}`);
+   }
+   if (!inputFrameBufferType) {
+     inputFrameBufferType = typeof OffscreenCanvas === 'undefined' ? 'canvas' : 'offscreencanvas';
+   }

    const { width = 0, height = 0, frameRate = DEFAULT_FRAME_RATE } = this.mediaStreamTrack.getSettings();
-   this._inputFrame = new OffscreenCanvas(width, height);
+   if (inputFrameBufferType === 'offscreencanvas') {
+     this._inputFrame = new OffscreenCanvas(width, height);
+   }
+   if (inputFrameBufferType === 'canvas') {
+     this._inputFrame = document.createElement('canvas');
+   }
+   if (this._inputFrame) {
+     this._inputFrame.width = width;
+     this._inputFrame.height = height;
+   }

    this._outputFrame = document.createElement('canvas');
    this._outputFrame.width = width;
    this._outputFrame.height = height;

-   this.processedTrack = this._outputFrame.captureStream(0).getTracks()[0];
+   // NOTE(csantos): Initialize the rendering context for future renders. This also ensures
+   // that the correct type is used, and on Firefox, it throws an exception if you try to capture
+   // frames prior to calling getContext https://bugzilla.mozilla.org/show_bug.cgi?id=1572422
+   outputFrameBufferContextType = outputFrameBufferContextType || '2d';
+   const ctx = this._outputFrame.getContext(outputFrameBufferContextType);
+   if (!ctx) {
+     throw new Error(`Cannot get outputFrameBufferContextType: ${outputFrameBufferContextType}.`);
+   }

+   // NOTE(csantos): Zero FPS means we can control when to render the next frame by calling requestFrame.
+   // Some browsers, such as Firefox, don't support requestFrame, so there we use the default, which is an undefined value.
+   // This means the browser will use the highest FPS available.
+   const targetFps = typeof CanvasCaptureMediaStreamTrack !== 'undefined' && CanvasCaptureMediaStreamTrack.prototype &&
+     // eslint-disable-next-line
+     typeof CanvasCaptureMediaStreamTrack.prototype.requestFrame === 'function' ? 0 : undefined;

+   this.processedTrack = this._outputFrame.captureStream(targetFps).getTracks()[0];
    this.processedTrack.enabled = this.mediaStreamTrack.enabled;
    this.processor = processor;

@@ -299,7 +343,9 @@
      captureHeight: height,
      captureWidth: width,
      inputFrameRate: frameRate,
-     isRemoteVideoTrack: this.toString().includes('RemoteVideoTrack')
+     isRemoteVideoTrack: this.toString().includes('RemoteVideoTrack'),
+     inputFrameBufferType,
+     outputFrameBufferContextType
    });
    this._updateElementsMediaStreamTrack();
    this._captureFrames();
@@ -440,13 +486,12 @@
    this._log.debug('Removing VideoProcessor from the VideoTrack', processor);
    clearTimeout(this._captureTimeoutId);
    this.mediaStreamTrack.removeEventListener('unmute', this._unmuteHandler);
+   this._processorOptions = {};
    this._unmuteHandler = null;
    this._isCapturing = false;

    this.processor = null;
    this.processedTrack = null;
-   this._inputFrame.getContext('2d').clearRect(0, 0, this._inputFrame.width, this._inputFrame.height);
-   this._outputFrame.getContext('2d').clearRect(0, 0, this._outputFrame.width, this._outputFrame.height);
    this._inputFrame = null;
    this._outputFrame = null;

@@ -484,7 +529,7 @@ function dimensionsChanged(track, elem) {
 * Any exception raised (either synchronously or asynchronously) in `processFrame` will result in the frame being dropped.
 * This callback has the following signature:<br/><br/>
 * <code>processFrame(</code><br/>
- * &nbsp;&nbsp;<code>inputFrameBuffer: OffscreenCanvas,</code><br/>
+ * &nbsp;&nbsp;<code>inputFrameBuffer: OffscreenCanvas | HTMLCanvasElement | HTMLVideoElement,</code><br/>
 * &nbsp;&nbsp;<code>outputFrameBuffer: HTMLCanvasElement</code><br/>
 * <code>): Promise&lt;void&gt; | void;</code>
 *
@@ -501,6 +546,31 @@ function dimensionsChanged(track, elem) {
* }
*/
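To make the callback contract above concrete, here is a minimal processor sketch (illustrative only, not part of the SDK; it mirrors the grayscale example used elsewhere in these docs):

```ts
// A minimal VideoProcessor sketch: renders a grayscale copy of each frame via
// the Canvas 2D API. drawImage accepts any of the three inputFrameBuffer types.
class GrayscaleProcessor {
  processFrame(
    inputFrameBuffer: OffscreenCanvas | HTMLCanvasElement | HTMLVideoElement,
    outputFrameBuffer: HTMLCanvasElement
  ): void {
    const ctx = outputFrameBuffer.getContext('2d');
    if (!ctx) {
      return; // An exception here would only cause the frame to be dropped.
    }
    ctx.filter = 'grayscale(100%)';
    ctx.drawImage(inputFrameBuffer, 0, 0, outputFrameBuffer.width, outputFrameBuffer.height);
  }
}
```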

+/**
+ * Possible options to provide to {@link LocalVideoTrack#addProcessor} and {@link RemoteVideoTrack#addProcessor}.
+ * @typedef {object} AddProcessorOptions
+ * @property {string} [inputFrameBufferType="offscreencanvas"] - This option allows you to specify what kind of input you want to receive in your
+ * Video Processor. The default is `offscreencanvas`, which falls back to a regular `canvas` if the browser does not support it.
+ * Possible values include the following.
+ * <br/>
+ * <br/>
+ * `offscreencanvas` - Your Video Processor will receive an [OffscreenCanvas](https://developer.mozilla.org/en-US/docs/Web/API/OffscreenCanvas),
+ * which is good for canvas-related processing that can be rendered off screen.
+ * <br/>
+ * <br/>
+ * `canvas` - Your Video Processor will receive an [HTMLCanvasElement](https://developer.mozilla.org/en-US/docs/Web/API/HTMLCanvasElement).
+ * This is recommended on browsers that don't support `OffscreenCanvas`, or if you need to render the frame on the screen.
+ * <br/>
+ * <br/>
+ * `video` - Your Video Processor will receive an [HTMLVideoElement](https://developer.mozilla.org/en-US/docs/Web/API/HTMLVideoElement).
+ * Use this option if you are processing the frame using WebGL or if you only need to [draw](https://developer.mozilla.org/en-US/docs/Web/API/CanvasRenderingContext2D/drawImage)
+ * the frame directly to your output canvas.
+ * @property {string} [outputFrameBufferContextType="2d"] - The SDK needs the [context type](https://developer.mozilla.org/en-US/docs/Web/API/HTMLCanvasElement/getContext)
+ * that your Video Processor uses in order to properly generate the processed track. For example, if your Video Processor uses WebGL2 (`canvas.getContext('webgl2')`),
+ * you should set `outputFrameBufferContextType` to `webgl2`. Or if you're using Canvas 2D processing (`canvas.getContext('2d')`),
+ * you should set `outputFrameBufferContextType` to `2d`.
+ */

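For example, pairing these options with a Canvas 2D processor (a usage sketch, using the hypothetical `GrayscaleProcessor` shown earlier):

```ts
// The declared context type must match what processFrame calls getContext with:
// GrayscaleProcessor uses getContext('2d'), so declare '2d' here.
videoTrack.addProcessor(new GrayscaleProcessor(), {
  inputFrameBufferType: 'canvas',     // the processor receives an HTMLCanvasElement
  outputFrameBufferContextType: '2d', // matches getContext('2d') in processFrame
});
```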
/**
* The {@link VideoTrack}'s dimensions changed.
* @param {VideoTrack} track - The {@link VideoTrack} whose dimensions changed
2 changes: 1 addition & 1 deletion package.json
@@ -2,7 +2,7 @@
"name": "twilio-video",
"title": "Twilio Video",
"description": "Twilio Video JavaScript Library",
"version": "2.26.3-dev",
"version": "2.27.0-dev",
"homepage": "https://twilio.com",
"author": "Mark Andrus Roberts <mroberts@twilio.com>",
"contributors": [
14 changes: 11 additions & 3 deletions test/unit/spec/media/track/videoprocessoreventobserver.js
@@ -23,10 +23,12 @@ describe('VideoProcessorEventObserver', () => {
    processor = {
      _assetsPath: '/virtualbackground/',
      _blurFilterRadius: 15,
+     _debounce: true,
      _fitType: 'Cover',
      _isSimdEnabled: true,
      _maskBlurRadius: 5,
      _name: 'VirtualBackgroundProcessor',
+     _pipeline: 'WebGL2',
      _version: '1.0.0',
      _benchmark: getBenchmark(1)
    };
@@ -35,7 +37,9 @@
      captureHeight: 720,
      captureWidth: 1280,
      inputFrameRate: 24,
-     isRemoteVideoTrack: false
+     isRemoteVideoTrack: false,
+     inputFrameBufferType: 'video',
+     outputFrameBufferContextType: 'webgl2'
    };

    processorInfo = Object.assign({ processor }, captureInfo);
@@ -45,12 +49,16 @@
      blurFilterRadius: processor._blurFilterRadius,
      captureHeight: captureInfo.captureHeight,
      captureWidth: captureInfo.captureWidth,
+     debounce: processor._debounce.toString(),
      fitType: processor._fitType,
      inputFrameRate: captureInfo.inputFrameRate,
+     inputFrameBufferType: captureInfo.inputFrameBufferType,
+     outputFrameBufferContextType: captureInfo.outputFrameBufferContextType,
      isRemoteVideoTrack: captureInfo.isRemoteVideoTrack.toString(),
      isSimdEnabled: processor._isSimdEnabled.toString(),
      maskBlurRadius: processor._maskBlurRadius,
      name: processor._name,
+     pipeline: processor._pipeline,
      version: processor._version
    };

Expand Down Expand Up @@ -245,8 +253,8 @@ describe('VideoProcessorEventObserver', () => {
  describe('event data', () => {
    it('should have correct data for custom processor', () => {
      observer.emit('add', Object.assign({ processor: {} }, captureInfo));
-     const { captureHeight, captureWidth, inputFrameRate, isRemoteVideoTrack } = eventData;
-     const expected = { name: 'VideoProcessor', captureHeight, captureWidth, inputFrameRate, isRemoteVideoTrack };
+     const { captureHeight, captureWidth, inputFrameRate, isRemoteVideoTrack, inputFrameBufferType, outputFrameBufferContextType } = eventData;
+     const expected = { name: 'VideoProcessor', captureHeight, captureWidth, inputFrameRate, isRemoteVideoTrack, inputFrameBufferType, outputFrameBufferContextType };
      sinon.assert.calledOnce(listener);
      sinon.assert.calledWithExactly(listener, { name: 'add', data: expected });
    });