Vectorly AI Filters Library

Class

BackgroundFilterCore

new BackgroundFilterCore()

Create a new BackgroundFilterCore instance

Methods

async

changeBackground()

async changeBackground - Change the background used in the filter

Parameters

  • background string | HTMLImageElement | HTMLCanvasElement | ImageBitmap | ImageData <optional>
    https://files.vectorly.io/demo/videocall/virtual-background.png

    For background blur, provide the string "blur".
    For a virtual background image, provide the URL of the background image as a string, or any image source supported by createImageBitmap.
    For a transparent background, provide the string "transparent".
    An example call is sketched below.
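For illustration only, a minimal sketch of switching backgrounds at runtime, assuming filter is a BackgroundFilterCore instance that has already been loaded:

    // Sketch only: "filter" is assumed to be an already-loaded BackgroundFilterCore instance.
    async function cycleBackgrounds(filter) {
      // Blur the real background
      await filter.changeBackground('blur');

      // Use a virtual background image (any createImageBitmap-compatible source also works)
      await filter.changeBackground('https://files.vectorly.io/demo/videocall/virtual-background.png');

      // Make the background transparent
      await filter.changeBackground('transparent');
    }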

async

getOutputBitmap() → {ImageBitmap}

async getOutputBitmap - Get the output bitmap

Returns

  • ImageBitmap

    A bitmap containing the filtered frame
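A minimal sketch of grabbing a processed frame, assuming getOutputBitmap resolves to an ImageBitmap as documented above and that the filter has already been loaded and rendered; the target canvas is a placeholder:

    // Sketch only: copy the latest filtered frame onto a separate canvas.
    async function copyFilteredFrame(filter, targetCanvas) {
      const bitmap = await filter.getOutputBitmap();
      targetCanvas.width = bitmap.width;
      targetCanvas.height = bitmap.height;
      targetCanvas.getContext('2d').drawImage(bitmap, 0, 0);
      bitmap.close(); // release the bitmap's memory when done
    }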

async

load(params)

async load - Load the Background Filter module (an example call is sketched after the parameter list)

Parameters

  • params object

    Properties

    • token string <optional>

      Token used to fetch models from the server; sign up on the Vectorly dashboard to get a token

    • model string <optional>

      Model to use. Options are:
      "selfie": MediaPipe segmentation
      "selfie_v2": MediaPipe segmentation, version 2
      "webgl": WebGL implementation
      "webgl_v2": WebGL implementation, version 2

    • background string | HTMLImageElement | HTMLCanvasElement | ImageBitmap | ImageData <optional>
      https://files.vectorly.io/demo/videocall/virtual-background.png

      For background blur, provide the string "blur".
      For a virtual background image, provide the URL of the background image as a string, or any image source supported by createImageBitmap.
      For a transparent background, provide the string "transparent".

    • canvas HTMLCanvasElement

      HTML canvas element where the output is rendered to

    • inputSize object <optional>
      {w: 100, h: 100}

      Input media element size; an object with w and h properties. Can be changed later using setInputSize

    • blurRadius number <optional>
      5

      Blur radius to use, typically a value in the range [1, 10]

    • frameRate number <optional>
      30

      Frame rate used for running the virtual background filter

    • segmentationFrameRate number <optional>
      15

      Target frame rate for running segmentation

    • passthrough boolean <optional>
      false

      If set to true, calling disable passes the input directly through to the output MediaStream, so you can call disable/enable without changing the output MediaStream object. Default is false, in which case the output MediaStream stops when disable is called.

    • debug boolean <optional>
      false
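As an illustrative sketch (not an official quick-start), a load call using the parameters above might look as follows; the token, canvas element, and sizes are placeholders:

    // Sketch only: run inside an async function or ES module.
    const filter = new BackgroundFilterCore();

    await filter.load({
      token: 'YOUR_VECTORLY_TOKEN',               // from the Vectorly dashboard
      model: 'selfie_v2',                         // "selfie", "selfie_v2", "webgl" or "webgl_v2"
      background: 'blur',                         // or an image URL, or "transparent"
      canvas: document.getElementById('output'),  // HTMLCanvasElement the output is rendered to
      inputSize: { w: 1280, h: 720 },             // size of the input media element
      blurRadius: 5,
      frameRate: 30,
      segmentationFrameRate: 15,
      passthrough: false,
      debug: false
    });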

async

render()

Run the model on the currently set input image. The input image can be set using the setInput() function

async

setInput(element)

Set input texture for the network inference

Parameters

  • element HTMLVideoElement | HTMLImageElement | HTMLCanvasElement

    Input element, which can be a video, image, or canvas. Refer to the pixels parameter of texImage2D for all supported types. A combined setInput/render loop is sketched below.
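As a sketch of how setInput and render can be combined into a per-frame loop, assuming the filter has already been loaded and videoElement is a playing HTMLVideoElement:

    // Sketch only: feed a video element through the filter once per animation frame.
    function startRenderLoop(filter, videoElement) {
      async function step() {
        await filter.setInput(videoElement);  // bind the current video frame as the input texture
        await filter.render();                // run the model and draw the result to the output canvas
        requestAnimationFrame(step);
      }
      requestAnimationFrame(step);
    }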

async

setInputSize(inputSize)

Updates the input texture size of the image to be upscaled, as well as the canvas size if the upscaled image size is larger than the current canvas. A usage sketch follows the parameter list.

Parameters

  • inputSize *

    Properties

    • w int

      width of the image

    • h int

      height of the image
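A minimal sketch of keeping the input size in sync when the source resolution changes; the video element and the resize event handling are placeholder assumptions:

    // Sketch only: update the filter whenever the video's intrinsic resolution changes.
    videoElement.addEventListener('resize', async () => {
      await filter.setInputSize({
        w: videoElement.videoWidth,   // width of the image
        h: videoElement.videoHeight   // height of the image
      });
    });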