new vectorlyUpscalerCore()
Creates an instance of vectorlyUpscalerCore. The constructor initialises the Vectorly upscaler core, which provides more control over the upscale render cycle.
This class is available as vectorlyUpscaler.core from the vectorly-core.js build of the library.
Examples
<script src="https://cdn.vectorly.io/v2/latest/vectorly-core.js"> </script>
<canvas id="my-canvas" width="1280" height="720"> </canvas>
<img id="my-image" src="your-image-url">
<script>
// Instantiate the upscaler configs
const imageElem = document.getElementById("my-image")
const config = {
w: imageElem.naturalWidth,
h: imageElem.naturalHeight,
renderSize: {w: imageElem.naturalWidth*2, h: imageElem.naturalHeight*2},
canvas: document.getElementById("my-canvas"),
networkParams: {name: 'residual_5k_2x', tag: 'general', version: '0'},
token: "insert-token"
}
// Instantiate the upscaler object
const upscaler = new vectorlyUpscaler.core()
//load the network
upscaler.load(config)
upscaler.setInput(imageElem) // Sets input element
upscaler.render() // Renders to canvas
</script>
import vectorlyUpscaler from '@vectorly-io/ai-upscaler/core'
const outCanvas = document.createElement('canvas');
outCanvas.id = "my-canvas"
document.body.appendChild(outCanvas); // Add wherever the canvas goes
const imageElem = document.createElement('img');
imageElem.src = "your-image-path"
// Initialize same as before
const config = {
...
}
// Instantiate the upscaler object
const upscaler = new vectorlyUpscaler.core();
upscaler.load(config)
// render the image
requestAnimationFrame( () => {
upscaler.setInput(imageElem) // Set the input image
upscaler.render(); // Render the image
});
import vectorlyUpscaler from '@vectorly-io/ai-upscaler/core'
// Initialize same as before but this time with video width and height
const config = {
...
}
const upscaler = new vectorlyUpscaler.core();
upscaler.load(config);
const videoElem = document.createElement('video'); // create video element
videoElem.src = "your-video-path";
const inputSize = {h: 360, w: 640};
// Call updateInputResolution whenever input size is changed
upscaler.updateInputResolution(inputSize)
// Suppose we want to scale by 1.75x along x and 1.25x along y
const renderSize = {w: inputSize.w*1.75, h: inputSize.h*1.25}
// Call updateRenderResolution whenever desired output size is changed
upscaler.updateRenderResolution(renderSize)
videoElem.play();
function draw() {
upscaler.setInput(videoElem)
upscaler.render();
if(videoElem.ended){ return;}
requestAnimationFrame(() => {draw()});
}
requestAnimationFrame(() => {draw()});
Methods
createCanvas(params) → {HTMLCanvasElement}
Initialises a canvas element with the id given in params.id. If params.id is not passed, a random id is generated.
Parameters
- params (object)
  Properties
  - w (int, optional, default 100): Width of the canvas
  - h (int, optional, default 100): Height of the canvas
  - id (string, optional, default "canv-" + uuidv4()): ID of the canvas element created
Returns
- HTMLCanvasElement: The canvas element that has been created
importNetwork(frameBuffer, networkParams, networkOptions) → {BaseNeuralNetwork}
Initialises the upscaler network.
Parameters
- frameBuffer (FrameBuffer): FrameBuffer object
- networkParams (NetworkParams): Upscaler network to use for upscaling
- networkOptions (NetworkOptions): Upscaler network settings
Returns
- BaseNeuralNetwork: Upscaler network
initCanvasWebGL(params) → {WebGL2RenderingContext|WebGLRenderingContext}
Returns a WebGL context based on the config.
Parameters
- params (object)
  Properties
  - w (int): Input element (video/image/canvas) width
  - h (int): Input element height
  - float_type (string, optional, default "float16"): Floating-point precision; available: "float32"/"float16"
  - use_webgl1 (string, optional, default "false"): Flag to use WebGL1 instead of WebGL2; if set to "true", remember to pass it as a string
Returns
- WebGL2RenderingContext|WebGLRenderingContext: The WebGL context
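For reference, a typical params object for initCanvasWebGL might look like the following (values are illustrative; note that use_webgl1 takes a string, not a boolean):

```javascript
// Illustrative params for initCanvasWebGL. Both float_type and
// use_webgl1 take string values, per the documented defaults.
const glParams = {
  w: 640,                  // input element width
  h: 360,                  // input element height
  float_type: "float16",   // or "float32" for higher precision
  use_webgl1: "false"      // "true" forces a WebGL1 context; must be a string
};
```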
initFrameBufferWebGL(gl) → {FrameBuffer}
Returns a newly initialised FrameBuffer object.
Parameters
- gl (WebGLContext): WebGL context
Returns
- FrameBuffer: FrameBuffer object
initRenderer(network, frameBuffer, renderSize) → {Renderer}
Initialises the Renderer object.
Parameters
- network (BaseNeuralNetwork): Upscale network
- frameBuffer (FrameBuffer): FrameBuffer object
- renderSize (object): Output render window size, {w: renderWidth, h: renderHeight}
  Properties
  - w (int): Render window width
  - h (int): Render window height
Returns
- Renderer: Renderer object
load(config, networkOptions)
Loads the upscaler network with the given configuration.
Parameters
- config (object): The following configurations are required
  Properties
  - token (string): Required. Token used to fetch models from the server; sign up on the Upscaler dashboard to get a token
  - w (int): Input element width
  - h (int): Input element height
  - renderSize (object)
    - w (int): Desired output element (video/image/canvas) render width
    - h (int): Desired output element (video/image/canvas) render height
  - networkParams (NetworkParams): Upscaler network to use for upscaling
  - canvas (HTMLCanvasElement): HTML canvas element where the upscaled output is rendered
  - float_type (string, optional, default "float16"): Floating-point precision; available: "float32"/"float16"
  - use_webgl1 (string, optional, default "false"): Flag to use WebGL1 instead of WebGL2; set to "true" to use WebGL 1.0, and remember to pass it as a string
- networkOptions (NetworkOptions): Upscaler network settings
on(event, callback)
Registers an event listener. Event listeners can also be chained, as shown in the example.
Parameters
- event (string): Name of the event to listen to
- callback (function): Function to be called when the event fires
Example
upscaler
  .on('load', function () {
    console.log("Upscaler initialized"); })
  .on('error', function () {
    console.log("Failed to initialize"); })
  .on('start', function () {
    console.log("Starting upscaling"); })
  .on('stop', function () {
    console.log("Stopping upscaling"); })
render()
Runs the model on the currently set input image. The input image can be set using the setInput() function.
setInput(element)
Sets the input texture for network inference.
Parameters
- element (HTMLVideoElement|HTMLImageElement|HTMLCanvasElement): Input element, which can be a video, image, or canvas. Refer to the pixels parameter of texImage2D for all supported types.
updateInputResolution(inputSize)
Updates the input texture size of the image to be upscaled, as well as the canvas size if the upscaled image is larger than the current canvas.
Parameters
- inputSize (object)
  Properties
  - w (int): Width of the image
  - h (int): Height of the image
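Since updateInputResolution should be called whenever the input size changes, one way to avoid redundant calls per frame is a small change-detection wrapper. This is a sketch; makeResolutionTracker is a hypothetical helper, not part of the library:

```javascript
// Hypothetical helper: invoke onChange (e.g. upscaler.updateInputResolution)
// only when the {w, h} size actually differs from the last seen value.
function makeResolutionTracker(onChange) {
  let last = { w: 0, h: 0 };
  return function (size) {
    if (size.w === last.w && size.h === last.h) return false; // unchanged
    last = { w: size.w, h: size.h };
    onChange(last);
    return true;
  };
}
```

Usage sketch: `const track = makeResolutionTracker(s => upscaler.updateInputResolution(s));` then call `track({w: video.videoWidth, h: video.videoHeight})` each frame.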
updateRenderResolution(renderSize)
Updates the output render resolution of the image, as well as the canvas size if the updated render size is larger than the current canvas.
Parameters
- renderSize (object): Width and height of the render window. If not passed, the render window resolution is not changed.
  Properties
  - w (int): Width of the render window
  - h (int): Height of the render window
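To derive a renderSize from an input size and per-axis scale factors, as in the video example earlier, a small helper like the following can be used (hypothetical, not part of the library):

```javascript
// Hypothetical helper: compute a render window size from an input size and
// independent x/y scale factors, rounding to whole pixels.
function scaledRenderSize(inputSize, scaleX, scaleY) {
  return {
    w: Math.round(inputSize.w * scaleX),
    h: Math.round(inputSize.h * scaleY)
  };
}

const renderSize = scaledRenderSize({ w: 640, h: 360 }, 1.75, 1.25);
// renderSize is { w: 1120, h: 450 }
```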