Video Processing API

Scalable video REST API to convert, resize, optimize, and compress videos for the web.

Example URL structure: https://upcdn.io/W142hJk/video/example.mp4?h=1080

  /W142hJk/      Account
  video          API
  /example.mp4   File Path
  ?h=1080        Parameters

1 Upload your video

First, your video must be uploaded to Bytescale or accessible by Bytescale:

  • Use the Bytescale Dashboard to upload a video manually.

  • Use the Upload Widget, Bytescale SDKs, or Bytescale API to upload a video programmatically.

  • Use our external storage options to process external videos.

2 Build your video URL

Build a video processing URL:

2a. Get the raw URL for your file:

https://upcdn.io/W142hJk/raw/example.mp4

2b. Replace "raw" with "video":

https://upcdn.io/W142hJk/video/example.mp4

2c. Add querystring parameters to control the output:

https://upcdn.io/W142hJk/video/example.mp4?h=1080
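If you build these URLs programmatically, a small helper keeps the account ID, file path, and parameters separate. This is a minimal sketch in plain JavaScript (not an official Bytescale SDK function; the buildVideoUrl name is illustrative only):

function buildVideoUrl(accountId, filePath, params) {
  // e.g. buildVideoUrl("W142hJk", "/example.mp4", { h: "1080" })
  //      => "https://upcdn.io/W142hJk/video/example.mp4?h=1080"
  const query = new URLSearchParams(params).toString();
  return `https://upcdn.io/${accountId}/video${filePath}` + (query ? `?${query}` : "");
}

Note: for parameters that repeat (e.g. multiple h values for HLS variants, described below), pass an array of [key, value] pairs to URLSearchParams so the parameter order is preserved.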

3 Watch your video

Watch your video by navigating to the URL from step 2.

By default, your video will be encoded to H.264 at 30 FPS using the input video's dimensions.

The default HTTP response will be an HTML webpage with an embedded video player. This is for debug purposes only: developers are expected to override this behavior by specifying an f option when embedding videos into their webpages and apps.

Example #1: Embedding a video

To embed a video in a webpage using Video.js:

<!DOCTYPE html>
<html>
  <head>
    <link href="https://unpkg.com/video.js@7/dist/video-js.min.css" rel="stylesheet">
    <script src="https://unpkg.com/video.js@7/dist/video.min.js"></script>
    <style type="text/css">
      .video-container {
        height: 316px;
        max-width: 600px;
      }
    </style>
  </head>
  <body>
    <div class="video-container">
      <video-js
        class="vjs-fill vjs-big-play-centered"
        controls
        preload="auto"
        poster="https://upcdn.io/W142hJk/image/example.mp4?input=video&f=webp&f2=jpeg">
        <p class="vjs-no-js">To view this video please enable JavaScript.</p>
      </video-js>
    </div>
    <script>
      var vid = document.querySelector('video-js');
      var player = videojs(vid, {responsive: true});
      player.on('loadedmetadata', function() {
        // Begin playing from the start of the video. (Required for 'f=hls-h264-rt'.)
        player.currentTime(player.seekable().start(0));
      });
      player.src({
        src: 'https://upcdn.io/W142hJk/video/example.mp4!f=hls-h264-rt&h=480&h=1080',
        type: 'application/x-mpegURL'
      });
    </script>
  </body>
</html>

The f=hls-h264-rt output format is designed to reduce the wait time for your viewers when the given video has not been transcoded before. Like the other output formats, this video format incurs an initial delay while transcoding starts. However, unlike the other formats, once transcoding begins the video will be streamed to viewers during transcoding. As with the other formats, once transcoded, the resulting video will be cached and will not need to be transcoded again.

Example #2: Creating video thumbnails

To create a video thumbnail (also known as a "video poster image"):

  1. Replace /raw/ with /image/ in your video's URL.

  2. Use the Image Processing API's querystring parameters to customize your video's thumbnail.
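For example, the following tag embeds a thumbnail generated by the Image Processing API. The input=video, f=webp and f2=jpeg parameters match the poster URL used in Example #1; any additional sizing parameters come from the Image Processing API documentation:

<img
  src="https://upcdn.io/W142hJk/image/example.mp4?input=video&f=webp&f2=jpeg"
  alt="Video thumbnail">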

Example #3: Creating MP4 videos (most popular)

MP4 AVC/H.264 output (f=mp4-h264) is the cheapest, fastest, and simplest option for video transcoding.

To create an MP4 video file:

  1. Replace /raw/ with /video/ in the video's URL, and then append ?f=mp4-h264 to the URL.

  2. Navigate to the URL (i.e. request the URL using a simple GET request).

  3. Wait for status: "Succeeded" in the JSON response.

  4. The result will contain a URL to the MP4 video:

https://upcdn.io/W142hJk/video/example.mp4?f=mp4-h264
{
  "jobUrl": "https://api.bytescale.com/v2/accounts/W142hJk/jobs/ProcessFileJob/01H3211XMV1VH829RV697VE3WM",
  "jobDocs": "https://www.bytescale.com/docs/job-api/GetJob",
  "jobId": "01H3211XMV1VH829RV697VE3WM",
  "jobType": "ProcessFileJob",
  "accountId": "W142hJk",
  "created": 1686916626075,
  "lastUpdated": 1686916669389,
  "status": "Succeeded",
  "summary": {
    "result": {
      "type": "Artifact",
      "artifact": "/video.mp4",
      "artifactUrl": "https://upcdn.io/W142hJk/video/example.mp4!f=mp4-h264&a=/video.mp4"
    }
  }
}
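The transcode job runs asynchronously, so a simple approach is to re-request the same URL until the JSON reports "Succeeded". This is a minimal polling sketch in JavaScript (not an official SDK helper); the 5-second interval is illustrative:

async function waitForArtifact(url, intervalMs = 5000) {
  for (;;) {
    const job = await (await fetch(url)).json();  // the same GET request as above
    if (job.status === "Succeeded") {
      return job.summary.result.artifactUrl;      // URL of the finished MP4
    }
    // (A production version would also stop on a failure status and time out.)
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
}

// waitForArtifact("https://upcdn.io/W142hJk/video/example.mp4?f=mp4-h264").then(console.log);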

Example #4: Creating WebM videos

WebM output (f=webm-vp8 and f=webm-vp9) takes considerably longer to transcode than MP4 and HLS. However, VP9 may offer a better video quality to file size ratio than MP4 and HLS encoded with H.264. Like H.264, VP9 is supported by most browsers.

To create a WebM video file:

  1. Replace /raw/ with /video/ in the video's URL, and then append ?f=webm-vp9 to the URL.

  2. Navigate to the URL (i.e. request the URL using a simple GET request).

  3. Wait for status: "Succeeded" in the JSON response.

  4. The result will contain a URL to the WebM video:

https://upcdn.io/W142hJk/video/example.mp4?f=webm-vp9
{
  "jobUrl": "https://api.bytescale.com/v2/accounts/W142hJk/jobs/ProcessFileJob/01H3211XMV1VH829RV697VE3WM",
  "jobDocs": "https://www.bytescale.com/docs/job-api/GetJob",
  "jobId": "01H3211XMV1VH829RV697VE3WM",
  "jobType": "ProcessFileJob",
  "accountId": "W142hJk",
  "created": 1686916626075,
  "lastUpdated": 1686916669389,
  "status": "Succeeded",
  "summary": {
    "result": {
      "type": "Artifact",
      "artifact": "/video.webm",
      "artifactUrl": "https://upcdn.io/W142hJk/video/example.mp4!f=webm-vp9&a=/video.webm"
    }
  }
}
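Once both a WebM and an MP4 output have finished transcoding, you can offer them together and let the browser pick the first format it supports. The source URLs below reuse the artifactUrl values from Examples #3 and #4 (for illustration only):

<video controls width="600">
  <source src="https://upcdn.io/W142hJk/video/example.mp4!f=webm-vp9&a=/video.webm" type="video/webm">
  <source src="https://upcdn.io/W142hJk/video/example.mp4!f=mp4-h264&a=/video.mp4" type="video/mp4">
  <p>Your browser does not support the video tag.</p>
</video>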

Example #5: Creating HLS videos with multiple resolutions

HLS output (f=hls-h264 and f=hls-h265) reduces bandwidth by providing multiple bitrates (ABR) for devices to switch between. Only H.264 is widely supported by browsers.

To create an HTTP Live Streaming (HLS) video file:

  1. Replace /raw/ with /video/ in the video's URL, and then append ?f=hls-h264 to the URL.

     You can create adaptive bitrate (ABR) videos by specifying multiple groups of resolution and/or bitrate parameters. The end-user's video player will automatically switch to the most appropriate variant during playback.

     You can specify up to 10 variants per video. Each variant's parameters must be adjacent on the querystring. For example: h=480&q=6&h=1080&q=8 specifies 2 variants, whereas h=480&h=1080&q=6&q=8 specifies 3 variants (which would most likely be a mistake). You can add next=true between groups of parameters to forcefully split them into separate variants.

  2. Navigate to the URL (i.e. request the URL using a simple GET request).

  3. Wait for status: "Succeeded" in the JSON response.

  4. The result will contain a URL to the HTTP Live Streaming (HLS) video:

https://upcdn.io/W142hJk/video/example.mp4?f=hls-h264&h=480&q=6&h=1080&q=8
{
  "jobUrl": "https://api.bytescale.com/v2/accounts/W142hJk/jobs/ProcessFileJob/01H3211XMV1VH829RV697VE3WM",
  "jobDocs": "https://www.bytescale.com/docs/job-api/GetJob",
  "jobId": "01H3211XMV1VH829RV697VE3WM",
  "jobType": "ProcessFileJob",
  "accountId": "W142hJk",
  "created": 1686916626075,
  "lastUpdated": 1686916669389,
  "status": "Succeeded",
  "summary": {
    "result": {
      "type": "Artifact",
      "artifact": "/video.m3u8",
      "artifactUrl": "https://upcdn.io/W142hJk/video/example.mp4!f=hls-h264&h=480&q=6&h=1080&q=8&a=/video.m3u8"
    }
  }
}
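To play the finished HLS output in browsers without native HLS support, use an HLS-capable player such as Video.js (see Example #1) or hls.js. This is a minimal hls.js sketch; the manifest URL is the artifactUrl from the JSON response above, and the choice of player is an assumption, not a Bytescale requirement:

<script src="https://cdn.jsdelivr.net/npm/hls.js@1"></script>
<video id="player" controls></video>
<script>
  var manifestUrl = "https://upcdn.io/W142hJk/video/example.mp4!f=hls-h264&h=480&q=6&h=1080&q=8&a=/video.m3u8";
  var video = document.getElementById("player");
  if (Hls.isSupported()) {
    var hls = new Hls();
    hls.loadSource(manifestUrl);  // load the master playlist containing both variants
    hls.attachMedia(video);
  } else if (video.canPlayType("application/vnd.apple.mpegurl")) {
    video.src = manifestUrl;      // Safari plays HLS natively
  }
</script>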

Example #6: Creating HLS videos with real-time transcoding

Real-time HLS output (f=hls-h264-rt and f=hls-h265-rt) creates the same output video as regular HLS (f=hls-h264 and f=hls-h265) except the video is returned while it's being transcoded. This option is only recommended if video playback is required very shortly after uploading the input video; otherwise, regular HLS is advised for its simpler asynchronous jobs. Only H.264 is widely supported by browsers.

To create an HTTP Live Streaming (HLS) video with real-time transcoding:

  1. Complete the steps from creating an HLS video.

  2. Replace f=hls-h264 with f=hls-h264-rt.

  3. The result will be an M3U8 file that's dynamically updated as new segments finish transcoding:

https://upcdn.io/W142hJk/video/example.mp4?f=hls-h264-rt
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-INDEPENDENT-SEGMENTS
#EXT-X-STREAM-INF:BANDWIDTH=2038521,AVERAGE-BANDWIDTH=2038521,CODECS="avc1.4d4032,mp4a.40.2",RESOLUTION=2048x1080,FRAME-RATE=30.000
example.mp4!f=hls-h264-rt&a=/0f/manifest.m3u8

Example #7: Extracting video metadata

The Video Metadata API allows you to extract the video's duration, resolution, frame rate, and more.

To extract a video's duration and resolution using JavaScript:

<!DOCTYPE html>
<html>
  <body>
    <p>Please wait, loading video metadata...</p>
    <script>
      async function getVideoDurationAndDimensions() {
        const response = await fetch("https://upcdn.io/W142hJk/video/example.mp4?f=meta");
        const jsonData = await response.json();
        const videoTrack = (jsonData.tracks ?? []).find(x => x.type === "Video");
        if (videoTrack === undefined) {
          alert("Cannot find video metadata.");
        } else {
          alert([
            `Duration (seconds): ${videoTrack.duration}`,
            `Width (pixels): ${videoTrack.width}`,
            `Height (pixels): ${videoTrack.height}`,
          ].join("\n"));
        }
      }
      getVideoDurationAndDimensions().then(() => {}, e => alert(`Error: ${e}`));
    </script>
  </body>
</html>

Supported Inputs

The Video Processing API can generate video outputs from the following input file types:

File Extension(s) | Video Container | Video Codecs

.gif | No Container | GIF 87a, GIF 89a
.m2v, .mpeg, .mpg | No Container | AVC (H.264), DV/DVCPRO, HEVC (H.265), MPEG-1, MPEG-2
.3g2 | 3G2 | AVC (H.264), H.263, MPEG-4 part 2
.3gp | 3GP | AVC (H.264), H.263, MPEG-4 part 2
.wmv | Advanced Systems Format (ASF) | VC-1
.flv | Adobe Flash | AVC (H.264), Flash 9 File, H.263
.avi | Audio Video Interleave (AVI) | Uncompressed, Canopus HQ, DivX/Xvid, DV/DVCPRO, MJPEG
.m3u8 | HLS (MPEG-2 TS segments) | AVC (H.264), HEVC (H.265), MPEG-2
.mxf | Interoperable Master Format (IMF) | Apple ProRes, JPEG 2000 (J2K)
.mxf | Material Exchange Format (MXF) | Uncompressed, AVC (H.264), AVC Intra 50/100, Apple ProRes (4444, 4444 XQ, 422, 422 HQ, LT, Proxy), DV/DVCPRO, DV25, DV50, DVCPro HD, JPEG 2000 (J2K), MPEG-2, Panasonic P2, SonyXDCam, SonyXDCam MPEG-4 Proxy, VC-3
.mkv | Matroska | AVC (H.264), MPEG-2, MPEG-4 part 2, PCM, VC-1
.mpg, .mpeg, .m2p, .ps | MPEG Program Streams (MPEG-PS) | MPEG-2
.m2t, .ts, .tsv | MPEG Transport Streams (MPEG-TS) | AVC (H.264), HEVC (H.265), MPEG-2, VC-1
.dat, .m1v, .mpeg, .mpg, .mpv | MPEG-1 System Streams | MPEG-1, MPEG-2
.mp4, .mpeg4 | MPEG-4 | Uncompressed, DivX/Xvid, H.261, H.262, H.263, AVC (H.264), AVC Intra 50/100, HEVC (H.265), JPEG 2000, MPEG-2, MPEG-4 part 2, VC-1
.mov, .qt | QuickTime | Uncompressed, Apple ProRes (4444, 4444 XQ, 422, 422 HQ, LT, Proxy), DV/DVCPRO, DivX/Xvid, H.261, H.262, H.263, AVC (H.264), AVC Intra 50/100, HEVC (H.265), JPEG 2000 (J2K), MJPEG, MPEG-2, MPEG-4 part 2, QuickTime Animation (RLE)
.webm | WebM | VP8, VP9

Bytescale supports up to 16384x16384 inputs for most video codecs. AVC (H.264) inputs are limited to 16384x8192.

Some codec profiles are not supported by Bytescale. It is worth noting that AVC (H.264) High 4:4:4 Predictive is currently not supported. We aim to provide a full list of supported profiles in the near future.

Video Metadata API

Use the Video Metadata API to extract the duration, resolution, FPS, and other information from a video.

Instructions:

  1. Replace raw with video in your video URL.

  2. Append ?f=meta to the URL.

  3. The result will be a JSON payload describing the video's tracks (see below).

Example video metadata JSON response:

{
  "tracks": [
    {
      "bitRate": 2500000,
      "chromaSubsampling": "4:2:0",
      "codec": "AVC",
      "codecId": "avc1",
      "colorSpace": "YUV",
      "duration": 766.08,
      "frameCount": 19152,
      "frameRate": 25,
      "height": 576,
      "rotation": 0,
      "scanType": "Progressive",
      "type": "Video",
      "width": 720
    },
    {
      "bitRate": 159980,
      "bitRateMode": "VBR",
      "channels": 2,
      "codec": "AAC",
      "codecId": "mp4a-40-2",
      "frameCount": 35875,
      "frameRate": 46.875,
      "samplingRate": 48000,
      "title": "Stereo",
      "type": "Audio"
    }
  ]
}

Video Transcoding API

Use the Video Transcoding API to encode your videos into a specific format.

Use the f parameter to change the output format of the video:

Format | Transcoding | Compression | Browser Support

f=webm-vp8 | medium | good | all (except IE11)
f=webm-vp9 | slow | good | all (except IE11)
f=mp4-h264 (recommended) | fast | good | all
f=mp4-h265 | fast | excellent | limited
f=hls-h264 | fast | good | all (requires SDK)
f=hls-h265 | fast | excellent | none
f=hls-h264-rt | very fast | good | all (requires SDK)
f=hls-h265-rt | very fast | excellent | none

f=webm-vp8

Transcodes the video to WebM (VP8 codec).

Caveat: WebM is slower at transcoding than MP4 and HLS.

Response: JSON for an asynchronous transcode job. The JSON will contain the URL to the WebM file on job completion.

Browser support: all browsers (except Internet Explorer 11 and below)

f=webm-vp9

Transcodes the video to WebM (VP9 codec).

Caveat: WebM is slower at transcoding than MP4 and HLS.

Response: JSON for an asynchronous transcode job. The JSON will contain the URL to the WebM file on job completion.

Browser support: all browsers (except Internet Explorer 11 and below)

f=mp4-h264

Transcodes the video to MPEG-4 (H.264/AVC codec).

Response: JSON for an asynchronous transcode job. The JSON will contain the URL to the MP4 file on job completion.

Browser support: all browsers

f=mp4-h265

Transcodes the video to MPEG-4 (H.265/HEVC codec).

Response: JSON for an asynchronous transcode job. The JSON will contain the URL to the MP4 file on job completion.

Browser support: limited (only Chrome and Edge)

f=hls-h264

Transcodes the video to HLS (H.264/AVC codec).

Response: JSON for an asynchronous transcode job. The JSON will contain the URL to the M3U8 file on job completion.

Browser support: all browsers (requires a video player SDK with HLS support, like Video.js).

f=hls-h265

Transcodes the video to HLS (H.265/HEVC codec).

Response: JSON for an asynchronous transcode job. The JSON will contain the URL to the M3U8 file on job completion.

Browser support: none

f=hls-h264-rt

Transcodes the video to HLS (H.264/AVC codec) and returns the video while it's being transcoded.

This output format is designed to reduce the wait time for your viewers when the given video has not been transcoded before. Like the other output formats, this video format incurs an initial delay while transcoding starts. However, unlike the other formats, once transcoding begins the video will be streamed to viewers during transcoding. As with the other formats, once transcoded, the resulting video will be cached and will not need to be transcoded again.

Caveat: This format introduces challenges for some video players and video SDKs due to the use of a live M3U8 playlist during transcoding. As such, we generally recommend using one of the asynchronous formats (which don't end with -rt) for a simpler implementation.

Response: M3U8

Browser support: all browsers (requires a video player SDK with HLS support, like Video.js)

f=hls-h265-rt

Transcodes the video to HLS (H.265/HEVC codec) and returns the video while it's being transcoded.

This output format is designed to reduce the wait time for your viewers when the given video has not been transcoded before. Like the other output formats, this video format incurs an initial delay while transcoding starts. However, unlike the other formats, once transcoding begins the video will be streamed to viewers during transcoding. As with the other formats, once transcoded, the resulting video will be cached and will not need to be transcoded again.

Caveat: This format introduces challenges for some video players and video SDKs due to the use of a live M3U8 playlist during transcoding. As such, we generally recommend using one of the asynchronous formats (which don't end with -rt) for a simpler implementation.

Response: M3U8

Browser support: none

f=html-h264

Returns a webpage with an embedded video player that's configured to play the requested video in H.264.

Useful for sharing links to videos and for previewing/debugging video transformation parameters.

Response: HTML

Browser support: all browsers

This is the default value.

f=meta

Returns metadata for the video (resolution, dimensions, duration, FPS, etc.).

See the Video Metadata API docs for more information.

Response: JSON (video metadata)

fps=<number>

Sets the output video's frame rate.

Supports decimal places.

mute=<bool>

Removes the audio track from the generated video.

Tip: you can set mute=true to hide the Picture in Picture (PiP) button added by Firefox for embedded videos.

Default: false

rt=auto

If this flag is present, the video variant expressed by the adjacent parameters on the querystring (e.g. q=6&rt=true&q=8&rt=auto) will be returned to the user while it's being transcoded only if the transcode rate is faster than the playback rate.

Only supported by f=hls-h264-rt, f=hls-h265-rt and f=html-h264.

This is the default value.

rt=false

If this flag is present, the video variant expressed by the adjacent parameters on the querystring (e.g. q=6&rt=true&q=8&rt=false) will never be returned to the user while it's being transcoded.

Use this option as a performance optimization (instead of using rt=auto) when you know the variant will always transcode at a slower rate than its playback rate:

When rt=auto is used, the initial HTTP request for the M3U8 master manifest will block until the first few segments of each rt=auto and rt=true variant have been transcoded, before returning the initial M3U8 playlist.

In general, you want to exclude slow-transcoding HLS variants to reduce this latency.

If none of the HLS variants have rt=true or rt=auto then the fastest variant to transcode will be returned during transcoding.

Only supported by f=hls-h264-rt, f=hls-h265-rt and f=html-h264.

rt=true

If this flag is present, the video variant expressed by the adjacent parameters on the querystring (e.g. q=6&rt=true&q=8&rt=auto) will always be returned to the user while it's being transcoded.

Only supported by f=hls-h264-rt, f=hls-h265-rt and f=html-h264.
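For example (illustrative values), the following URL requests two HLS variants: the 480p variant is always streamed in real time during transcoding, while the 1080p variant is excluded from real-time playback:

https://upcdn.io/W142hJk/video/example.mp4?f=hls-h264-rt&h=480&rt=true&h=1080&rt=false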

Video Compression API

Use the Video Compression API to control the file size of your video.

q=<int>

Sets the video quality.

Only supported by h264 and h265 formats. Also requires brm=qbr (default).

For all other formats (f) and bitrate modes (brm): use bitrate (br) to adjust the video quality.

Supported values:

11: lowest quality (and lowest file size).

100: highest quality (and highest file size).

Supported values (deprecated):

1: lowest quality (and lowest file size).

10: highest quality (and highest file size).

Please note: support for the deprecated 1 to 10 range will be dropped in the future. Impacted accounts will be notified prior to this change.

Default: 80

p=s

Single-pass encoding (fastest).

This is the default value.

p=shq

Single-pass encoding (higher quality).

Only supported by h264 and h265 formats.

p=mhq

Multi-pass encoding (highest quality).

Only supported by h264 and h265 formats.

Professional pricing applies (see video pricing).

brm=qbr

Makes the output video track use a quality-defined bitrate (QBR).

The bitrate will be automatically adjusted based on the given quality score (see q).

Recommended for most cases, except where you need control over the resulting file size.

This is the default value for h264 and h265 formats.

brm=vbr

Makes the output video track use a variable bitrate (VBR).

More complex scenes will use a higher bitrate, whereas less complex scenes will use a lower bitrate.

This is the default value for all other formats.

brm=cbr

Makes the output video track use a constant bitrate (CBR).

br=<int>

Sets the output video bitrate (kbps):

If brm=qbr then br will be interpreted as a maximum bitrate and q will dictate the mean bitrate.

If brm=vbr then br will be interpreted as a mean bitrate.

If brm=cbr then br will be interpreted as a constant bitrate.

Accepts any value between 1 and 100000.
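For example (illustrative values), the following URLs combine the compression parameters above with the MP4 output format:

Quality-defined bitrate (the default), quality 85, with a 6000 kbps ceiling:

https://upcdn.io/W142hJk/video/example.mp4?f=mp4-h264&q=85&br=6000

Constant bitrate of 3000 kbps:

https://upcdn.io/W142hJk/video/example.mp4?f=mp4-h264&brm=cbr&br=3000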

abr=<int>

Sets the output audio bitrate (kbps).

Supported values for f=mp4-h264, f=hls-h264, f=hls-h264-rt and f=html-h264:

16, 20, 24, 28, 32, 40, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256, 288, 320, 384, 448, 512, 576

Supported values for f=webm-vp8 and f=webm-vp9:

32, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112, 120, 128, 136, 144, 152, 160, 168, 176, 184, 192

Default: 96

if=<int>

Removes noise and compression artifacts from the input video.

-1: automatic noise reduction (default)

0: disable noise reduction

1-5: enable noise reduction (1 = lowest, 5 = highest)

bo=<int>

Reduces noise and imperceptible signals, enhancing visual quality, while reducing file size.

Most effective on videos with high noise, like those shot in low light.

-1: automatic optimization

0: disable optimization (default)

1-3: enable optimization (1 = lowest, 3 = highest)

Professional pricing applies (see video pricing).

qz=<int>

Quantization reduces file size by rounding off less important details in the video.

-1: automatic quantization (default)

0: disable quantization

1-5: enable quantization (1 = lowest, 5 = highest)

gop=<int>

Sets the GOP size in frames. This is the interval between IDR-frames (frames with full picture information).

This is an advanced setting.

Manually setting this field can have the following effects:

Longer GOPs produce smaller file sizes, better quality for static scenes, but slower video seeking and random access.

Shorter GOPs produce larger file sizes, better quality for dynamic scenes, and faster video seeking and random access.

Bytescale automatically sets this value for you (by default).

This value is in frames.

rf=<int>

Sets the number of frames that can be referenced by B-frames and P-frames.

This is an advanced setting.

Manually setting this field can have the following effects:

More reference frames can increase video compression and quality.

Fewer reference frames can accelerate encoding and also reduce decoding effort on the user's device.

Supported values:

-1: automatic (default)

1-6: manual reference frame count

bf=<int>

Sets the number of B-frames between reference frames (P-frames and I-frames).

This is an advanced setting.

Manually setting this field can have the following effects:

More B-frames can increase video compression and quality.

Fewer B-frames can accelerate encoding and also reduce decoding effort on the user's device.

Supported values:

-1: automatic (default)

0-7: manual B-frame count

sd=<bool>

Inserts I-frames on scene changes. I-frames contain full frame information, so generally enhance video quality when inserted at scene changes.

This is an advanced setting.

Manually setting this field can have the following effects:

If true (default) then video quality is improved for most video types, although the video's file size may be larger.

If false then video quality may improve for certain video types while file size should also be lower.

Default: true

Video Resizing API

Use the Video Resizing API to resize videos to a different size.

w=<int>

Width to resize the video to.

wp=<int>

Width override parameter for portrait videos.

If specified, allows you to use w for landscape videos and wp for portrait videos.

If not specified, then w will be used for all videos.

h=<int>

Height to resize the video to.

hp=<int>

Height override parameter for portrait videos.

If specified, allows you to use h for landscape videos and hp for portrait videos.

If not specified, then h will be used for all videos.

sh=<int>

Sets the video sharpness to use when resizing the video:

0 is the softest.

100 is the sharpest.

Default: 50

fit=crop

Resizes the video to the given dimensions (see: w and h).

The resulting video may be cropped in one dimension to preserve the aspect ratio of the original video.

The cropped edges are determined by the crop parameter.

Resulting video size: = w x h

Aspect ratio preserved: yes

Cropping: yes
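For example (illustrative values), the following URL produces an exact 1280x720 MP4 by cropping toward the center of the frame (see the crop parameter below):

https://upcdn.io/W142hJk/video/example.mp4?f=mp4-h264&w=1280&h=720&fit=crop&crop=center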

fit=enlarge

Enlarges the video to the given dimensions (see: w and h) but won't shrink videos that already exceed the dimensions.

If enlargement occurs, the video will be enlarged until at least one dimension is equal to the given dimensions, while the other dimension will be ≤ the given dimensions.

Resulting video size: ≥ w | ≥ h

Aspect ratio preserved: yes

Cropping: no

fit=enlarge-cover

Enlarges the video to the given dimensions (see: w and h) but won't shrink videos that already exceed the dimensions.

If enlargement occurs, the resulting video's dimensions will be ≥ the given dimensions.

Resulting video size: ≥ w & ≥ h

Aspect ratio preserved: yes

Cropping: no

fit=height

Resizes the video to the given height (see: h).

Width will be automatically set to preserve the aspect ratio of the original video.

Resulting video size: = h

Aspect ratio preserved: yes

Cropping: no

fit=max

Resizes the video to the given dimensions (see: w and h).

The resulting video may be smaller in one dimension, while the other will match the given dimensions exactly.

Resulting video size: (≤ w & = h) | (= w & ≤ h)

Aspect ratio preserved: yes

Cropping: no

fit=min

Resizes the video to the given dimensions (see: w and h).

The resulting video may be larger in one dimension, while the other will match the given dimensions exactly.

Resulting video size: (≥ w & = h) | (= w & ≥ h)

Aspect ratio preserved: yes

Cropping: no

fit=shrink

Shrinks the video to the given dimensions (see: w and h) but won't enlarge videos that are already below the dimensions.

If shrinking occurs, the resulting video's dimensions will be ≤ the given dimensions.

Resulting video size: ≤ w & ≤ h

Aspect ratio preserved: yes

Cropping: no

fit=shrink-cover

Shrinks the video to the given dimensions (see: w and h) but won't enlarge videos that are already below the dimensions.

If shrinking occurs, the video will be shrunk until at least one dimension is equal to the given dimensions, while the other dimension will be ≥ the given dimensions.

Resulting video size: ≤ w | ≤ h

Aspect ratio preserved: yes

Cropping: no

fit=stretch

Resizes the video to the given dimensions, stretching to fit if required (see: w and h).

Resulting video size: = w x h

Aspect ratio preserved: no

Cropping: no

fit=width

Resizes the video to the given width (see: w).

Height will be automatically set to preserve the aspect ratio of the original video.

Resulting video size: = w

Aspect ratio preserved: yes

Cropping: no

crop=bottom

Automatically crops to the bottom of the video.

The crop is performed by removing pixels evenly from the left and right of the video, or from the top of the video, but never both.

To use this parameter, you must set fit=crop or leave fit unspecified.

For manual cropping: use crop-x, crop-y, crop-w and crop-h instead of crop.

crop=bottom-left

Automatically crops to the bottom-left corner of the video.

The crop is performed by removing pixels from the top or right of the video, but never both.

To use this parameter, you must set fit=crop or leave fit unspecified.

For manual cropping: use crop-x, crop-y, crop-w and crop-h instead of crop.

crop=bottom-right

Automatically crops to the bottom-right corner of the video.

The crop is performed by removing pixels from the top or left of the video, but never both.

To use this parameter, you must set fit=crop or leave fit unspecified.

For manual cropping: use crop-x, crop-y, crop-w and crop-h instead of crop.

crop=center

Automatically crops to the center of the video.

The crop is performed by removing pixels evenly from both sides of one axis, while leaving the other axis uncropped.

To use this parameter, you must set fit=crop or leave fit unspecified.

For manual cropping: use crop-x, crop-y, crop-w and crop-h instead of crop.

crop=left

Automatically crops to the left of the video.

The crop is performed by removing pixels evenly from the top and bottom of the video, or from the right of the video, but never both.

To use this parameter, you must set fit=crop or leave fit unspecified.

For manual cropping: use crop-x, crop-y, crop-w and crop-h instead of crop.

crop=right

Automatically crops to the right of the video.

The crop is performed by removing pixels evenly from the top and bottom of the video, or from the left of the video, but never both.

To use this parameter, you must set fit=crop or leave fit unspecified.

For manual cropping: use crop-x, crop-y, crop-w and crop-h instead of crop.

crop=top

Automatically crops to the top of the video.

The crop is performed by removing pixels evenly from the left and right of the video, or from the bottom of the video, but never both.

To use this parameter, you must set fit=crop or leave fit unspecified.

For manual cropping: use crop-x, crop-y, crop-w and crop-h instead of crop.

crop=top-left

Automatically crops to the top-left corner of the video.

The crop is performed by removing pixels from the bottom or right of the video, but never both.

To use this parameter, you must set fit=crop or leave fit unspecified.

For manual cropping: use crop-x, crop-y, crop-w and crop-h instead of crop.

crop=top-right

Automatically crops to the top-right corner of the video.

The crop is performed by removing pixels from the bottom or left of the video, but never both.

To use this parameter, you must set fit=crop or leave fit unspecified.

For manual cropping: use crop-x, crop-y, crop-w and crop-h instead of crop.

Video Trimming API

Use the Video Trimming API to remove footage from the start and/or end of a video.

ts=<number>

Starts the video at the specified time in the input video, in seconds, and removes all frames before that point.

If ts exceeds the length of the video, then an error will be returned.

Supports numbers between 0 - 86399 with up to two decimal places. To provide frame accuracy for video inputs, decimals will be interpreted as frame numbers, not milliseconds.

te=<number>

Ends the video at the specified time in the input video, in seconds, and removes all frames after that point.

If te exceeds the length of the video, then no error will be returned, and the parameter effectively does nothing.

If tm=after-repeat then te specifies the end position for the final clip of the repeated group (as opposed to the end position for the combined sequence of clips).

Supports numbers between 0 - 86399 with up to two decimal places. To provide frame accuracy for video inputs, decimals will be interpreted as frame numbers, not milliseconds.

tm=after-repeat

Applies the trim specified by ts and/or te after the rp parameter is applied.

tm=before-repeat

Applies the trim specified by ts and/or te before the rp parameter is applied.

This is the default value.
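For example (illustrative values), the following URL keeps only the footage between 10 and 30 seconds of the input video:

https://upcdn.io/W142hJk/video/example.mp4?f=mp4-h264&ts=10&te=30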

Video Concatenation API

Use the Video Concatenation API to append additional videos to the primary video's timeline.

rp=<int>

Number of times to play the video.

If this parameter appears after a video parameter, then it will repeat the appended video file only.

If this parameter appears before any video parameters, then it will repeat the primary video file only.

Default: 1

Video Timecode API

Use the Video Timecode API to add a burnt-in timecode (BITC) to every frame in the video.

tc=false

Disables the timecode overlay.

This is the default value.

tc=true

Enables the timecode overlay.

Professional pricing applies (see video pricing).

tcs=lg

Use a 48-point font size for the timecode overlay.

tcs=md

Use a 32-point font size for the timecode overlay.

tcs=sm

Use a 16-point font size for the timecode overlay.

tcs=xs

Use a 10-point font size for the timecode overlay.

tct=<string>

Text prefix to add to the timecode overlay.

tcp=bottom

Positions the timecode overlay to the bottom-center of the frame.

tcp=bottom-left

Positions the timecode overlay to the bottom-left of the frame.

tcp=bottom-right

Positions the timecode overlay to the bottom-right of the frame.

tcp=center

Positions the timecode overlay to the center of the frame.

tcp=left

Positions the timecode overlay to the left of the frame.

tcp=right

Positions the timecode overlay to the right of the frame.

tcp=top

Positions the timecode overlay to the top-center of the frame.

tcp=top-left

Positions the timecode overlay to the top-left of the frame.

tcp=top-right

Positions the timecode overlay to the top-right of the frame.
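For example (illustrative values), the following URL burns a medium-size timecode into the bottom-left of every frame (professional pricing applies, per tc=true above):

https://upcdn.io/W142hJk/video/example.mp4?f=mp4-h264&tc=true&tcs=md&tcp=bottom-left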

append=<string>

Appends the content from another media file (video or audio file) to the output.

You can specify this parameter multiple times to append multiple media files.

If you specify append multiple times, then the media files will be concatenated in the order of the querystring parameters, with the primary input video (specified on the URL's file path) appearing first.

To use: specify the "file path" attribute of another media file as the query parameter's value.
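For example, the following URL appends a second uploaded file to the end of the primary video (this assumes a file at /example2.mp4 exists in the same account; the path is illustrative):

https://upcdn.io/W142hJk/video/example.mp4?f=mp4-h264&append=/example2.mp4

Per the rp parameter above, placing rp=2 before append in the querystring would play the primary video twice before the appended file.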

asr=<number>

Sets the output audio sample rate (kHz).

Supported values for f=mp4-h264, f=hls-h264, f=hls-h264-rt and f=html-h264:

8

12

16

22.05

24

32

44.1

48

88.2

96

Supported values for f=webm-vp8 and f=webm-vp9:

16

24

48

Note: the audio sample rate will be automatically adjusted if the provided value is unsupported by the requested audio bitrate for the requested format. For example, if you use H.264 with an audio bitrate of 96 kbps, then the audio sample rate will be adjusted to be between 32 kHz and 48 kHz.

Default: 48

Video pricing

The Video Processing API is available on all Bytescale Plans.

Video price list

Your processing quota (see pricing) is consumed by the output video's duration multiplied by a "processing multiplier": the codec, resolution, and framerate of your output video determine the "processing multiplier" that will be used.

Videos can be played an unlimited number of times.

Your processing quota will only be deducted once per URL: for the very first request to the URL.

There is a minimum billable duration of 10 seconds per video.

Video billing example:

A 60-second video encoded to H.264 in HD at 30 FPS would consume 205.8 seconds (60 × 3.43) from your monthly processing quota.

If the video is initially played in January 2024, and is then played 100k times for the following 2 years, then you would be billed 205.8 seconds in January 2024 and 0 seconds in all the following months. (This assumes you never clear your permanent cache).

Codec | Resolution | Framerate | Processing Multiplier

H.264 | SD | 30 | 1.50
H.264 | SD | 60 | 2.15
H.264 | SD | 120 | 2.59
H.264 | HD | 30 | 3.43
H.264 | HD | 60 | 4.30
H.264 | HD | 120 | 5.15
H.264 | 4K | 30 | 6.86
H.264 | 4K | 60 | 8.58
H.264 | 4K | 120 | 10.29
H.265 | SD | 30 | 5.49
H.265 | SD | 60 | 6.86
H.265 | SD | 120 | 8.23
H.265 | HD | 30 | 10.98
H.265 | HD | 60 | 13.72
H.265 | HD | 120 | 16.46
H.265 | 4K | 30 | 21.95
H.265 | 4K | 60 | 27.43
H.265 | 4K | 120 | 32.92
VP8 | SD | 30 | 3.09
VP8 | SD | 60 | 5.40
VP8 | SD | 120 | 6.18
VP8 | HD | 30 | 6.18
VP8 | HD | 60 | 10.80
VP8 | HD | 120 | 12.35
VP8 | 4K | 30 | 12.35
VP8 | 4K | 60 | 21.60
VP8 | 4K | 120 | 24.69
VP9 | SD | 30 | 3.43
VP9 | SD | 60 | 6.00
VP9 | SD | 120 | 6.86
VP9 | HD | 30 | 6.86
VP9 | HD | 60 | 12.00
VP9 | HD | 120 | 13.72
VP9 | 4K | 30 | 13.72
VP9 | 4K | 60 | 24.00
VP9 | 4K | 120 | 27.43

Professional features

Bytescale offers several professional video transcoding features that carry an additional charge.

A multiplier of 1.6 is applied to the above price table if any of the following parameters are used:

  • p=mhq
  • bo
  • tc
  • tcs
  • tct
  • tcp

Video resolutions

Video resolution is measured using the output video's smallest dimension:

Resolution | Min Resolution (px) | Max Resolution (px)

SD | 1 | 719
HD | 720 | 1080
4K | 1081 | 2160

Video resolution example:

An ultrawide 1800×710 video would be considered SD as its smallest dimension falls within the range of the SD definition above.
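Putting the rules above together, this is a minimal JavaScript sketch (an assumed helper, not part of the API) that estimates the billable seconds for a single non-HLS output; the per-variant and real-time surcharges for HLS outputs (next section) are not modeled:

function estimateBillableSeconds(durationSeconds, processingMultiplier, usesProfessionalFeatures) {
  const billableDuration = Math.max(durationSeconds, 10);             // 10-second minimum per video
  const professionalMultiplier = usesProfessionalFeatures ? 1.6 : 1;  // professional features multiplier
  return billableDuration * processingMultiplier * professionalMultiplier;
}

// e.g. a 60-second H.264 HD output at 30 FPS:
// estimateBillableSeconds(60, 3.43, false) ≈ 205.8 seconds, matching the billing example above.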

HLS video pricing

When using f=hls-h264, f=hls-h264-rt or f=html-h264 (which uses f=hls-h264-rt internally) your processing quota will be consumed per HLS variant.

When using f=hls-h264-rt each real-time variant (rt=true or rt=auto) will have an additional 10 seconds added to its billable duration.

The default behavior for HLS outputs is to produce one HLS H.264 variant at 30 FPS using the input video's dimensions.

You can change this behavior using the querystring parameters documented on this page.

HLS pricing example:

Given an input video of 60 seconds and the querystring ?f=hls-h264-rt&q=4&q=6&q=8&rt=false, you would be billed:

  • 3×60 seconds for 3× HLS variants (q=4&q=6&q=8).

  • 2×10 seconds for 2× HLS variants using real-time transcoding.

    • The first two variants on the querystring (q=4&q=6) do not specify rt parameters, so will default to rt=auto.

    • Per the pricing above, real-time variants incur an additional 10 seconds of billable duration.

  • 200 seconds total billed duration: 3×60 + 2×10
