@datafire/azure_mediaservices_encoding
Client library for Azure Media Services
Installation and Usage
npm install --save @datafire/azure_mediaservices_encoding
let azure_mediaservices_encoding = require('@datafire/azure_mediaservices_encoding').create({
  access_token: "",
  refresh_token: "",
  client_id: "",
  client_secret: "",
  redirect_uri: ""
});

azure_mediaservices_encoding.Transforms_List({
  "subscriptionId": "",
  "resourceGroupName": "",
  "accountName": "",
  "api-version": ""
}).then(data => {
  console.log(data);
});
Description
This Swagger was generated by the API Framework.
Actions
Transforms_List
Lists the Transforms in the account.
azure_mediaservices_encoding.Transforms_List({
"subscriptionId": "",
"resourceGroupName": "",
"accountName": "",
"api-version": ""
}, context)
Input
- input object
  - subscriptionId required string: The unique identifier for a Microsoft Azure subscription.
  - resourceGroupName required string: The name of the resource group within the Azure subscription.
  - accountName required string: The Media Services account name.
  - api-version required string: The Version of the API to be used with the client request.
  - $filter string: Restricts the set of items returned.
  - $orderby string: Specifies the key by which the result collection should be ordered.
Output
- output TransformCollection
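The optional $filter and $orderby parameters take OData expressions. A minimal sketch, with placeholder resource names, an assumed api-version value, and an illustrative $orderby expression:

// Sketch: list Transforms ordered by name, then read the returned collection.
azure_mediaservices_encoding.Transforms_List({
  "subscriptionId": "00000000-0000-0000-0000-000000000000", // placeholder
  "resourceGroupName": "myResourceGroup",                   // placeholder
  "accountName": "myMediaAccount",                          // placeholder
  "api-version": "2018-07-01",                              // assumed API version
  "$orderby": "name"                                        // illustrative OData expression
}, context).then(collection => {
  // TransformCollection: items in `value`, next page (if any) in '@odata.nextLink'.
  collection.value.forEach(t => console.log(t.name));
});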
Transforms_Delete
Deletes a Transform.
azure_mediaservices_encoding.Transforms_Delete({
"subscriptionId": "",
"resourceGroupName": "",
"accountName": "",
"transformName": "",
"api-version": ""
}, context)
Input
- input object
  - subscriptionId required string: The unique identifier for a Microsoft Azure subscription.
  - resourceGroupName required string: The name of the resource group within the Azure subscription.
  - accountName required string: The Media Services account name.
  - transformName required string: The Transform name.
  - api-version required string: The Version of the API to be used with the client request.
Output
Output schema unknown
Transforms_Get
Gets a Transform.
azure_mediaservices_encoding.Transforms_Get({
"subscriptionId": "",
"resourceGroupName": "",
"accountName": "",
"transformName": "",
"api-version": ""
}, context)
Input
- input object
  - subscriptionId required string: The unique identifier for a Microsoft Azure subscription.
  - resourceGroupName required string: The name of the resource group within the Azure subscription.
  - accountName required string: The Media Services account name.
  - transformName required string: The Transform name.
  - api-version required string: The Version of the API to be used with the client request.
Output
- output Transform
Transforms_Update
Updates a Transform.
azure_mediaservices_encoding.Transforms_Update({
"subscriptionId": "",
"resourceGroupName": "",
"accountName": "",
"transformName": "",
"parameters": {},
"api-version": ""
}, context)
Input
- input object
  - subscriptionId required string: The unique identifier for a Microsoft Azure subscription.
  - resourceGroupName required string: The name of the resource group within the Azure subscription.
  - accountName required string: The Media Services account name.
  - transformName required string: The Transform name.
  - parameters required Transform
  - api-version required string: The Version of the API to be used with the client request.
Output
- output Transform
Transforms_CreateOrUpdate
Creates or updates a new Transform.
azure_mediaservices_encoding.Transforms_CreateOrUpdate({
"subscriptionId": "",
"resourceGroupName": "",
"accountName": "",
"transformName": "",
"parameters": {},
"api-version": ""
}, context)
Input
- input object
  - subscriptionId required string: The unique identifier for a Microsoft Azure subscription.
  - resourceGroupName required string: The name of the resource group within the Azure subscription.
  - accountName required string: The Media Services account name.
  - transformName required string: The Transform name.
  - parameters required Transform
  - api-version required string: The Version of the API to be used with the client request.
Output
- output Transform
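As a sketch of a filled-in call: the parameters body carries a Transform whose properties hold one or more TransformOutputs. The resource names are placeholders, the api-version is assumed, and the @odata.type discriminator follows the service's '#Microsoft.Media.*' convention:

// Sketch: create a Transform with one built-in adaptive streaming output.
azure_mediaservices_encoding.Transforms_CreateOrUpdate({
  "subscriptionId": "00000000-0000-0000-0000-000000000000", // placeholder
  "resourceGroupName": "myResourceGroup",                   // placeholder
  "accountName": "myMediaAccount",                          // placeholder
  "transformName": "myAdaptiveStreamingTransform",          // placeholder
  "api-version": "2018-07-01",                              // assumed API version
  "parameters": {
    "properties": {
      "description": "Adaptive bitrate encode",
      "outputs": [{
        "onError": "StopProcessingJob",   // default: stop the Job if this output fails
        "relativePriority": "Normal",
        "preset": {
          "@odata.type": "#Microsoft.Media.BuiltInStandardEncoderPreset",
          "presetName": "AdaptiveStreaming"
        }
      }]
    }
  }
}, context).then(transform => console.log(transform.name));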
Jobs_List
Lists all of the Jobs for the Transform.
azure_mediaservices_encoding.Jobs_List({
"subscriptionId": "",
"resourceGroupName": "",
"accountName": "",
"transformName": "",
"api-version": ""
}, context)
Input
- input object
  - subscriptionId required string: The unique identifier for a Microsoft Azure subscription.
  - resourceGroupName required string: The name of the resource group within the Azure subscription.
  - accountName required string: The Media Services account name.
  - transformName required string: The Transform name.
  - api-version required string: The Version of the API to be used with the client request.
  - $filter string: Restricts the set of items returned.
  - $orderby string: Specifies the key by which the result collection should be ordered.
Output
- output JobCollection
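A sketch of narrowing the listing by Job state; the property path in the $filter expression is an assumption about the service's queryable fields:

// Sketch: list only finished Jobs under a Transform.
azure_mediaservices_encoding.Jobs_List({
  "subscriptionId": "00000000-0000-0000-0000-000000000000", // placeholder
  "resourceGroupName": "myResourceGroup",                   // placeholder
  "accountName": "myMediaAccount",                          // placeholder
  "transformName": "myTransform",                           // placeholder
  "api-version": "2018-07-01",                              // assumed API version
  "$filter": "properties/state eq 'Finished'"               // assumed queryable path
}, context).then(collection => {
  collection.value.forEach(job => console.log(job.name, job.properties.state));
});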
Jobs_Delete
Deletes a Job.
azure_mediaservices_encoding.Jobs_Delete({
"subscriptionId": "",
"resourceGroupName": "",
"accountName": "",
"transformName": "",
"jobName": "",
"api-version": ""
}, context)
Input
- input object
  - subscriptionId required string: The unique identifier for a Microsoft Azure subscription.
  - resourceGroupName required string: The name of the resource group within the Azure subscription.
  - accountName required string: The Media Services account name.
  - transformName required string: The Transform name.
  - jobName required string: The Job name.
  - api-version required string: The Version of the API to be used with the client request.
Output
Output schema unknown
Jobs_Get
Gets a Job.
azure_mediaservices_encoding.Jobs_Get({
"subscriptionId": "",
"resourceGroupName": "",
"accountName": "",
"transformName": "",
"jobName": "",
"api-version": ""
}, context)
Input
- input object
  - subscriptionId required string: The unique identifier for a Microsoft Azure subscription.
  - resourceGroupName required string: The name of the resource group within the Azure subscription.
  - accountName required string: The Media Services account name.
  - transformName required string: The Transform name.
  - jobName required string: The Job name.
  - api-version required string: The Version of the API to be used with the client request.
Output
- output Job
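Since a Job runs asynchronously, one way to track it is to poll Jobs_Get until the state reaches a terminal value (Finished, Error, or Canceled, per JobProperties below). A minimal polling sketch with placeholder names and an assumed api-version:

// Sketch: poll a Job every 10 seconds until it reaches a terminal state.
const jobParams = {
  "subscriptionId": "00000000-0000-0000-0000-000000000000", // placeholder
  "resourceGroupName": "myResourceGroup",                   // placeholder
  "accountName": "myMediaAccount",                          // placeholder
  "transformName": "myTransform",                           // placeholder
  "jobName": "myJob",                                       // placeholder
  "api-version": "2018-07-01"                               // assumed API version
};

async function waitForJob(context) {
  for (;;) {
    const job = await azure_mediaservices_encoding.Jobs_Get(jobParams, context);
    const state = job.properties.state;
    console.log('Job state:', state);
    if (state === 'Finished' || state === 'Error' || state === 'Canceled') return job;
    await new Promise(resolve => setTimeout(resolve, 10000));
  }
}

As the Job definition notes, subscribing to EventGrid events is the lower-overhead alternative to polling.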
Jobs_Update
Update is only supported for description and priority. Updating Priority will take effect when the Job state is Queued or Scheduled and depending on the timing the priority update may be ignored.
azure_mediaservices_encoding.Jobs_Update({
"subscriptionId": "",
"resourceGroupName": "",
"accountName": "",
"transformName": "",
"jobName": "",
"parameters": {},
"api-version": ""
}, context)
Input
- input object
  - subscriptionId required string: The unique identifier for a Microsoft Azure subscription.
  - resourceGroupName required string: The name of the resource group within the Azure subscription.
  - accountName required string: The Media Services account name.
  - transformName required string: The Transform name.
  - jobName required string: The Job name.
  - parameters required Job
  - api-version required string: The Version of the API to be used with the client request.
Output
- output Job
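As the note above says, only description and priority can change, and a priority change only applies while the Job is still Queued or Scheduled. A sketch with placeholder names and an assumed api-version:

// Sketch: raise the priority of a not-yet-running Job.
azure_mediaservices_encoding.Jobs_Update({
  "subscriptionId": "00000000-0000-0000-0000-000000000000", // placeholder
  "resourceGroupName": "myResourceGroup",                   // placeholder
  "accountName": "myMediaAccount",                          // placeholder
  "transformName": "myTransform",                           // placeholder
  "jobName": "myJob",                                       // placeholder
  "api-version": "2018-07-01",                              // assumed API version
  "parameters": {
    "properties": {
      "description": "Rush encode",
      "priority": "High" // only takes effect while the Job is Queued or Scheduled
    }
  }
}, context).then(job => console.log(job.properties.priority));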
Jobs_Create
Creates a Job.
azure_mediaservices_encoding.Jobs_Create({
"subscriptionId": "",
"resourceGroupName": "",
"accountName": "",
"transformName": "",
"jobName": "",
"parameters": {},
"api-version": ""
}, context)
Input
- input object
  - subscriptionId required string: The unique identifier for a Microsoft Azure subscription.
  - resourceGroupName required string: The name of the resource group within the Azure subscription.
  - accountName required string: The Media Services account name.
  - transformName required string: The Transform name.
  - jobName required string: The Job name.
  - parameters required Job
  - api-version required string: The Version of the API to be used with the client request.
Output
- output Job
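A sketch of a filled-in call pairing a JobInputHttp with a JobOutputAsset (see the definitions below). The URL and names are placeholders, the api-version is assumed, and the discriminators follow the '#Microsoft.Media.*' convention:

// Sketch: submit a Job that reads its input over HTTPS and writes to an Asset.
azure_mediaservices_encoding.Jobs_Create({
  "subscriptionId": "00000000-0000-0000-0000-000000000000", // placeholder
  "resourceGroupName": "myResourceGroup",                   // placeholder
  "accountName": "myMediaAccount",                          // placeholder
  "transformName": "myTransform",                           // must already exist
  "jobName": "myJob",                                       // placeholder
  "api-version": "2018-07-01",                              // assumed API version
  "parameters": {
    "properties": {
      "input": {
        "@odata.type": "#Microsoft.Media.JobInputHttp",
        "baseUri": "https://example.com/media/",            // placeholder source
        "files": ["input.mp4"]
      },
      "outputs": [{
        "@odata.type": "#Microsoft.Media.JobOutputAsset",
        "assetName": "myOutputAsset"                        // placeholder output Asset
      }]
    }
  }
}, context).then(job => console.log(job.properties.state)); // typically 'Queued'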
Jobs_CancelJob
Cancels a Job.
azure_mediaservices_encoding.Jobs_CancelJob({
"subscriptionId": "",
"resourceGroupName": "",
"accountName": "",
"transformName": "",
"jobName": "",
"api-version": ""
}, context)
Input
- input object
  - subscriptionId required string: The unique identifier for a Microsoft Azure subscription.
  - resourceGroupName required string: The name of the resource group within the Azure subscription.
  - accountName required string: The Media Services account name.
  - transformName required string: The Transform name.
  - jobName required string: The Job name.
  - api-version required string: The Version of the API to be used with the client request.
Output
Output schema unknown
Definitions
AacAudio
- AacAudio object: Describes Advanced Audio Codec (AAC) audio encoding settings.
  - profile string (values: AacLc, HeAacV1, HeAacV2): The encoding profile to be used when encoding audio with AAC.
  - bitrate integer: The bitrate, in bits per second, of the output encoded audio.
  - channels integer: The number of channels in the audio.
  - samplingRate integer: The sampling rate to use for encoding in hertz.
  - @odata.type required string: The discriminator for derived types.
  - label string: An optional label for the codec. The label can be used to control muxing behavior.
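For reference, an AacAudio entry as it might appear inside a preset's codec list; the values are illustrative and the discriminator value is assumed per the '#Microsoft.Media.*' convention:

// Sketch: a stereo 128 kbps AAC-LC audio codec entry.
const aacAudio = {
  "@odata.type": "#Microsoft.Media.AacAudio", // assumed discriminator value
  "profile": "AacLc",
  "channels": 2,
  "samplingRate": 48000, // hertz
  "bitrate": 128000,     // bits per second
  "label": "audio"       // optional; referenced when muxing
};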
AbsoluteClipTime
- AbsoluteClipTime object: Specifies the clip time as an absolute time position in the media file. The absolute time can point to a different position depending on whether the media file starts from a timestamp of zero or not.
  - time required string: The time position on the timeline of the input media. It is usually specified as an ISO 8601 period, e.g. PT30S for 30 seconds.
  - @odata.type required string: The discriminator for derived types.
ApiError
- ApiError object: The API error.
  - error ODataError
Audio
- Audio object: Defines the common properties for all audio codecs.
  - bitrate integer: The bitrate, in bits per second, of the output encoded audio.
  - channels integer: The number of channels in the audio.
  - samplingRate integer: The sampling rate to use for encoding in hertz.
  - @odata.type required string: The discriminator for derived types.
  - label string: An optional label for the codec. The label can be used to control muxing behavior.
AudioAnalyzerPreset
- AudioAnalyzerPreset object: The Audio Analyzer preset applies a pre-defined set of AI-based analysis operations, including speech transcription. Currently, the preset supports processing of content with a single audio track.
  - audioLanguage string: The language for the audio payload in the input using the BCP-47 format of 'language tag-region' (e.g. 'en-US'). The supported languages are English ('en-US' and 'en-GB'), Spanish ('es-ES' and 'es-MX'), French ('fr-FR'), Italian ('it-IT'), Japanese ('ja-JP'), Portuguese ('pt-BR'), Chinese ('zh-CN'), German ('de-DE'), Arabic ('ar-EG' and 'ar-SY'), Russian ('ru-RU'), Hindi ('hi-IN'), and Korean ('ko-KR'). If you know the language of your content, it is recommended that you specify it. If the language isn't specified or set to null, automatic language detection will choose the first language detected and process with the selected language for the duration of the file. This language detection feature currently supports English, Chinese, French, German, Italian, Japanese, Spanish, Russian, and Portuguese. It does not currently support dynamically switching between languages after the first language is detected. The automatic detection works best with audio recordings with clearly discernible speech. If automatic detection fails to find the language, transcription falls back to 'en-US'.
  - @odata.type required string: The discriminator for derived types.
AudioOverlay
- AudioOverlay object: Describes the properties of an audio overlay.
  - @odata.type required string: The discriminator for derived types.
  - audioGainLevel number: The gain level of audio in the overlay. The value should be in the range [0, 1.0]. The default is 1.0.
  - end string: The position in the input video at which the overlay ends. The value should be in ISO 8601 duration format. For example, PT30S to end the overlay at 30 seconds into the input video. If not specified, the overlay will be applied until the end of the input video if inputLoop is true. Else, if inputLoop is false, the overlay will last as long as the duration of the overlay media.
  - fadeInDuration string: The duration over which the overlay fades in onto the input video. The value should be in ISO 8601 duration format. If not specified, the default behavior is to have no fade in (same as PT0S).
  - fadeOutDuration string: The duration over which the overlay fades out of the input video. The value should be in ISO 8601 duration format. If not specified, the default behavior is to have no fade out (same as PT0S).
  - inputLabel required string: The label of the job input which is to be used as an overlay. The Input must specify exactly one file. You can specify an image file in JPG or PNG formats, or an audio file (such as a WAV, MP3, WMA or M4A file), or a video file. See https://aka.ms/mesformats for the complete list of supported audio and video file formats.
  - start string: The start position, with reference to the input video, at which the overlay starts. The value should be in ISO 8601 format. For example, PT05S to start the overlay at 5 seconds into the input video. If not specified, the overlay starts from the beginning of the input video.
BuiltInStandardEncoderPreset
- BuiltInStandardEncoderPreset object: Describes a built-in preset for encoding the input video with the Standard Encoder.
  - presetName required string (values: H264SingleBitrateSD, H264SingleBitrate720p, H264SingleBitrate1080p, AdaptiveStreaming, AACGoodQualityAudio, ContentAwareEncodingExperimental, H264MultipleBitrate1080p, H264MultipleBitrate720p, H264MultipleBitrateSD): The built-in preset to be used for encoding videos.
  - @odata.type required string: The discriminator for derived types.
ClipTime
- ClipTime object: Base class for specifying a clip time. Use subclasses of this class to specify the time position in the media.
  - @odata.type required string: The discriminator for derived types.
Codec
- Codec object: Describes the basic properties of all codecs.
  - @odata.type required string: The discriminator for derived types.
  - label string: An optional label for the codec. The label can be used to control muxing behavior.
CopyAudio
- CopyAudio object: A codec flag, which tells the encoder to copy the input audio bitstream.
  - @odata.type required string: The discriminator for derived types.
  - label string: An optional label for the codec. The label can be used to control muxing behavior.
CopyVideo
- CopyVideo object: A codec flag, which tells the encoder to copy the input video bitstream without re-encoding.
  - @odata.type required string: The discriminator for derived types.
  - label string: An optional label for the codec. The label can be used to control muxing behavior.
Deinterlace
- Deinterlace object: Describes the de-interlacing settings.
  - mode string (values: Off, AutoPixelAdaptive): The deinterlacing mode. Defaults to AutoPixelAdaptive.
  - parity string (values: Auto, TopFieldFirst, BottomFieldFirst): The field parity for de-interlacing. Defaults to Auto.
FaceDetectorPreset
- FaceDetectorPreset object: Describes all the settings to be used when analyzing a video in order to detect all the faces present.
  - resolution string (values: SourceResolution, StandardDefinition): Specifies the maximum resolution at which your video is analyzed. The default behavior is "SourceResolution," which will keep the input video at its original resolution when analyzed. Using "StandardDefinition" will resize input videos to standard definition while preserving the appropriate aspect ratio. It will only resize if the video is of higher resolution. For example, a 1920x1080 input would be scaled to 640x360 before processing. Switching to "StandardDefinition" will reduce the time it takes to process high resolution video. It may also reduce the cost of using this component (see https://azure.microsoft.com/en-us/pricing/details/media-services/#analytics for details). However, faces that end up being too small in the resized video may not be detected.
  - @odata.type required string: The discriminator for derived types.
Filters
- Filters object: Describes all the filtering operations, such as de-interlacing and rotation, that are to be applied to the input media before encoding.
  - crop Rectangle
  - deinterlace Deinterlace
  - overlays array: The properties of overlays to be applied to the input video. These could be audio, image or video overlays.
    - items Overlay
  - rotation string (values: Auto, None, Rotate0, Rotate90, Rotate180, Rotate270): The rotation, if any, to be applied to the input video before it is encoded. Default is Auto.
Format
- Format object: Base class for output.
  - @odata.type required string: The discriminator for derived types.
  - filenamePattern required string: The pattern of the file names for the generated output files. The following macros are supported in the file name: {Basename} - the base name of the input video; {Extension} - the appropriate extension for this format; {Label} - the label assigned to the codec/layer; {Index} - a unique index for thumbnails (only applicable to thumbnails); {Bitrate} - the audio/video bitrate (not applicable to thumbnails); {Codec} - the type of the audio/video codec. Any unsubstituted macros will be collapsed and removed from the filename.
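For example, a pattern combining several of the macros above (illustrative):

// Sketch: names each output file after the source, its layer label, and bitrate,
// e.g. "movie_HD_3000000.mp4" for an MP4 format.
const filenamePattern = "{Basename}_{Label}_{Bitrate}{Extension}";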
H264Layer
- H264Layer object: Describes the settings to be used when encoding the input video into a desired output bitrate layer with the H.264 video codec.
  - bufferWindow string: The VBV buffer window length. The value should be in ISO 8601 format and in the range [0.1-100] seconds. The default is 5 seconds (for example, PT5S).
  - entropyMode string (values: Cabac, Cavlc): The entropy mode to be used for this layer. If not specified, the encoder chooses the mode that is appropriate for the profile and level.
  - level string: We currently support Level up to 6.2. The value can be Auto, or a number that matches the H.264 profile. If not specified, the default is Auto, which lets the encoder choose the Level that is appropriate for this layer.
  - profile string (values: Auto, Baseline, Main, High, High422, High444): We currently support Baseline, Main, High, High422, High444. Default is Auto.
  - referenceFrames integer: The number of reference frames to be used when encoding this layer. If not specified, the encoder determines an appropriate number based on the encoder complexity setting.
  - adaptiveBFrame boolean: Whether or not adaptive B-frames are to be used when encoding this layer. If not specified, the encoder will turn it on whenever the video profile permits its use.
  - bFrames integer: The number of B-frames to be used when encoding this layer. If not specified, the encoder chooses an appropriate number based on the video profile and level.
  - bitrate required integer: The average bitrate in bits per second at which to encode the input video when generating this layer. This is a required field.
  - frameRate string: The frame rate (in frames per second) at which to encode this layer. The value can be in the form of M/N where M and N are integers (for example, 30000/1001), or in the form of a number (for example, 30, or 29.97). The encoder enforces constraints on allowed frame rates based on the profile and level. If it is not specified, the encoder will use the same frame rate as the input video.
  - maxBitrate integer: The maximum bitrate (in bits per second) at which the VBV buffer should be assumed to refill. If not specified, defaults to the same value as bitrate.
  - slices integer: The number of slices to be used when encoding this layer. If not specified, default is zero, which means that the encoder will use a single slice for each frame.
  - @odata.type required string: The discriminator for derived types.
  - height string: The height of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example, 50% means the output video has half as many pixels in height as the input.
  - label string: The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
  - width string: The width of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example, 50% means the output video has half as many pixels in width as the input.
H264Video
- H264Video object: Describes all the properties for encoding a video with the H.264 codec.
  - complexity string (values: Speed, Balanced, Quality): Tells the encoder how to choose its encoding settings. The default value is Balanced.
  - layers array: The collection of output H.264 layers to be produced by the encoder.
    - items H264Layer
  - sceneChangeDetection boolean: Whether or not the encoder should insert key frames at scene changes. If not specified, the default is false. This flag should be set to true only when the encoder is being configured to produce a single output video.
  - keyFrameInterval string: The distance between two key frames, thereby defining a group of pictures (GOP). The value should be a non-zero integer in the range [1, 30] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S).
  - stretchMode string (values: None, AutoSize, AutoFit): The resizing mode - how the input video will be resized to fit the desired output resolution(s). Default is AutoSize.
  - @odata.type required string: The discriminator for derived types.
  - label string: An optional label for the codec. The label can be used to control muxing behavior.
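A custom encode combines an H264Video codec carrying H264Layers with an output Format. A sketch, assuming the StandardEncoderPreset's codecs and formats collections from the service schema; values and labels are illustrative, and the discriminators follow the '#Microsoft.Media.*' convention:

// Sketch: a two-layer H.264 encode with AAC audio, muxed into MP4 files.
const customPreset = {
  "@odata.type": "#Microsoft.Media.StandardEncoderPreset",
  "codecs": [
    {
      "@odata.type": "#Microsoft.Media.H264Video",
      "keyFrameInterval": "PT2S", // 2-second GOP
      "layers": [
        { "@odata.type": "#Microsoft.Media.H264Layer", "bitrate": 3000000,
          "width": "1280", "height": "720", "label": "HD" },
        { "@odata.type": "#Microsoft.Media.H264Layer", "bitrate": 1000000,
          "width": "640", "height": "360", "label": "SD" }
      ]
    },
    { "@odata.type": "#Microsoft.Media.AacAudio", "bitrate": 128000 }
  ],
  "formats": [
    { "@odata.type": "#Microsoft.Media.Mp4Format",
      "filenamePattern": "{Basename}_{Label}_{Bitrate}{Extension}" }
  ]
};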
Image
- Image object: Describes the basic properties for generating thumbnails from the input video.
  - range string: The position in the input video at which to stop generating thumbnails. The value can be an absolute timestamp (ISO 8601, e.g. PT5M30S to stop at 5 minutes and 30 seconds), a frame count (for example, 300 to stop at the 300th frame), or a relative value (for example, 100%).
  - start required string: The position in the input video from where to start generating thumbnails. The value can be an absolute timestamp (ISO 8601, e.g. PT05S), a frame count (for example, 10 for the 10th frame), or a relative value (for example, 1%). Also supports a macro {Best}, which tells the encoder to select the best thumbnail from the first few seconds of the video.
  - step string: The intervals at which thumbnails are generated. The value can be an absolute timestamp (ISO 8601, e.g. PT05S for one image every 5 seconds), a frame count (for example, 30 for every 30 frames), or a relative value (for example, 1%).
  - keyFrameInterval string: The distance between two key frames, thereby defining a group of pictures (GOP). The value should be a non-zero integer in the range [1, 30] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S).
  - stretchMode string (values: None, AutoSize, AutoFit): The resizing mode - how the input video will be resized to fit the desired output resolution(s). Default is AutoSize.
  - @odata.type required string: The discriminator for derived types.
  - label string: An optional label for the codec. The label can be used to control muxing behavior.
ImageFormat
- ImageFormat object: Describes the properties for an output image file.
  - @odata.type required string: The discriminator for derived types.
  - filenamePattern required string: The pattern of the file names for the generated output files. The following macros are supported in the file name: {Basename} - the base name of the input video; {Extension} - the appropriate extension for this format; {Label} - the label assigned to the codec/layer; {Index} - a unique index for thumbnails (only applicable to thumbnails); {Bitrate} - the audio/video bitrate (not applicable to thumbnails); {Codec} - the type of the audio/video codec. Any unsubstituted macros will be collapsed and removed from the filename.
Job
- Job object: A Job resource type. The progress and state can be obtained by polling a Job or subscribing to events using EventGrid.
  - properties JobProperties
  - id string: Fully qualified resource ID for the resource.
  - name string: The name of the resource.
  - type string: The type of the resource.
JobCollection
- JobCollection object: A collection of Job items.
  - @odata.nextLink string: A link to the next page of the collection (when the collection contains too many results to return in one response).
  - value array: A collection of Job items.
    - items Job
JobError
- JobError object: Details of JobOutput errors.
  - category string (values: Service, Download, Upload, Configuration, Content): Helps with categorization of errors.
  - code string (values: ServiceError, ServiceTransientError, DownloadNotAccessible, DownloadTransientError, UploadNotAccessible, UploadTransientError, ConfigurationUnsupported, ContentMalformed, ContentUnsupported): Error code describing the error.
  - details array: An array of details about specific errors that led to this reported error.
    - items JobErrorDetail
  - message string: A human-readable language-dependent representation of the error.
  - retry string (values: DoNotRetry, MayRetry): Indicates that it may be possible to retry the Job. If retry is unsuccessful, please contact Azure support via Azure Portal.
JobErrorDetail
- JobErrorDetail object: Details of JobOutput errors.
  - code string: Code describing the error detail.
  - message string: A human-readable representation of the error.
JobInput
- JobInput object: Base class for inputs to a Job.
  - @odata.type required string: The discriminator for derived types.
JobInputAsset
- JobInputAsset object: Represents an Asset for input into a Job.
  - assetName required string: The name of the input Asset.
  - end ClipTime
  - files array: List of files. Required for JobInputHttp. Maximum of 4000 characters each.
    - items string
  - label string: A label that is assigned to a JobInputClip and is used to satisfy a reference used in the Transform. For example, a Transform can be authored so as to take an image file with the label 'xyz' and apply it as an overlay onto the input video before it is encoded. When submitting a Job, exactly one of the JobInputs should be the image file, and it should have the label 'xyz'.
  - start ClipTime
  - @odata.type required string: The discriminator for derived types.
JobInputClip
- JobInputClip object: Represents input files for a Job.
  - end ClipTime
  - files array: List of files. Required for JobInputHttp. Maximum of 4000 characters each.
    - items string
  - label string: A label that is assigned to a JobInputClip and is used to satisfy a reference used in the Transform. For example, a Transform can be authored so as to take an image file with the label 'xyz' and apply it as an overlay onto the input video before it is encoded. When submitting a Job, exactly one of the JobInputs should be the image file, and it should have the label 'xyz'.
  - start ClipTime
  - @odata.type required string: The discriminator for derived types.
JobInputHttp
- JobInputHttp object: Represents HTTPS job input.
  - baseUri string: Base URI for HTTPS job input. It will be concatenated with provided file names. If no base URI is given, then the provided file list is assumed to be fully qualified URIs. Maximum length of 4000 characters.
  - end ClipTime
  - files array: List of files. Required for JobInputHttp. Maximum of 4000 characters each.
    - items string
  - label string: A label that is assigned to a JobInputClip and is used to satisfy a reference used in the Transform. For example, a Transform can be authored so as to take an image file with the label 'xyz' and apply it as an overlay onto the input video before it is encoded. When submitting a Job, exactly one of the JobInputs should be the image file, and it should have the label 'xyz'.
  - start ClipTime
  - @odata.type required string: The discriminator for derived types.
JobInputs
- JobInputs object: Describes a list of inputs to a Job.
  - inputs array: List of inputs to a Job.
    - items JobInput
  - @odata.type required string: The discriminator for derived types.
JobOutput
- JobOutput object: Describes all the properties of a JobOutput.
  - @odata.type required string: The discriminator for derived types.
  - error JobError
  - label string: A label that is assigned to a JobOutput in order to help uniquely identify it. This is useful when your Transform has more than one TransformOutput, whereby your Job has more than one JobOutput. In such cases, when you submit the Job, you will add two or more JobOutputs, in the same order as TransformOutputs in the Transform. Subsequently, when you retrieve the Job, either through events or on a GET request, you can use the label to easily identify the JobOutput. If a label is not provided, a default value of '{presetName}_{outputIndex}' will be used, where the preset name is the name of the preset in the corresponding TransformOutput and the output index is the relative index of this JobOutput within the Job. Note that this index is the same as the relative index of the corresponding TransformOutput within its Transform.
  - progress integer: If the JobOutput is in a Processing state, this contains the Job completion percentage. The value is an estimate and not intended to be used to predict Job completion times. To determine if the JobOutput is complete, use the State property.
  - state string (values: Canceled, Canceling, Error, Finished, Processing, Queued, Scheduled): Describes the state of the JobOutput.
JobOutputAsset
- JobOutputAsset object: Represents an Asset used as a JobOutput.
  - assetName required string: The name of the output Asset.
  - @odata.type required string: The discriminator for derived types.
  - error JobError
  - label string: A label that is assigned to a JobOutput in order to help uniquely identify it. This is useful when your Transform has more than one TransformOutput, whereby your Job has more than one JobOutput. In such cases, when you submit the Job, you will add two or more JobOutputs, in the same order as TransformOutputs in the Transform. Subsequently, when you retrieve the Job, either through events or on a GET request, you can use the label to easily identify the JobOutput. If a label is not provided, a default value of '{presetName}_{outputIndex}' will be used, where the preset name is the name of the preset in the corresponding TransformOutput and the output index is the relative index of this JobOutput within the Job. Note that this index is the same as the relative index of the corresponding TransformOutput within its Transform.
  - progress integer: If the JobOutput is in a Processing state, this contains the Job completion percentage. The value is an estimate and not intended to be used to predict Job completion times. To determine if the JobOutput is complete, use the State property.
  - state string (values: Canceled, Canceling, Error, Finished, Processing, Queued, Scheduled): Describes the state of the JobOutput.
JobProperties
- JobProperties object: Properties of the Job.
  - correlationData object: Customer provided key, value pairs that will be returned in Job and JobOutput state events.
  - created string: The UTC date and time when the Job was created, in 'YYYY-MM-DDThh:mm:ssZ' format.
  - description string: Optional customer supplied description of the Job.
  - input required JobInput
  - lastModified string: The UTC date and time when the Job was last updated, in 'YYYY-MM-DDThh:mm:ssZ' format.
  - outputs required array: The outputs for the Job.
    - items JobOutput
  - priority string (values: Low, Normal, High): Priority with which the job should be processed. Higher priority jobs are processed before lower priority jobs. If not set, the default is normal.
  - state string (values: Canceled, Canceling, Error, Finished, Processing, Queued, Scheduled): The current state of the job.
JpgFormat
- JpgFormat object: Describes the settings for producing JPEG thumbnails.
  - @odata.type required string: The discriminator for derived types.
  - filenamePattern required string: The pattern of the file names for the generated output files. The following macros are supported in the file name: {Basename} - the base name of the input video; {Extension} - the appropriate extension for this format; {Label} - the label assigned to the codec/layer; {Index} - a unique index for thumbnails (only applicable to thumbnails); {Bitrate} - the audio/video bitrate (not applicable to thumbnails); {Codec} - the type of the audio/video codec. Any unsubstituted macros will be collapsed and removed from the filename.
JpgImage
- JpgImage object: Describes the properties for producing a series of JPEG images from the input video.
  - layers array: A collection of output JPEG image layers to be produced by the encoder.
    - items JpgLayer
  - range string: The position in the input video at which to stop generating thumbnails. The value can be an absolute timestamp (ISO 8601, e.g. PT5M30S to stop at 5 minutes and 30 seconds), a frame count (for example, 300 to stop at the 300th frame), or a relative value (for example, 100%).
  - start required string: The position in the input video from where to start generating thumbnails. The value can be an absolute timestamp (ISO 8601, e.g. PT05S), a frame count (for example, 10 for the 10th frame), or a relative value (for example, 1%). Also supports a macro {Best}, which tells the encoder to select the best thumbnail from the first few seconds of the video.
  - step string: The intervals at which thumbnails are generated. The value can be an absolute timestamp (ISO 8601, e.g. PT05S for one image every 5 seconds), a frame count (for example, 30 for every 30 frames), or a relative value (for example, 1%).
  - keyFrameInterval string: The distance between two key frames, thereby defining a group of pictures (GOP). The value should be a non-zero integer in the range [1, 30] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S).
  - stretchMode string (values: None, AutoSize, AutoFit): The resizing mode - how the input video will be resized to fit the desired output resolution(s). Default is AutoSize.
  - @odata.type required string: The discriminator for derived types.
  - label string: An optional label for the codec. The label can be used to control muxing behavior.
JpgLayer
- JpgLayer object: Describes the settings to produce a JPEG image from the input video.
  - quality integer: The compression quality of the JPEG output. Range is from 0-100 and the default is 70.
  - @odata.type required string: The discriminator for derived types.
  - height string: The height of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example, 50% means the output video has half as many pixels in height as the input.
  - label string: The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
  - width string: The width of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example, 50% means the output video has half as many pixels in width as the input.
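JpgImage and JpgFormat are typically paired inside a StandardEncoderPreset to produce thumbnails. A sketch with illustrative values; the codecs and formats collections are assumed from the service schema:

// Sketch: one 640x360 JPEG thumbnail every 30 seconds across the whole video.
const thumbnailPreset = {
  "@odata.type": "#Microsoft.Media.StandardEncoderPreset",
  "codecs": [{
    "@odata.type": "#Microsoft.Media.JpgImage",
    "start": "PT0S",   // begin at the start of the video
    "step": "PT30S",   // one image every 30 seconds
    "range": "100%",   // continue to the end
    "layers": [{ "@odata.type": "#Microsoft.Media.JpgLayer",
                 "quality": 70, "width": "640", "height": "360" }]
  }],
  "formats": [{
    "@odata.type": "#Microsoft.Media.JpgFormat",
    "filenamePattern": "Thumbnail-{Basename}-{Index}{Extension}"
  }]
};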
Layer
- Layer object: The encoder can be configured to produce video and/or images (thumbnails) at different resolutions, by specifying a layer for each desired resolution. A layer represents the properties for the video or image at a resolution.
  - @odata.type required string: The discriminator for derived types.
  - height string: The height of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example, 50% means the output video has half as many pixels in height as the input.
  - label string: The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
  - width string: The width of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example, 50% means the output video has half as many pixels in width as the input.
Mp4Format
- Mp4Format object: Describes the properties for an output ISO MP4 file.
  - outputFiles array: The list of output files to produce. Each entry in the list is a set of audio and video layer labels to be muxed together.
    - items OutputFile
  - @odata.type required string: The discriminator for derived types.
  - filenamePattern required string: The pattern of the file names for the generated output files. The following macros are supported in the file name: {Basename} - the base name of the input video; {Extension} - the appropriate extension for this format; {Label} - the label assigned to the codec/layer; {Index} - a unique index for thumbnails (only applicable to thumbnails); {Bitrate} - the audio/video bitrate (not applicable to thumbnails); {Codec} - the type of the audio/video codec. Any unsubstituted macros will be collapsed and removed from the filename.
MultiBitrateFormat
- MultiBitrateFormat object: Describes the properties for producing a collection of GOP aligned multi-bitrate files. The default behavior is to produce one output file for each video layer which is muxed together with all the audios. The exact output files produced can be controlled by specifying the outputFiles collection.
  - outputFiles array: The list of output files to produce. Each entry in the list is a set of audio and video layer labels to be muxed together.
    - items OutputFile
  - @odata.type required string: The discriminator for derived types.
  - filenamePattern required string: The pattern of the file names for the generated output files. The following macros are supported in the file name: {Basename} - the base name of the input video; {Extension} - the appropriate extension for this format; {Label} - the label assigned to the codec/layer; {Index} - a unique index for thumbnails (only applicable to thumbnails); {Bitrate} - the audio/video bitrate (not applicable to thumbnails); {Codec} - the type of the audio/video codec. Any unsubstituted macros will be collapsed and removed from the filename.
ODataError
- ODataError object: Information about an error.
  - code string: A language-independent error name.
  - details array: The error details.
    - items ODataError
  - message string: The error message.
  - target string: The target of the error (for example, the name of the property in error).
OutputFile
- OutputFile object: Represents an output file produced.
  - labels required array: The list of labels that describe how the encoder should multiplex video and audio into an output file. For example, if the encoder is producing two video layers with labels v1 and v2, and one audio layer with label a1, then an array like '[v1, a1]' tells the encoder to produce an output file with the video track represented by v1 and the audio track represented by a1.
    - items string
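Continuing the example from the description, an Mp4Format could use outputFiles to mux each video layer with the single audio layer (labels illustrative, discriminator assumed per the '#Microsoft.Media.*' convention):

// Sketch: two MP4s, each pairing one video layer with the a1 audio layer.
const mp4Format = {
  "@odata.type": "#Microsoft.Media.Mp4Format",
  "filenamePattern": "{Basename}_{Label}{Extension}",
  "outputFiles": [
    { "labels": ["v1", "a1"] }, // video v1 + audio a1
    { "labels": ["v2", "a1"] }  // video v2 + audio a1
  ]
};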
Overlay
- Overlay object: Base type for all overlays - image, audio or video.
  - @odata.type required string: The discriminator for derived types.
  - audioGainLevel number: The gain level of audio in the overlay. The value should be in the range [0, 1.0]. The default is 1.0.
  - end string: The position in the input video at which the overlay ends. The value should be in ISO 8601 duration format. For example, PT30S to end the overlay at 30 seconds into the input video. If not specified, the overlay will be applied until the end of the input video if inputLoop is true. Else, if inputLoop is false, the overlay will last as long as the duration of the overlay media.
  - fadeInDuration string: The duration over which the overlay fades in onto the input video. The value should be in ISO 8601 duration format. If not specified, the default behavior is to have no fade in (same as PT0S).
  - fadeOutDuration string: The duration over which the overlay fades out of the input video. The value should be in ISO 8601 duration format. If not specified, the default behavior is to have no fade out (same as PT0S).
  - inputLabel required string: The label of the job input which is to be used as an overlay. The Input must specify exactly one file. You can specify an image file in JPG or PNG formats, or an audio file (such as a WAV, MP3, WMA or M4A file), or a video file. See https://aka.ms/mesformats for the complete list of supported audio and video file formats.
  - start string: The start position, with reference to the input video, at which the overlay starts. The value should be in ISO 8601 format. For example, PT05S to start the overlay at 5 seconds into the input video. If not specified, the overlay starts from the beginning of the input video.
PngFormat
- PngFormat object: Describes the settings for producing PNG thumbnails.
  - @odata.type required string: The discriminator for derived types.
  - filenamePattern required string: The pattern of the file names for the generated output files. The following macros are supported in the file name: {Basename} - the base name of the input video; {Extension} - the appropriate extension for this format; {Label} - the label assigned to the codec/layer; {Index} - a unique index for thumbnails (only applicable to thumbnails); {Bitrate} - the audio/video bitrate (not applicable to thumbnails); {Codec} - the type of the audio/video codec. Any unsubstituted macros will be collapsed and removed from the filename.
PngImage
- PngImage object: Describes the properties for producing a series of PNG images from the input video.
  - layers array: A collection of output PNG image layers to be produced by the encoder.
    - items PngLayer
  - range string: The position in the input video at which to stop generating thumbnails. The value can be an absolute timestamp (ISO 8601, e.g. PT5M30S to stop at 5 minutes and 30 seconds), a frame count (for example, 300 to stop at the 300th frame), or a relative value (for example, 100%).
  - start required string: The position in the input video from where to start generating thumbnails. The value can be an absolute timestamp (ISO 8601, e.g. PT05S), a frame count (for example, 10 for the 10th frame), or a relative value (for example, 1%). Also supports a macro {Best}, which tells the encoder to select the best thumbnail from the first few seconds of the video.
  - step string: The intervals at which thumbnails are generated. The value can be an absolute timestamp (ISO 8601, e.g. PT05S for one image every 5 seconds), a frame count (for example, 30 for every 30 frames), or a relative value (for example, 1%).
  - keyFrameInterval string: The distance between two key frames, thereby defining a group of pictures (GOP). The value should be a non-zero integer in the range [1, 30] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S).
  - stretchMode string (values: None, AutoSize, AutoFit): The resizing mode - how the input video will be resized to fit the desired output resolution(s). Default is AutoSize.
  - @odata.type required string: The discriminator for derived types.
  - label string: An optional label for the codec. The label can be used to control muxing behavior.
PngLayer
- PngLayer object: Describes the settings to produce a PNG image from the input video.
  - @odata.type required string: The discriminator for derived types.
  - height string: The height of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example, 50% means the output video has half as many pixels in height as the input.
  - label string: The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
  - width string: The width of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example, 50% means the output video has half as many pixels in width as the input.
Preset
- Preset object: Base type for all Presets, which define the recipe or instructions on how the input media files should be processed.
  - @odata.type required string: The discriminator for derived types.
Rectangle
- Rectangle object: Describes the properties of a rectangular window applied to the input media before processing it.
  - height string: The height of the rectangular region in pixels. This can be an absolute pixel value (e.g. 100) or relative to the size of the video (for example, 50%).
  - left string: The number of pixels from the left margin. This can be an absolute pixel value (e.g. 100) or relative to the size of the video (for example, 50%).
  - top string: The number of pixels from the top margin. This can be an absolute pixel value (e.g. 100) or relative to the size of the video (for example, 50%).
  - width string: The width of the rectangular region in pixels. This can be an absolute pixel value (e.g. 100) or relative to the size of the video (for example, 50%).
StandardEncoderPreset
- StandardEncoderPreset object: Describes all the settings to be used when encoding the input video with the Standard Encoder.
Transform
- Transform object: A Transform encapsulates the rules or instructions for generating desired outputs from input media, such as by transcoding or by extracting insights. After the Transform is created, it can be applied to input media by creating Jobs.
  - properties TransformProperties
  - id string: Fully qualified resource ID for the resource.
  - name string: The name of the resource.
  - type string: The type of the resource.
TransformCollection
- TransformCollection object: A collection of Transform items.
  - @odata.nextLink string: A link to the next page of the collection (when the collection contains too many results to return in one response).
  - value array: A collection of Transform items.
    - items Transform
TransformOutput
- TransformOutput object: Describes the properties of a TransformOutput, which are the rules to be applied while generating the desired output.
  - onError string (values: StopProcessingJob, ContinueJob): A Transform can define more than one output. This property defines what the service should do when one output fails - either continue to produce other outputs, or stop the other outputs. The overall Job state will not reflect failures of outputs that are specified with 'ContinueJob'. The default is 'StopProcessingJob'.
  - preset required Preset
  - relativePriority string (values: Low, Normal, High): Sets the relative priority of the TransformOutputs within a Transform. This sets the priority that the service uses for processing TransformOutputs. The default priority is Normal.
TransformProperties
- TransformProperties object: A Transform.
  - created string: The UTC date and time when the Transform was created, in 'YYYY-MM-DDThh:mm:ssZ' format.
  - description string: An optional verbose description of the Transform.
  - lastModified string: The UTC date and time when the Transform was last updated, in 'YYYY-MM-DDThh:mm:ssZ' format.
  - outputs required array: An array of one or more TransformOutputs that the Transform should generate.
    - items TransformOutput
TransportStreamFormat
- TransportStreamFormat object: Describes the properties for generating an MPEG-2 Transport Stream (ISO/IEC 13818-1) output video file(s).
  - outputFiles array: The list of output files to produce. Each entry in the list is a set of audio and video layer labels to be muxed together.
    - items OutputFile
  - @odata.type required string: The discriminator for derived types.
  - filenamePattern required string: The pattern of the file names for the generated output files. The following macros are supported in the file name: {Basename} - the base name of the input video; {Extension} - the appropriate extension for this format; {Label} - the label assigned to the codec/layer; {Index} - a unique index for thumbnails (only applicable to thumbnails); {Bitrate} - the audio/video bitrate (not applicable to thumbnails); {Codec} - the type of the audio/video codec. Any unsubstituted macros will be collapsed and removed from the filename.
Video
- Video object: Describes the basic properties for encoding the input video.
  - keyFrameInterval string: The distance between two key frames, thereby defining a group of pictures (GOP). The value should be a non-zero integer in the range [1, 30] seconds, specified in ISO 8601 format. The default is 2 seconds (PT2S).
  - stretchMode string (values: None, AutoSize, AutoFit): The resizing mode - how the input video will be resized to fit the desired output resolution(s). Default is AutoSize.
  - @odata.type required string: The discriminator for derived types.
  - label string: An optional label for the codec. The label can be used to control muxing behavior.
VideoAnalyzerPreset
- VideoAnalyzerPreset object: A video analyzer preset that extracts insights (rich metadata) from both audio and video, and outputs a JSON format file.
  - insightsToExtract string (values: AudioInsightsOnly, VideoInsightsOnly, AllInsights): Defines the type of insights that you want the service to generate. The allowed values are 'AudioInsightsOnly', 'VideoInsightsOnly', and 'AllInsights'. The default is AllInsights. If you set this to AllInsights and the input is audio only, then only audio insights are generated. Similarly, if the input is video only, then only video insights are generated. It is recommended that you not use AudioInsightsOnly if you expect some of your inputs to be video only, or VideoInsightsOnly if you expect some of your inputs to be audio only. Your Jobs in such conditions would error out.
  - audioLanguage string: The language for the audio payload in the input using the BCP-47 format of 'language tag-region' (e.g. 'en-US'). The supported languages are English ('en-US' and 'en-GB'), Spanish ('es-ES' and 'es-MX'), French ('fr-FR'), Italian ('it-IT'), Japanese ('ja-JP'), Portuguese ('pt-BR'), Chinese ('zh-CN'), German ('de-DE'), Arabic ('ar-EG' and 'ar-SY'), Russian ('ru-RU'), Hindi ('hi-IN'), and Korean ('ko-KR'). If you know the language of your content, it is recommended that you specify it. If the language isn't specified or set to null, automatic language detection will choose the first language detected and process with the selected language for the duration of the file. This language detection feature currently supports English, Chinese, French, German, Italian, Japanese, Spanish, Russian, and Portuguese. It does not currently support dynamically switching between languages after the first language is detected. The automatic detection works best with audio recordings with clearly discernible speech. If automatic detection fails to find the language, transcription falls back to 'en-US'.
  - @odata.type required string: The discriminator for derived types.
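A sketch of the preset object itself; the discriminator value is assumed per the '#Microsoft.Media.*' convention:

// Sketch: extract all audio and video insights from English (US) content.
const analyzerPreset = {
  "@odata.type": "#Microsoft.Media.VideoAnalyzerPreset",
  "audioLanguage": "en-US",
  "insightsToExtract": "AllInsights"
};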
VideoLayer
- VideoLayer object: Describes the settings to be used when encoding the input video into a desired output bitrate layer.
  - adaptiveBFrame boolean: Whether or not adaptive B-frames are to be used when encoding this layer. If not specified, the encoder will turn it on whenever the video profile permits its use.
  - bFrames integer: The number of B-frames to be used when encoding this layer. If not specified, the encoder chooses an appropriate number based on the video profile and level.
  - bitrate required integer: The average bitrate in bits per second at which to encode the input video when generating this layer. This is a required field.
  - frameRate string: The frame rate (in frames per second) at which to encode this layer. The value can be in the form of M/N where M and N are integers (for example, 30000/1001), or in the form of a number (for example, 30, or 29.97). The encoder enforces constraints on allowed frame rates based on the profile and level. If it is not specified, the encoder will use the same frame rate as the input video.
  - maxBitrate integer: The maximum bitrate (in bits per second) at which the VBV buffer should be assumed to refill. If not specified, defaults to the same value as bitrate.
  - slices integer: The number of slices to be used when encoding this layer. If not specified, default is zero, which means that the encoder will use a single slice for each frame.
  - @odata.type required string: The discriminator for derived types.
  - height string: The height of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example, 50% means the output video has half as many pixels in height as the input.
  - label string: The alphanumeric label for this layer, which can be used in multiplexing different video and audio layers, or in naming the output file.
  - width string: The width of the output video for this layer. The value can be absolute (in pixels) or relative (in percentage). For example, 50% means the output video has half as many pixels in width as the input.
VideoOverlay
- VideoOverlay object: Describes the properties of a video overlay.
  - cropRectangle Rectangle
  - opacity number: The opacity of the overlay. This is a value in the range [0 - 1.0]. Default is 1.0, which means the overlay is opaque.
  - position Rectangle
  - @odata.type required string: The discriminator for derived types.
  - audioGainLevel number: The gain level of audio in the overlay. The value should be in the range [0, 1.0]. The default is 1.0.
  - end string: The position in the input video at which the overlay ends. The value should be in ISO 8601 duration format. For example, PT30S to end the overlay at 30 seconds into the input video. If not specified, the overlay will be applied until the end of the input video if inputLoop is true. Else, if inputLoop is false, the overlay will last as long as the duration of the overlay media.
  - fadeInDuration string: The duration over which the overlay fades in onto the input video. The value should be in ISO 8601 duration format. If not specified, the default behavior is to have no fade in (same as PT0S).
  - fadeOutDuration string: The duration over which the overlay fades out of the input video. The value should be in ISO 8601 duration format. If not specified, the default behavior is to have no fade out (same as PT0S).
  - inputLabel required string: The label of the job input which is to be used as an overlay. The Input must specify exactly one file. You can specify an image file in JPG or PNG formats, or an audio file (such as a WAV, MP3, WMA or M4A file), or a video file. See https://aka.ms/mesformats for the complete list of supported audio and video file formats.
  - start string: The start position, with reference to the input video, at which the overlay starts. The value should be in ISO 8601 format. For example, PT05S to start the overlay at 5 seconds into the input video. If not specified, the overlay starts from the beginning of the input video.