GoogleApi.ContentWarehouse.V1.Model.ImageRepositoryAmarnaCloudSpeechSignals
Description
Next Tag: 10
Attributes

This module has the following attributes (case-insensitive ascending order):
- duplicateOfYtS3Asr (type: boolean(), default: nil) - If this field is set to true, it means that YouTube already processed the ASR from S3 for the langID. Please find the ASR result from transcript_asr in google3/image/repository/proto/video_search.proto instead.
- langWithoutLocale (type: String.t, default: nil) - DEPRECATED: Please switch to langid_input. The language id input for creating this ASR without regional info. Same format as in go/ytlangid. This field is populated in the Kronos Amarna Cloud Speech operator and passed to Amarna, but it is cleared before being stored in Amarna's metadata table.
- langidInput (type: GoogleApi.ContentWarehouse.V1.Model.ImageRepositoryLanguageIdentificationResult, default: nil) - The language identification input used to generate this ASR. This field is populated in the Kronos Amarna Cloud Speech operator and passed to Amarna, but it is cleared before being stored in Amarna's metadata table.
- modelIdentifier (type: String.t, default: nil) -
- results (type: list(GoogleApi.ContentWarehouse.V1.Model.ImageRepositorySpeechRecognitionResult), default: nil) - Raw results from the Cloud Speech API.
- s3RecognizerMetadataResponse (type: GoogleApi.ContentWarehouse.V1.Model.ImageRepositoryS3RecognizerMetadataResponse, default: nil) - The metadata about the S3 recognizer used.
- transcriptAsr (type: GoogleApi.ContentWarehouse.V1.Model.PseudoVideoData, default: nil) - This field contains the full (stitched) transcription, word-level time offsets, and word-level byte offsets. The value of this field is derived from the SpeechRecognitionResult field above.
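The attributes above describe alternative sources for the same transcript. A minimal sketch of how a consumer might branch on them, assuming the struct is already populated; the module TranscriptSource, the function pick/1, and the returned atoms are illustrative, not part of this library:

defmodule TranscriptSource do
  @moduledoc false

  alias GoogleApi.ContentWarehouse.V1.Model.ImageRepositoryAmarnaCloudSpeechSignals

  # Decide which transcript source to read, based on the attributes above.
  def pick(%ImageRepositoryAmarnaCloudSpeechSignals{} = signals) do
    cond do
      # YouTube already processed the ASR from S3 for this langID; read
      # transcript_asr from video_search.proto instead of this message.
      signals.duplicateOfYtS3Asr ->
        :use_transcript_asr_from_video_search_proto

      # Full stitched transcription with word-level offsets is present here.
      signals.transcriptAsr != nil ->
        :use_stitched_transcript_asr

      # Fall back to the raw Cloud Speech API results.
      is_list(signals.results) and signals.results != [] ->
        :use_raw_cloud_speech_results

      true ->
        :no_transcript
    end
  end
end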
Type
@type t() :: %GoogleApi.ContentWarehouse.V1.Model.ImageRepositoryAmarnaCloudSpeechSignals{
  duplicateOfYtS3Asr: boolean() | nil,
  langWithoutLocale: String.t() | nil,
  langidInput: GoogleApi.ContentWarehouse.V1.Model.ImageRepositoryLanguageIdentificationResult.t() | nil,
  modelIdentifier: String.t() | nil,
  results: [GoogleApi.ContentWarehouse.V1.Model.ImageRepositorySpeechRecognitionResult.t()] | nil,
  s3RecognizerMetadataResponse: GoogleApi.ContentWarehouse.V1.Model.ImageRepositoryS3RecognizerMetadataResponse.t() | nil,
  transcriptAsr: GoogleApi.ContentWarehouse.V1.Model.PseudoVideoData.t() | nil
}
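As a quick illustration of the type above, the struct can be built directly; every field defaults to nil, so only the values of interest need to be set. The field values shown here are made up, not real model identifiers:

alias GoogleApi.ContentWarehouse.V1.Model.ImageRepositoryAmarnaCloudSpeechSignals

signals = %ImageRepositoryAmarnaCloudSpeechSignals{
  duplicateOfYtS3Asr: false,
  # Hypothetical identifier; real values are not documented on this page.
  modelIdentifier: "example-cloud-speech-model"
}

# Fields that were not set keep their nil default, matching t() above.
nil = signals.langWithoutLocale
nil = signals.transcriptAsr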
Function
@spec decode(struct(), keyword()) :: struct()
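A hedged usage sketch, assuming this package follows the usual google_api_* client pattern in which a JSON payload is first decoded with Poison into the model struct and decode/2 is then invoked by the decoder to unwrap nested model fields. The JSON string and values are illustrative:

alias GoogleApi.ContentWarehouse.V1.Model.ImageRepositoryAmarnaCloudSpeechSignals

# Illustrative payload with only two of the fields set.
json = ~s({"duplicateOfYtS3Asr": false, "modelIdentifier": "example-cloud-speech-model"})

# Decode into the model struct; unset fields keep their nil defaults.
signals = Poison.decode!(json, as: %ImageRepositoryAmarnaCloudSpeechSignals{})

false = signals.duplicateOfYtS3Asr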