Enum PropertyId
- java.lang.Object
  - java.lang.Enum<PropertyId>
    - com.microsoft.cognitiveservices.speech.PropertyId
- All Implemented Interfaces:
  Serializable, Comparable<PropertyId>
public enum PropertyId extends Enum<PropertyId>
Defines property ids. Changed in version 1.8.0.
-
-
Enum Constant Summary
- AudioConfig_AudioProcessingOptions: Audio processing options in JSON format.
- AudioConfig_DeviceNameForRender: The device name for audio render.
- AudioConfig_PlaybackBufferLengthInMs: Playback buffer length in milliseconds; default is 50 milliseconds.
- CancellationDetails_Reason: The cancellation reason.
- CancellationDetails_ReasonDetailedText: The cancellation detailed text.
- CancellationDetails_ReasonText: The cancellation text.
- Conversation_ApplicationId: Identifier used to connect to the backend service.
- Conversation_Connection_Id: Additional identifying information, such as a Direct Line token, used to authenticate with the backend service.
- Conversation_Conversation_Id: ConversationId for the session.
- Conversation_Custom_Voice_Deployment_Ids: Comma-separated list of custom voice deployment ids.
- Conversation_DialogType: Type of dialog backend to connect to.
- Conversation_From_Id: From id to be used on speech recognition activities. Added in version 1.5.0.
- Conversation_Initial_Silence_Timeout: Silence timeout for listening. Added in version 1.5.0.
- Conversation_Request_Bot_Status_Messages: A boolean value that specifies whether the client should receive status messages and generate corresponding turnStatusReceived events.
- Conversation_Speech_Activity_Template: Speech activity template; stamp properties in the template on the activity generated by the service for speech.
- DataBuffer_TimeStamp: The time stamp associated with the data buffer written by the client when using Pull/Push audio mode streams.
- DataBuffer_UserId: The user id associated with the data buffer written by the client when using Pull/Push audio mode streams.
- LanguageUnderstandingServiceResponse_JsonResult: The Language Understanding Service response output (in JSON format).
- PronunciationAssessment_ContentTopic: The content topic of the pronunciation assessment.
- PronunciationAssessment_EnableMiscue: Defines whether to enable miscue calculation.
- PronunciationAssessment_EnableProsodyAssessment: Whether to enable prosody assessment.
- PronunciationAssessment_GradingSystem: The point system for pronunciation score calibration (FivePoint or HundredMark).
- PronunciationAssessment_Granularity: The pronunciation evaluation granularity (Phoneme, Word, or FullText).
- PronunciationAssessment_Json: The JSON string of pronunciation assessment parameters. Under normal circumstances, you shouldn't have to use this property directly.
- PronunciationAssessment_NBestPhonemeCount: The pronunciation evaluation nbest phoneme count.
- PronunciationAssessment_Params: Pronunciation assessment parameters.
- PronunciationAssessment_PhonemeAlphabet: The pronunciation evaluation phoneme alphabet.
- PronunciationAssessment_ReferenceText: The reference text of the audio for pronunciation evaluation.
- SpeakerRecognition_Api_Version: Version of Speaker Recognition API to use.
- Speech_LogFilename: The file name to write logs.
- Speech_SegmentationSilenceTimeoutMs: A duration of detected silence, measured in milliseconds, after which speech-to-text will determine a spoken phrase has ended and generate a final Recognized result.
- Speech_SessionId: The session id.
- SpeechServiceAuthorization_Token: The Cognitive Services Speech Service authorization token (aka access token).
- SpeechServiceAuthorization_Type: The Cognitive Services Speech Service authorization type.
- SpeechServiceConnection_AutoDetectSourceLanguageResult: The auto detect source language result. Added in version 1.8.0.
- SpeechServiceConnection_AutoDetectSourceLanguages: The auto detect source languages. Added in version 1.8.0.
- SpeechServiceConnection_EnableAudioLogging: A boolean value specifying whether audio logging is enabled in the service or not.
- SpeechServiceConnection_Endpoint: The Cognitive Services Speech Service endpoint (url).
- SpeechServiceConnection_EndpointId: The Cognitive Services Custom Speech or Custom Voice Service endpoint id.
- SpeechServiceConnection_EndSilenceTimeoutMs: The end silence timeout value (in milliseconds) used by the service.
- SpeechServiceConnection_Host: The Cognitive Services Speech Service host (url).
- SpeechServiceConnection_InitialSilenceTimeoutMs: The initial silence timeout value (in milliseconds) used by the service.
- SpeechServiceConnection_IntentRegion: The Language Understanding Service region.
- SpeechServiceConnection_Key: The Cognitive Services Speech Service subscription key.
- SpeechServiceConnection_LanguageIdMode: The speech service connection language identifier mode.
- SpeechServiceConnection_ProxyHostName: The host name of the proxy server used to connect to the Cognitive Services Speech Service.
- SpeechServiceConnection_ProxyPassword: The password of the proxy server used to connect to the Cognitive Services Speech Service.
- SpeechServiceConnection_ProxyPort: The port of the proxy server used to connect to the Cognitive Services Speech Service.
- SpeechServiceConnection_ProxyUserName: The user name of the proxy server used to connect to the Cognitive Services Speech Service.
- SpeechServiceConnection_RecoBackend: The string to specify the backend to be used for speech recognition; allowed options are online and offline.
- SpeechServiceConnection_RecoLanguage: The spoken language to be recognized (in BCP-47 format).
- SpeechServiceConnection_RecoMode: The Cognitive Services Speech Service recognition mode.
- SpeechServiceConnection_RecoModelKey: The decryption key of the model to be used for speech recognition.
- SpeechServiceConnection_RecoModelName: The name of the model to be used for speech recognition.
- SpeechServiceConnection_Region: The Cognitive Services Speech Service region.
- SpeechServiceConnection_SynthBackend: The string to specify TTS backend; valid options are online and offline.
- SpeechServiceConnection_SynthEnableCompressedAudioTransmission: Indicates whether to use a compressed audio format for speech synthesis audio transmission.
- SpeechServiceConnection_SynthLanguage: The spoken language to be synthesized (e.g. en-US).
- SpeechServiceConnection_SynthModelKey: The decryption key of the model to be used for speech synthesis.
- SpeechServiceConnection_SynthOfflineDataPath: The data file path(s) for the offline synthesis engine; only valid when the synthesis backend is offline.
- SpeechServiceConnection_SynthOfflineVoice: The name of the offline TTS voice to be used for speech synthesis.
- SpeechServiceConnection_SynthOutputFormat: The string to specify TTS output audio format (e.g. riff-16khz-16bit-mono-pcm).
- SpeechServiceConnection_SynthVoice: The name of the TTS voice to be used for speech synthesis. Added in version 1.7.0.
- SpeechServiceConnection_TranslationFeatures: Translation features.
- SpeechServiceConnection_TranslationToLanguages: The list of comma-separated languages (BCP-47 format) used as target translation languages.
- SpeechServiceConnection_TranslationVoice: The name of the Cognitive Service Text to Speech Service voice.
- SpeechServiceConnection_Url: The URL string built from speech configuration.
- SpeechServiceConnection_VoicesListEndpoint: The Cognitive Services Speech Service voices list API endpoint (url).
- SpeechServiceResponse_JsonErrorDetails: The Cognitive Services Speech Service error details (in JSON format).
- SpeechServiceResponse_JsonResult: The Cognitive Services Speech Service response output (in JSON format).
- SpeechServiceResponse_OutputFormatOption: A string value specifying the output format option in the response result.
- SpeechServiceResponse_PostProcessingOption: A string value specifying which post-processing option should be used by the service.
- SpeechServiceResponse_ProfanityOption: The requested Cognitive Services Speech Service response output profanity setting.
- SpeechServiceResponse_RecognitionBackend: The recognition backend.
- SpeechServiceResponse_RecognitionLatencyMs: The recognition latency in milliseconds.
- SpeechServiceResponse_RequestDetailedResultTrueFalse: The requested Cognitive Services Speech Service response output format (simple or detailed).
- SpeechServiceResponse_RequestProfanityFilterTrueFalse: The requested Cognitive Services Speech Service response output profanity level.
- SpeechServiceResponse_RequestPunctuationBoundary: A boolean value specifying whether to request punctuation boundary in WordBoundary events.
- SpeechServiceResponse_RequestSentenceBoundary: A boolean value specifying whether to request sentence boundary in WordBoundary events.
- SpeechServiceResponse_RequestSnr: A boolean value specifying whether to include SNR (signal to noise ratio) in the response result.
- SpeechServiceResponse_RequestWordBoundary: A boolean value specifying whether to request WordBoundary events.
- SpeechServiceResponse_RequestWordLevelTimestamps: A boolean value specifying whether to include word-level timestamps in the response result.
- SpeechServiceResponse_StablePartialResultThreshold: The number of times a word has to be in partial results to be returned.
- SpeechServiceResponse_SynthesisBackend: Indicates which backend the synthesis is finished by.
- SpeechServiceResponse_SynthesisConnectionLatencyMs: The speech synthesis connection latency in milliseconds.
- SpeechServiceResponse_SynthesisEventsSyncToAudio: A boolean value specifying whether the SDK should synchronize synthesis metadata events (e.g. word boundary, viseme) to the audio playback.
- SpeechServiceResponse_SynthesisFinishLatencyMs: The speech synthesis all-bytes latency in milliseconds.
- SpeechServiceResponse_SynthesisFirstByteLatencyMs: The speech synthesis first byte latency in milliseconds.
- SpeechServiceResponse_SynthesisNetworkLatencyMs: The speech synthesis network latency in milliseconds.
- SpeechServiceResponse_SynthesisServiceLatencyMs: The speech synthesis service latency in milliseconds.
- SpeechServiceResponse_SynthesisUnderrunTimeMs: The underrun time for speech synthesis in milliseconds.
- SpeechServiceResponse_TranslationRequestStablePartialResult: A boolean value to request stabilizing translation partial results by omitting words in the end.
- SpeechTranslation_ModelKey: The decryption key of a model to be used for speech translation.
- SpeechTranslation_ModelName: The name of a model to be used for speech translation.
-
Method Summary
- int getValue(): Returns the internal value of the property id.
- static PropertyId valueOf(String name): Returns the enum constant of this type with the specified name.
- static PropertyId[] values(): Returns an array containing the constants of this enum type, in the order they are declared.
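Since PropertyId is a standard Java enum, these methods behave as on any enum. A minimal sketch, assuming the Speech SDK (com.microsoft.cognitiveservices.speech) is on the classpath:

```java
import com.microsoft.cognitiveservices.speech.PropertyId;

public class PropertyIdDemo {
    public static void main(String[] args) {
        // Look up a constant by name; throws IllegalArgumentException if the name is unknown.
        PropertyId id = PropertyId.valueOf("SpeechServiceConnection_Key");

        // getValue() exposes the SDK-internal integer id behind the constant.
        System.out.println(id.name() + " -> " + id.getValue());

        // values() lists all constants in declaration order.
        System.out.println("Total property ids: " + PropertyId.values().length);
    }
}
```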
-
-
-
Enum Constant Detail
-
SpeechServiceConnection_Key
public static final PropertyId SpeechServiceConnection_Key
The Cognitive Services Speech Service subscription key. If you are using an intent recognizer, you need to specify the LUIS endpoint key for your particular LUIS app. Under normal circumstances, you shouldn't have to use this property directly. Instead, use SpeechConfig.fromSubscription(java.lang.String, java.lang.String).
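The recommended factory method can be sketched as follows; the subscription key and the region "westus" are placeholders:

```java
import com.microsoft.cognitiveservices.speech.SpeechConfig;

public class SubscriptionSetup {
    public static void main(String[] args) {
        // Prefer the factory method over setting SpeechServiceConnection_Key directly;
        // it populates both the key and region properties for you.
        SpeechConfig config = SpeechConfig.fromSubscription("YourSubscriptionKey", "westus");
        config.close();
    }
}
```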
-
SpeechServiceConnection_Endpoint
public static final PropertyId SpeechServiceConnection_Endpoint
The Cognitive Services Speech Service endpoint (url). Under normal circumstances, you shouldn't have to use this property directly. Instead, use SpeechConfig.fromEndpoint(java.net.URI, java.lang.String). NOTE: This endpoint is not the same as the endpoint used to obtain an access token.
-
SpeechServiceConnection_Region
public static final PropertyId SpeechServiceConnection_Region
The Cognitive Services Speech Service region. Under normal circumstances, you shouldn't have to use this property directly. Instead, use SpeechConfig.fromSubscription(java.lang.String, java.lang.String), SpeechConfig.fromEndpoint(java.net.URI, java.lang.String), SpeechConfig.fromHost(java.net.URI, java.lang.String), or SpeechConfig.fromAuthorizationToken(java.lang.String, java.lang.String).
-
SpeechServiceAuthorization_Token
public static final PropertyId SpeechServiceAuthorization_Token
The Cognitive Services Speech Service authorization token (aka access token). Under normal circumstances, you shouldn't have to use this property directly. Instead, use SpeechConfig.fromAuthorizationToken(java.lang.String, java.lang.String), IntentRecognizer.setAuthorizationToken(String token), SpeechRecognizer.setAuthorizationToken(java.lang.String), or TranslationRecognizer.setAuthorizationToken(java.lang.String).
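A sketch of token-based setup and refresh; the token string and region are placeholders, and a real token must be obtained from the issueToken endpoint before it expires:

```java
import com.microsoft.cognitiveservices.speech.SpeechConfig;
import com.microsoft.cognitiveservices.speech.SpeechRecognizer;

public class TokenAuth {
    public static void main(String[] args) {
        String authToken = "placeholder-access-token"; // obtained from the token service
        SpeechConfig config = SpeechConfig.fromAuthorizationToken(authToken, "westus");
        SpeechRecognizer recognizer = new SpeechRecognizer(config);

        // Access tokens expire; refresh the recognizer's token before expiry
        // instead of writing SpeechServiceAuthorization_Token directly.
        recognizer.setAuthorizationToken(authToken);

        recognizer.close();
        config.close();
    }
}
```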
-
SpeechServiceAuthorization_Type
public static final PropertyId SpeechServiceAuthorization_Type
The Cognitive Services Speech Service authorization type. Currently unused.
-
SpeechServiceConnection_EndpointId
public static final PropertyId SpeechServiceConnection_EndpointId
The Cognitive Services Custom Speech or Custom Voice Service endpoint id. Under normal circumstances, you shouldn't have to use this property directly. Instead, use SpeechConfig.setEndpointId(java.lang.String). NOTE: The endpoint id is available in the Custom Speech Portal, listed under Endpoint Details.
-
SpeechServiceConnection_Host
public static final PropertyId SpeechServiceConnection_Host
The Cognitive Services Speech Service host (url). Under normal circumstances, you shouldn't have to use this property directly. Instead, use SpeechConfig.fromHost(java.net.URI, java.lang.String).
-
SpeechServiceConnection_ProxyHostName
public static final PropertyId SpeechServiceConnection_ProxyHostName
The host name of the proxy server used to connect to the Cognitive Services Speech Service. Under normal circumstances, you shouldn't have to use this property directly. Instead, use SpeechConfig.setProxy(java.lang.String, int, java.lang.String, java.lang.String). NOTE: This property id was added in version 1.1.0.
-
SpeechServiceConnection_ProxyPort
public static final PropertyId SpeechServiceConnection_ProxyPort
The port of the proxy server used to connect to the Cognitive Services Speech Service. Under normal circumstances, you shouldn't have to use this property directly. Instead, use SpeechConfig.setProxy(java.lang.String, int, java.lang.String, java.lang.String). NOTE: This property id was added in version 1.1.0.
-
SpeechServiceConnection_ProxyUserName
public static final PropertyId SpeechServiceConnection_ProxyUserName
The user name of the proxy server used to connect to the Cognitive Services Speech Service. Under normal circumstances, you shouldn't have to use this property directly. Instead, use SpeechConfig.setProxy(java.lang.String, int, java.lang.String, java.lang.String). NOTE: This property id was added in version 1.1.0.
-
SpeechServiceConnection_ProxyPassword
public static final PropertyId SpeechServiceConnection_ProxyPassword
The password of the proxy server used to connect to the Cognitive Services Speech Service. Under normal circumstances, you shouldn't have to use this property directly. Instead, use SpeechConfig.setProxy(java.lang.String, int, java.lang.String, java.lang.String). NOTE: This property id was added in version 1.1.0.
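The four proxy-related properties are normally set together through SpeechConfig.setProxy. A sketch with placeholder proxy coordinates and credentials:

```java
import com.microsoft.cognitiveservices.speech.SpeechConfig;

public class ProxySetup {
    public static void main(String[] args) {
        SpeechConfig config = SpeechConfig.fromSubscription("YourSubscriptionKey", "westus");
        // One call populates ProxyHostName, ProxyPort, ProxyUserName, and ProxyPassword.
        config.setProxy("proxy.example.com", 8080, "proxyUser", "proxyPassword");
        config.close();
    }
}
```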
-
SpeechServiceConnection_Url
public static final PropertyId SpeechServiceConnection_Url
The URL string built from speech configuration. This property is intended to be read-only; the SDK uses it internally. NOTE: Added in version 1.5.0.
-
SpeechServiceConnection_TranslationToLanguages
public static final PropertyId SpeechServiceConnection_TranslationToLanguages
The list of comma-separated languages (BCP-47 format) used as target translation languages. Under normal circumstances, you shouldn't have to use this property directly. Instead, use SpeechTranslationConfig.addTargetLanguage(java.lang.String), SpeechTranslationConfig.getTargetLanguages(), or TranslationRecognizer.getTargetLanguages().
-
SpeechServiceConnection_TranslationVoice
public static final PropertyId SpeechServiceConnection_TranslationVoice
The name of the Cognitive Service Text to Speech Service voice. Under normal circumstances, you shouldn't have to use this property directly. Instead, use SpeechTranslationConfig.setVoiceName(java.lang.String). NOTE: Valid voice names are listed in the Speech Service documentation.
-
SpeechServiceConnection_TranslationFeatures
public static final PropertyId SpeechServiceConnection_TranslationFeatures
Translation features. For internal use.
-
SpeechServiceConnection_IntentRegion
public static final PropertyId SpeechServiceConnection_IntentRegion
The Language Understanding Service region. Under normal circumstances, you shouldn't have to use this property directly. Instead, use LanguageUnderstandingModel.
-
SpeechServiceConnection_RecoMode
public static final PropertyId SpeechServiceConnection_RecoMode
The Cognitive Services Speech Service recognition mode. Can be "INTERACTIVE", "CONVERSATION", or "DICTATION". This property is intended to be read-only; the SDK uses it internally.
-
SpeechServiceConnection_RecoLanguage
public static final PropertyId SpeechServiceConnection_RecoLanguage
The spoken language to be recognized (in BCP-47 format). Under normal circumstances, you shouldn't have to use this property directly. Instead, use SpeechConfig.setSpeechRecognitionLanguage(java.lang.String).
-
Speech_SessionId
public static final PropertyId Speech_SessionId
The session id. This id is a universally unique identifier (aka UUID) representing a specific binding of an audio input stream and the underlying speech recognition instance to which it is bound. Under normal circumstances, you shouldn't have to use this property directly. Instead, use SessionEventArgs.getSessionId().
-
SpeechServiceConnection_RecoBackend
public static final PropertyId SpeechServiceConnection_RecoBackend
The string to specify the backend to be used for speech recognition; allowed options are online and offline. Under normal circumstances, you shouldn't use this property directly. Currently the offline option is only valid when EmbeddedSpeechConfig is used. Added in version 1.19.0
-
SpeechServiceConnection_RecoModelName
public static final PropertyId SpeechServiceConnection_RecoModelName
The name of the model to be used for speech recognition. Under normal circumstances, you shouldn't use this property directly. Currently this is only valid when EmbeddedSpeechConfig is used. Added in version 1.19.0
-
SpeechServiceConnection_RecoModelKey
public static final PropertyId SpeechServiceConnection_RecoModelKey
The decryption key of the model to be used for speech recognition. Under normal circumstances, you shouldn't use this property directly. Currently this is only valid when EmbeddedSpeechConfig is used. Added in version 1.19.0
-
SpeechServiceConnection_SynthLanguage
public static final PropertyId SpeechServiceConnection_SynthLanguage
The spoken language to be synthesized (e.g. en-US). Added in version 1.7.0.
-
SpeechServiceConnection_SynthVoice
public static final PropertyId SpeechServiceConnection_SynthVoice
The name of the TTS voice to be used for speech synthesis. Added in version 1.7.0.
-
SpeechServiceConnection_SynthOutputFormat
public static final PropertyId SpeechServiceConnection_SynthOutputFormat
The string to specify TTS output audio format (e.g. riff-16khz-16bit-mono-pcm). Added in version 1.7.0.
-
SpeechServiceConnection_SynthEnableCompressedAudioTransmission
public static final PropertyId SpeechServiceConnection_SynthEnableCompressedAudioTransmission
Indicates whether to use a compressed audio format for speech synthesis audio transmission. This property only takes effect when SpeechServiceConnection_SynthOutputFormat is set to a PCM format. If this property is not set and GStreamer is available, the SDK will use a compressed format for synthesized audio transmission and decode it. You can set this property to "false" to use raw PCM format for transmission on the wire. Added in version 1.16.0.
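Forcing raw PCM on the wire can be sketched with the generic SpeechConfig.setProperty overload that takes a PropertyId; the key and region are placeholders:

```java
import com.microsoft.cognitiveservices.speech.PropertyId;
import com.microsoft.cognitiveservices.speech.SpeechConfig;

public class RawPcmTransmission {
    public static void main(String[] args) {
        SpeechConfig config = SpeechConfig.fromSubscription("YourSubscriptionKey", "westus");
        // Opt out of compressed transmission even when GStreamer could decode it;
        // boolean-valued properties are passed as strings.
        config.setProperty(
                PropertyId.SpeechServiceConnection_SynthEnableCompressedAudioTransmission,
                "false");
        config.close();
    }
}
```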
-
SpeechServiceConnection_SynthBackend
public static final PropertyId SpeechServiceConnection_SynthBackend
The string to specify TTS backend; valid options are online and offline. Under normal circumstances, you shouldn't have to use this property directly. Instead, use EmbeddedSpeechConfig.fromPath(java.lang.String) or EmbeddedSpeechConfig.fromPaths(java.util.List<java.lang.String>) to set the synthesis backend to offline. Added in version 1.19.0.
-
SpeechServiceConnection_SynthOfflineDataPath
public static final PropertyId SpeechServiceConnection_SynthOfflineDataPath
The data file path(s) for the offline synthesis engine; only valid when the synthesis backend is offline. Under normal circumstances, you shouldn't have to use this property directly. Instead, use EmbeddedSpeechConfig.fromPath(java.lang.String) or EmbeddedSpeechConfig.fromPaths(java.util.List<java.lang.String>). Added in version 1.19.0.
-
SpeechServiceConnection_SynthOfflineVoice
public static final PropertyId SpeechServiceConnection_SynthOfflineVoice
The name of the offline TTS voice to be used for speech synthesis. Under normal circumstances, you shouldn't use this property directly. Instead, use EmbeddedSpeechConfig.setSpeechSynthesisVoice(java.lang.String, java.lang.String). Added in version 1.19.0.
-
SpeechServiceConnection_SynthModelKey
public static final PropertyId SpeechServiceConnection_SynthModelKey
The decryption key of the model to be used for speech synthesis. Under normal circumstances, you shouldn't use this property directly. Instead, use EmbeddedSpeechConfig.setSpeechSynthesisVoice(java.lang.String, java.lang.String). Added in version 1.19.0.
-
SpeechServiceConnection_VoicesListEndpoint
public static final PropertyId SpeechServiceConnection_VoicesListEndpoint
The Cognitive Services Speech Service voices list API endpoint (url). Under normal circumstances, you don't need to specify this property; the SDK constructs it based on the region/host/endpoint of SpeechConfig. Added in version 1.16.0.
-
SpeechServiceConnection_InitialSilenceTimeoutMs
public static final PropertyId SpeechServiceConnection_InitialSilenceTimeoutMs
The initial silence timeout value (in milliseconds) used by the service. Added in version 1.5.0
-
SpeechServiceConnection_EndSilenceTimeoutMs
public static final PropertyId SpeechServiceConnection_EndSilenceTimeoutMs
The end silence timeout value (in milliseconds) used by the service. Added in version 1.5.0
-
SpeechServiceConnection_EnableAudioLogging
public static final PropertyId SpeechServiceConnection_EnableAudioLogging
A boolean value specifying whether audio logging is enabled in the service or not. Audio and content logs are stored either in Microsoft-owned storage, or in your own storage account linked to your Cognitive Services subscription (Bring Your Own Storage (BYOS) enabled Speech resource). Added in version 1.5.0
-
SpeechServiceConnection_LanguageIdMode
public static final PropertyId SpeechServiceConnection_LanguageIdMode
The speech service connection language identifier mode. Can be "AtStart" (the default) or "Continuous". See the Language Identification documentation. Added in version 1.25.0.
-
SpeechServiceResponse_RequestDetailedResultTrueFalse
public static final PropertyId SpeechServiceResponse_RequestDetailedResultTrueFalse
The requested Cognitive Services Speech Service response output format (simple or detailed). Under normal circumstances, you shouldn't have to use this property directly. Instead, use SpeechConfig.setOutputFormat(com.microsoft.cognitiveservices.speech.OutputFormat).
-
SpeechServiceResponse_RequestProfanityFilterTrueFalse
public static final PropertyId SpeechServiceResponse_RequestProfanityFilterTrueFalse
The requested Cognitive Services Speech Service response output profanity level. Currently unused.
-
SpeechServiceResponse_ProfanityOption
public static final PropertyId SpeechServiceResponse_ProfanityOption
The requested Cognitive Services Speech Service response output profanity setting. Allowed values are "masked", "removed", and "raw". Added in version 1.5.0.
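The profanity setting is normally applied through SpeechConfig.setProfanity rather than by writing this property; a sketch with placeholder credentials:

```java
import com.microsoft.cognitiveservices.speech.ProfanityOption;
import com.microsoft.cognitiveservices.speech.SpeechConfig;

public class ProfanitySetup {
    public static void main(String[] args) {
        SpeechConfig config = SpeechConfig.fromSubscription("YourSubscriptionKey", "westus");
        // Equivalent to setting SpeechServiceResponse_ProfanityOption to "removed";
        // ProfanityOption also offers Masked and Raw.
        config.setProfanity(ProfanityOption.Removed);
        config.close();
    }
}
```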
-
SpeechServiceResponse_PostProcessingOption
public static final PropertyId SpeechServiceResponse_PostProcessingOption
A string value specifying which post-processing option should be used by the service. The only allowed value is "TrueText". Added in version 1.5.0.
-
SpeechServiceResponse_RequestWordLevelTimestamps
public static final PropertyId SpeechServiceResponse_RequestWordLevelTimestamps
A boolean value specifying whether to include word-level timestamps in the response result. Added in version 1.5.0
-
SpeechServiceResponse_StablePartialResultThreshold
public static final PropertyId SpeechServiceResponse_StablePartialResultThreshold
The number of times a word has to be in partial results to be returned. Added in version 1.5.0
-
SpeechServiceResponse_OutputFormatOption
public static final PropertyId SpeechServiceResponse_OutputFormatOption
A string value specifying the output format option in the response result. Internal use only. Added in version 1.5.0.
-
SpeechServiceResponse_RequestSnr
public static final PropertyId SpeechServiceResponse_RequestSnr
A boolean value specifying whether to include SNR (signal to noise ratio) in the response result. Added in version 1.18.0
-
SpeechServiceResponse_TranslationRequestStablePartialResult
public static final PropertyId SpeechServiceResponse_TranslationRequestStablePartialResult
A boolean value to request for stabilizing translation partial results by omitting words in the end. Added in version 1.5.0.
-
SpeechServiceResponse_RequestWordBoundary
public static final PropertyId SpeechServiceResponse_RequestWordBoundary
A boolean value specifying whether to request WordBoundary events. Added in version 1.21.0.
-
SpeechServiceResponse_RequestPunctuationBoundary
public static final PropertyId SpeechServiceResponse_RequestPunctuationBoundary
A boolean value specifying whether to request punctuation boundary in WordBoundary Events. Default is true. Added in version 1.21.0.
-
SpeechServiceResponse_RequestSentenceBoundary
public static final PropertyId SpeechServiceResponse_RequestSentenceBoundary
A boolean value specifying whether to request sentence boundary in WordBoundary Events. Default is false. Added in version 1.21.0.
-
SpeechServiceResponse_SynthesisEventsSyncToAudio
public static final PropertyId SpeechServiceResponse_SynthesisEventsSyncToAudio
A boolean value specifying whether the SDK should synchronize synthesis metadata events (e.g. word boundary, viseme) to the audio playback. This only takes effect when the audio is played through the SDK. Default is true. If set to false, the SDK will fire the events as they come from the service, which may be out of sync with the audio playback. Added in version 1.31.0.
-
SpeechServiceResponse_JsonResult
public static final PropertyId SpeechServiceResponse_JsonResult
The Cognitive Services Speech Service response output (in JSON format). This property is available on recognition result objects only.
-
SpeechServiceResponse_JsonErrorDetails
public static final PropertyId SpeechServiceResponse_JsonErrorDetails
The Cognitive Services Speech Service error details (in JSON format). Under normal circumstances, you shouldn't have to use this property directly. Instead, use CancellationDetails.getErrorDetails().
-
SpeechServiceResponse_RecognitionLatencyMs
public static final PropertyId SpeechServiceResponse_RecognitionLatencyMs
The recognition latency in milliseconds. Read-only, available on final speech/translation/intent results. This measures the latency between when an audio input is received by the SDK, and the moment the final result is received from the service. The SDK computes the time difference between the last audio fragment from the audio input that is contributing to the final result, and the time the final result is received from the speech service. Added in version 1.3.0.
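Read-only result properties such as this one are fetched from the result's property collection. A sketch (placeholder credentials; requires a default microphone and network access to actually run):

```java
import com.microsoft.cognitiveservices.speech.PropertyId;
import com.microsoft.cognitiveservices.speech.SpeechConfig;
import com.microsoft.cognitiveservices.speech.SpeechRecognitionResult;
import com.microsoft.cognitiveservices.speech.SpeechRecognizer;

public class ResultLatency {
    public static void main(String[] args) throws Exception {
        SpeechConfig config = SpeechConfig.fromSubscription("YourSubscriptionKey", "westus");
        SpeechRecognizer recognizer = new SpeechRecognizer(config);

        SpeechRecognitionResult result = recognizer.recognizeOnceAsync().get();
        // The latency measurement is attached to the final result as a string.
        String latencyMs = result.getProperties().getProperty(
                PropertyId.SpeechServiceResponse_RecognitionLatencyMs);
        System.out.println("Recognition latency (ms): " + latencyMs);

        recognizer.close();
        config.close();
    }
}
```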
-
SpeechServiceResponse_RecognitionBackend
public static final PropertyId SpeechServiceResponse_RecognitionBackend
The recognition backend. Read-only, available on speech recognition results. This indicates whether cloud (online) or embedded (offline) recognition was used to produce the result.
-
SpeechServiceResponse_SynthesisFirstByteLatencyMs
public static final PropertyId SpeechServiceResponse_SynthesisFirstByteLatencyMs
The speech synthesis first byte latency in milliseconds. Read-only, available on final speech synthesis results. This measures the latency between when the synthesis is started to be processed, and the moment the first byte audio is available. Added in version 1.17.0.
-
SpeechServiceResponse_SynthesisFinishLatencyMs
public static final PropertyId SpeechServiceResponse_SynthesisFinishLatencyMs
The speech synthesis all bytes latency in milliseconds. Read-only, available on final speech synthesis results. This measures the latency between when the synthesis is started to be processed, and the moment the whole audio is synthesized. Added in version 1.17.0.
-
SpeechServiceResponse_SynthesisUnderrunTimeMs
public static final PropertyId SpeechServiceResponse_SynthesisUnderrunTimeMs
The underrun time for speech synthesis in milliseconds. Read-only, available on results in SynthesisCompleted events. This measures the total underrun time from when the buffer defined by AudioConfig_PlaybackBufferLengthInMs is filled to when synthesis completes. Added in version 1.17.0.
-
SpeechServiceResponse_SynthesisConnectionLatencyMs
public static final PropertyId SpeechServiceResponse_SynthesisConnectionLatencyMs
The speech synthesis connection latency in milliseconds. Read-only, available on final speech synthesis results. This measures the latency between when the synthesis is started to be processed, and the moment the HTTP/WebSocket connection is established. Added in version 1.26.0
-
SpeechServiceResponse_SynthesisNetworkLatencyMs
public static final PropertyId SpeechServiceResponse_SynthesisNetworkLatencyMs
The speech synthesis network latency in milliseconds. Read-only, available on final speech synthesis results. This measures the network round trip time. Added in version 1.26.0
-
SpeechServiceResponse_SynthesisServiceLatencyMs
public static final PropertyId SpeechServiceResponse_SynthesisServiceLatencyMs
The speech synthesis service latency in milliseconds. Read-only, available on final speech synthesis results. This measures the service processing time to synthesize the first byte of audio. Added in version 1.26.0
-
SpeechServiceResponse_SynthesisBackend
public static final PropertyId SpeechServiceResponse_SynthesisBackend
Indicates which backend the synthesis is finished by. Read-only, available on speech synthesis results, except for the result in SynthesisStarted event. Added in version 1.17.0.
-
CancellationDetails_Reason
public static final PropertyId CancellationDetails_Reason
The cancellation reason. Currently unused.
-
CancellationDetails_ReasonText
public static final PropertyId CancellationDetails_ReasonText
The cancellation text. Currently unused.
-
CancellationDetails_ReasonDetailedText
public static final PropertyId CancellationDetails_ReasonDetailedText
The cancellation detailed text. Currently unused.
-
LanguageUnderstandingServiceResponse_JsonResult
public static final PropertyId LanguageUnderstandingServiceResponse_JsonResult
The Language Understanding Service response output (in JSON format). Available via IntentRecognitionResult.toString().
-
AudioConfig_DeviceNameForRender
public static final PropertyId AudioConfig_DeviceNameForRender
The device name for audio render. Under normal circumstances, you shouldn't have to use this property directly. Instead, use AudioConfig.fromSpeakerOutput(java.lang.String). Added in version 1.17.0.
-
AudioConfig_PlaybackBufferLengthInMs
public static final PropertyId AudioConfig_PlaybackBufferLengthInMs
Playback buffer length in milliseconds, default is 50 milliseconds.
-
AudioConfig_AudioProcessingOptions
public static final PropertyId AudioConfig_AudioProcessingOptions
Audio processing options in JSON format.
-
Speech_LogFilename
public static final PropertyId Speech_LogFilename
The file name to write logs. Added in version 1.4.0.
-
Speech_SegmentationSilenceTimeoutMs
public static final PropertyId Speech_SegmentationSilenceTimeoutMs
A duration of detected silence, measured in milliseconds, after which speech-to-text will determine a spoken phrase has ended and generate a final Recognized result. Configuring this timeout may be helpful in situations where spoken input is significantly faster or slower than usual and default segmentation behavior consistently yields results that are too long or too short. Segmentation timeout values that are inappropriately high or low can negatively affect speech-to-text accuracy; this property should be carefully configured and the resulting behavior should be thoroughly validated as intended. For more information about timeout configuration that includes discussion of default behaviors, please visit https://aka.ms/csspeech/timeouts.
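Tuning the segmentation timeout can be sketched with the generic setProperty overload; the 2000 ms value and the credentials are illustrative placeholders, and as noted above the resulting behavior should be validated:

```java
import com.microsoft.cognitiveservices.speech.PropertyId;
import com.microsoft.cognitiveservices.speech.SpeechConfig;

public class SegmentationTimeout {
    public static void main(String[] args) {
        SpeechConfig config = SpeechConfig.fromSubscription("YourSubscriptionKey", "westus");
        // Require 2 seconds of silence (instead of the service default) before a
        // phrase is finalized; the value is a string expressing milliseconds.
        config.setProperty(PropertyId.Speech_SegmentationSilenceTimeoutMs, "2000");
        config.close();
    }
}
```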
-
Conversation_ApplicationId
public static final PropertyId Conversation_ApplicationId
Identifier used to connect to the backend service. Added in version 1.5.0.
-
Conversation_DialogType
public static final PropertyId Conversation_DialogType
Type of dialog backend to connect to. Added in version 1.7.0.
-
Conversation_Initial_Silence_Timeout
public static final PropertyId Conversation_Initial_Silence_Timeout
Silence timeout for listening. Added in version 1.5.0.
-
Conversation_From_Id
public static final PropertyId Conversation_From_Id
The from id to be used on speech recognition activities. Added in version 1.5.0.
-
Conversation_Conversation_Id
public static final PropertyId Conversation_Conversation_Id
ConversationId for the session. Added in version 1.8.0.
-
Conversation_Custom_Voice_Deployment_Ids
public static final PropertyId Conversation_Custom_Voice_Deployment_Ids
Comma separated list of custom voice deployment ids. Added in version 1.8.0.
-
Conversation_Speech_Activity_Template
public static final PropertyId Conversation_Speech_Activity_Template
Speech activity template. Properties in the template are stamped onto the activity generated by the service for speech. Added in version 1.10.0.
-
Conversation_Request_Bot_Status_Messages
public static final PropertyId Conversation_Request_Bot_Status_Messages
A boolean value that specifies whether the client should receive status messages and generate corresponding turnStatusReceived events. Defaults to true. Added in version 1.15.0.
-
Conversation_Connection_Id
public static final PropertyId Conversation_Connection_Id
Additional identifying information, such as a Direct Line token, used to authenticate with the backend service. Added in version 1.16.0.
-
SpeechServiceConnection_AutoDetectSourceLanguages
public static final PropertyId SpeechServiceConnection_AutoDetectSourceLanguages
The auto-detect source languages. Added in version 1.8.0.
-
SpeechServiceConnection_AutoDetectSourceLanguageResult
public static final PropertyId SpeechServiceConnection_AutoDetectSourceLanguageResult
The auto-detect source language result. Added in version 1.8.0.
-
DataBuffer_UserId
public static final PropertyId DataBuffer_UserId
The user id associated with the data buffer written by the client when using Pull/Push audio mode streams. Added in version 1.5.0.
-
DataBuffer_TimeStamp
public static final PropertyId DataBuffer_TimeStamp
The time stamp associated with the data buffer written by the client when using Pull/Push audio mode streams. The time stamp is a 64-bit value with a resolution of 90 kHz, the same as the presentation timestamp in an MPEG transport stream. See https://en.wikipedia.org/wiki/Presentation_timestamp. Added in version 1.5.0.
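Since a 90 kHz resolution means 90 ticks per millisecond, converting a client-side capture time to this property's value is simple arithmetic. The helper names below are illustrative, not part of the SDK:

```java
public class PtsTicks {
    // 90 kHz resolution = 90 ticks per millisecond (as in an MPEG PTS).
    static long millisToTicks(long ms) {
        return ms * 90L;
    }

    // Inverse conversion, truncating any sub-millisecond remainder.
    static long ticksToMillis(long ticks) {
        return ticks / 90L;
    }

    public static void main(String[] args) {
        // One second of audio corresponds to 90,000 ticks.
        System.out.println(millisToTicks(1000L));
    }
}
```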
-
PronunciationAssessment_ReferenceText
public static final PropertyId PronunciationAssessment_ReferenceText
The reference text of the audio for pronunciation evaluation. For this and the following pronunciation assessment parameters, see https://docs.microsoft.com/azure/cognitive-services/speech-service/rest-speech-to-text#pronunciation-assessment-parameters for details. Under normal circumstances, you shouldn't have to use this property directly. Added in version 1.14.0
-
PronunciationAssessment_GradingSystem
public static final PropertyId PronunciationAssessment_GradingSystem
The point system for pronunciation score calibration (FivePoint or HundredMark). Under normal circumstances, you shouldn't have to use this property directly. Added in version 1.14.0
-
PronunciationAssessment_Granularity
public static final PropertyId PronunciationAssessment_Granularity
The pronunciation evaluation granularity (Phoneme, Word, or FullText). Under normal circumstances, you shouldn't have to use this property directly. Added in version 1.14.0
-
PronunciationAssessment_EnableMiscue
public static final PropertyId PronunciationAssessment_EnableMiscue
Defines whether to enable miscue calculation. When enabled, the pronounced words are compared to the reference text and marked with omission/insertion based on the comparison. The default setting is False. Under normal circumstances, you shouldn't have to use this property directly. Added in version 1.14.0.
-
PronunciationAssessment_PhonemeAlphabet
public static final PropertyId PronunciationAssessment_PhonemeAlphabet
The pronunciation evaluation phoneme alphabet. The valid values are "SAPI" (default) and "IPA". Under normal circumstances, you shouldn't have to use this property directly. Instead, use PronunciationAssessmentConfig.setPhonemeAlphabet(java.lang.String). Added in version 1.20.0.
-
PronunciationAssessment_NBestPhonemeCount
public static final PropertyId PronunciationAssessment_NBestPhonemeCount
The pronunciation evaluation nbest phoneme count. Under normal circumstances, you shouldn't have to use this property directly. Instead, use PronunciationAssessmentConfig.setNBestPhonemeCount(int). Added in version 1.20.0.
-
PronunciationAssessment_EnableProsodyAssessment
public static final PropertyId PronunciationAssessment_EnableProsodyAssessment
Whether to enable prosody assessment. Under normal circumstances, you shouldn't have to use this property directly. Instead, use PronunciationAssessmentConfig.enableProsodyAssessment(). Added in version 1.33.0.
-
PronunciationAssessment_Json
public static final PropertyId PronunciationAssessment_Json
The JSON string of pronunciation assessment parameters. Under normal circumstances, you shouldn't have to use this property directly. Added in version 1.14.0.
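To illustrate what this property carries, the sketch below assembles a parameter JSON by hand. The field names mirror the REST pronunciation-assessment parameters referenced earlier (ReferenceText, GradingSystem, Granularity, EnableMiscue), but the exact shape is an assumption; in real code, prefer PronunciationAssessmentConfig, which populates this property for you:

```java
public class AssessmentParams {
    // Hand-built sketch of the assessment parameter JSON. Field names are
    // assumed to match the REST pronunciation-assessment parameters; the
    // SDK normally builds this string internally.
    static String buildJson(String referenceText, String gradingSystem,
                            String granularity, boolean enableMiscue) {
        return String.format(
            "{\"ReferenceText\":\"%s\",\"GradingSystem\":\"%s\","
                + "\"Granularity\":\"%s\",\"EnableMiscue\":%b}",
            referenceText, gradingSystem, granularity, enableMiscue);
    }

    public static void main(String[] args) {
        System.out.println(buildJson("good morning", "HundredMark", "Phoneme", false));
    }
}
```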
-
PronunciationAssessment_Params
public static final PropertyId PronunciationAssessment_Params
Pronunciation assessment parameters. This property is intended to be read-only; the SDK uses it internally. Added in version 1.14.0.
-
PronunciationAssessment_ContentTopic
public static final PropertyId PronunciationAssessment_ContentTopic
The content type of the pronunciation assessment. Under normal circumstances, you shouldn't have to use this property directly. Instead, use PronunciationAssessmentConfig.enableContentAssessmentWithTopic(java.lang.String). Added in version 1.33.0.
-
SpeakerRecognition_Api_Version
public static final PropertyId SpeakerRecognition_Api_Version
Version of Speaker Recognition to use. Added in version 1.18.0
-
SpeechTranslation_ModelName
public static final PropertyId SpeechTranslation_ModelName
The name of a model to be used for speech translation. Do not use this property directly. Currently this is only valid when EmbeddedSpeechConfig is used.
-
SpeechTranslation_ModelKey
public static final PropertyId SpeechTranslation_ModelKey
The decryption key of a model to be used for speech translation. Do not use this property directly. Currently this is only valid when EmbeddedSpeechConfig is used.
-
-
Method Detail
-
values
public static PropertyId[] values()
Returns an array containing the constants of this enum type, in the order they are declared. This method may be used to iterate over the constants as follows:

    for (PropertyId c : PropertyId.values())
        System.out.println(c);
- Returns:
- an array containing the constants of this enum type, in the order they are declared
-
valueOf
public static PropertyId valueOf(String name)
Returns the enum constant of this type with the specified name. The string must match exactly an identifier used to declare an enum constant in this type. (Extraneous whitespace characters are not permitted.)
- Parameters:
name - the name of the enum constant to be returned.
- Returns:
- the enum constant with the specified name
- Throws:
IllegalArgumentException - if this enum type has no constant with the specified name
NullPointerException - if the argument is null
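Because the lookup is an exact match and extraneous whitespace is not permitted, user-supplied names should be normalized before calling valueOf. The SDK call is shown in a comment since it requires the Speech SDK on the classpath, and the helper below is illustrative:

```java
public class ValueOfDemo {
    // Trim surrounding whitespace before the exact-match lookup; valueOf
    // itself does no normalization and throws IllegalArgumentException
    // on any mismatch.
    static String normalize(String raw) {
        return raw.trim();
    }

    public static void main(String[] args) {
        String name = normalize("  Speech_LogFilename ");
        System.out.println(name);
        // With the Speech SDK available:
        //   PropertyId id = PropertyId.valueOf(name);
        //   // id == PropertyId.Speech_LogFilename
    }
}
```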
-
getValue
public int getValue()
Returns the internal value of the property id.
- Returns:
- the speech property id
-
-