Enum PropertyId

    • Enum Constant Detail

      • SpeechServiceConnection_Key

        public static final PropertyId SpeechServiceConnection_Key
        The Cognitive Services Speech Service subscription key. If you are using an intent recognizer, you need to specify the LUIS endpoint key for your particular LUIS app. Under normal circumstances, you shouldn't have to use this property directly. Instead, use SpeechConfig.fromSubscription(java.lang.String, java.lang.String).
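A minimal sketch of the recommended path (the key and region strings are placeholders, assuming the Speech SDK is on the classpath):

```java
import com.microsoft.cognitiveservices.speech.SpeechConfig;

public class SubscriptionExample {
    public static void main(String[] args) {
        // Placeholders: substitute your own subscription key and service region.
        SpeechConfig config = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");
        config.close();
    }
}
```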
      • SpeechServiceConnection_Endpoint

        public static final PropertyId SpeechServiceConnection_Endpoint
        The Cognitive Services Speech Service endpoint (URL). Under normal circumstances, you shouldn't have to use this property directly. Instead, use SpeechConfig.fromEndpoint(java.net.URI, java.lang.String).
        See Also:
        "NOTE: This endpoint is not the same as the endpoint used to obtain an access token."
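A minimal sketch of configuring from an endpoint instead of a region (the endpoint URI and key here are illustrative placeholders, not a real deployment):

```java
import java.net.URI;
import com.microsoft.cognitiveservices.speech.SpeechConfig;

public class EndpointExample {
    public static void main(String[] args) {
        // Both the endpoint URI and the subscription key are placeholders.
        SpeechConfig config = SpeechConfig.fromEndpoint(
                URI.create("wss://YourServiceRegion.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1"),
                "YourSubscriptionKey");
        config.close();
    }
}
```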
      • SpeechServiceAuthorization_Type

        public static final PropertyId SpeechServiceAuthorization_Type
        The Cognitive Services Speech Service authorization type. Currently unused.
      • SpeechServiceConnection_EndpointId

        public static final PropertyId SpeechServiceConnection_EndpointId
        The Cognitive Services Custom Speech or Custom Voice Service endpoint id. Under normal circumstances, you shouldn't have to use this property directly. Instead, use SpeechConfig.setEndpointId(java.lang.String).

         NOTE: The endpoint id is available in the Custom Speech Portal, listed under Endpoint Details.
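A minimal sketch of attaching a custom endpoint id (key, region, and endpoint id are placeholders; the real id comes from the Custom Speech Portal):

```java
import com.microsoft.cognitiveservices.speech.SpeechConfig;

public class EndpointIdExample {
    public static void main(String[] args) {
        SpeechConfig config = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");
        // Endpoint id from the Custom Speech Portal (Endpoint Details); placeholder here.
        config.setEndpointId("YourEndpointId");
        config.close();
    }
}
```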
      • SpeechServiceConnection_Url

        public static final PropertyId SpeechServiceConnection_Url
        The URL string built from the speech configuration. This property is intended to be read-only. The SDK uses it internally. NOTE: Added in version 1.5.0.
      • SpeechServiceConnection_TranslationVoice

        public static final PropertyId SpeechServiceConnection_TranslationVoice
        The name of the Cognitive Services Text-to-Speech Service voice. Under normal circumstances, you shouldn't have to use this property directly. Instead, use SpeechTranslationConfig.setVoiceName(java.lang.String). NOTE: Valid voice names can be found in the Text-to-Speech voice list documentation.
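A minimal sketch of requesting synthesized translation output (key and region are placeholders; the voice name is an illustrative example, pick one from the published voice list):

```java
import com.microsoft.cognitiveservices.speech.translation.SpeechTranslationConfig;

public class TranslationVoiceExample {
    public static void main(String[] args) {
        SpeechTranslationConfig config =
                SpeechTranslationConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");
        config.setSpeechRecognitionLanguage("en-US");
        config.addTargetLanguage("de");
        // Illustrative voice name; consult the voice list for valid values.
        config.setVoiceName("de-DE-KatjaNeural");
        config.close();
    }
}
```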
      • SpeechServiceConnection_TranslationFeatures

        public static final PropertyId SpeechServiceConnection_TranslationFeatures
        Translation features. For internal use.
      • SpeechServiceConnection_IntentRegion

        public static final PropertyId SpeechServiceConnection_IntentRegion
        The Language Understanding Service region. Under normal circumstances, you shouldn't have to use this property directly. Instead, use LanguageUnderstandingModel.
      • SpeechServiceConnection_RecoMode

        public static final PropertyId SpeechServiceConnection_RecoMode
        The Cognitive Services Speech Service recognition mode. Can be "INTERACTIVE", "CONVERSATION", or "DICTATION". This property is intended to be read-only. The SDK uses it internally.
      • Speech_SessionId

        public static final PropertyId Speech_SessionId
        The session id. This id is a universally unique identifier (UUID) representing a specific binding of an audio input stream and the underlying speech recognition instance to which it is bound. Under normal circumstances, you shouldn't have to use this property directly. Instead, use SessionEventArgs.getSessionId().
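A minimal sketch of reading the session id from the event arguments rather than the raw property (key and region are placeholders):

```java
import com.microsoft.cognitiveservices.speech.SpeechConfig;
import com.microsoft.cognitiveservices.speech.SpeechRecognizer;

public class SessionIdExample {
    public static void main(String[] args) {
        SpeechConfig config = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");
        SpeechRecognizer recognizer = new SpeechRecognizer(config);
        // The session id is delivered on session events; log it for diagnostics.
        recognizer.sessionStarted.addEventListener((s, e) ->
                System.out.println("Session started: " + e.getSessionId()));
        recognizer.close();
        config.close();
    }
}
```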
      • SpeechServiceConnection_RecoBackend

        public static final PropertyId SpeechServiceConnection_RecoBackend
        The string that specifies the backend to be used for speech recognition; allowed options are "online" and "offline". Under normal circumstances, you shouldn't use this property directly. Currently the "offline" option is only valid when EmbeddedSpeechConfig is used. Added in version 1.19.0.
      • SpeechServiceConnection_RecoModelName

        public static final PropertyId SpeechServiceConnection_RecoModelName
        The name of the model to be used for speech recognition. Under normal circumstances, you shouldn't use this property directly. Currently this is only valid when EmbeddedSpeechConfig is used. Added in version 1.19.0
      • SpeechServiceConnection_RecoModelKey

        public static final PropertyId SpeechServiceConnection_RecoModelKey
        The decryption key of the model to be used for speech recognition. Under normal circumstances, you shouldn't use this property directly. Currently this is only valid when EmbeddedSpeechConfig is used. Added in version 1.19.0
      • SpeechServiceConnection_SynthLanguage

        public static final PropertyId SpeechServiceConnection_SynthLanguage
        The spoken language to be synthesized (e.g., en-US). Added in version 1.7.0.
      • SpeechServiceConnection_SynthVoice

        public static final PropertyId SpeechServiceConnection_SynthVoice
        The name of the TTS voice to be used for speech synthesis. Added in version 1.7.0.
      • SpeechServiceConnection_SynthOutputFormat

        public static final PropertyId SpeechServiceConnection_SynthOutputFormat
        The string that specifies the TTS output audio format (e.g., riff-16khz-16bit-mono-pcm). Added in version 1.7.0.
      • SpeechServiceConnection_SynthEnableCompressedAudioTransmission

        public static final PropertyId SpeechServiceConnection_SynthEnableCompressedAudioTransmission
        A boolean value indicating whether to use a compressed audio format for speech synthesis audio transmission. This property takes effect only when SpeechServiceConnection_SynthOutputFormat is set to a PCM format. If this property is not set and GStreamer is available, the SDK uses a compressed format for synthesized audio transmission and decodes it. You can set this property to "false" to use raw PCM format for transmission on the wire. Added in version 1.16.0.
      • SpeechServiceConnection_VoicesListEndpoint

        public static final PropertyId SpeechServiceConnection_VoicesListEndpoint
        The Cognitive Services Speech Service voices list API endpoint (URL). Under normal circumstances, you don't need to specify this property; the SDK constructs it based on the region/host/endpoint of the SpeechConfig. Added in version 1.16.0.
      • SpeechServiceConnection_InitialSilenceTimeoutMs

        public static final PropertyId SpeechServiceConnection_InitialSilenceTimeoutMs
        The initial silence timeout value (in milliseconds) used by the service. Added in version 1.5.0
      • SpeechServiceConnection_EndSilenceTimeoutMs

        public static final PropertyId SpeechServiceConnection_EndSilenceTimeoutMs
        The end silence timeout value (in milliseconds) used by the service. Added in version 1.5.0
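A minimal sketch of tuning both silence timeouts via SpeechConfig.setProperty (key and region are placeholders; the millisecond values are illustrative, not recommendations):

```java
import com.microsoft.cognitiveservices.speech.PropertyId;
import com.microsoft.cognitiveservices.speech.SpeechConfig;

public class SilenceTimeoutExample {
    public static void main(String[] args) {
        SpeechConfig config = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");
        // Timeout values are passed as strings, in milliseconds; these are illustrative.
        config.setProperty(PropertyId.SpeechServiceConnection_InitialSilenceTimeoutMs, "5000");
        config.setProperty(PropertyId.SpeechServiceConnection_EndSilenceTimeoutMs, "1000");
        config.close();
    }
}
```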
      • SpeechServiceConnection_EnableAudioLogging

        public static final PropertyId SpeechServiceConnection_EnableAudioLogging
        A boolean value specifying whether audio logging is enabled in the service or not. Audio and content logs are stored either in Microsoft-owned storage, or in your own storage account linked to your Cognitive Services subscription (Bring Your Own Storage (BYOS) enabled Speech resource). Added in version 1.5.0
      • SpeechServiceConnection_LanguageIdMode

        public static final PropertyId SpeechServiceConnection_LanguageIdMode
        The speech service connection language identifier mode. Can be "AtStart" (the default) or "Continuous". See the Language Identification documentation. Added in version 1.25.0.
      • SpeechServiceResponse_RequestProfanityFilterTrueFalse

        public static final PropertyId SpeechServiceResponse_RequestProfanityFilterTrueFalse
        The requested Cognitive Services Speech Service response output profanity level. Currently unused.
      • SpeechServiceResponse_ProfanityOption

        public static final PropertyId SpeechServiceResponse_ProfanityOption
        The requested Cognitive Services Speech Service response output profanity setting. Allowed values are "masked", "removed", and "raw". Added in version 1.5.0.
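A minimal sketch of selecting a profanity option through the property (key and region are placeholders):

```java
import com.microsoft.cognitiveservices.speech.PropertyId;
import com.microsoft.cognitiveservices.speech.SpeechConfig;

public class ProfanityExample {
    public static void main(String[] args) {
        SpeechConfig config = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");
        // One of "masked", "removed", or "raw".
        config.setProperty(PropertyId.SpeechServiceResponse_ProfanityOption, "masked");
        config.close();
    }
}
```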
      • SpeechServiceResponse_PostProcessingOption

        public static final PropertyId SpeechServiceResponse_PostProcessingOption
        A string value specifying which post-processing option should be used by the service. The only allowed value is "TrueText". Added in version 1.5.0.
      • SpeechServiceResponse_RequestWordLevelTimestamps

        public static final PropertyId SpeechServiceResponse_RequestWordLevelTimestamps
        A boolean value specifying whether to include word-level timestamps in the response result. Added in version 1.5.0
      • SpeechServiceResponse_StablePartialResultThreshold

        public static final PropertyId SpeechServiceResponse_StablePartialResultThreshold
        The number of times a word has to be in partial results to be returned. Added in version 1.5.0
      • SpeechServiceResponse_OutputFormatOption

        public static final PropertyId SpeechServiceResponse_OutputFormatOption
        A string value specifying the output format option in the response result. Internal use only. Added in version 1.5.0.
      • SpeechServiceResponse_RequestSnr

        public static final PropertyId SpeechServiceResponse_RequestSnr
        A boolean value specifying whether to include SNR (signal to noise ratio) in the response result. Added in version 1.18.0
      • SpeechServiceResponse_TranslationRequestStablePartialResult

        public static final PropertyId SpeechServiceResponse_TranslationRequestStablePartialResult
        A boolean value requesting stabilized translation partial results by omitting words at the end. Added in version 1.5.0.
      • SpeechServiceResponse_RequestWordBoundary

        public static final PropertyId SpeechServiceResponse_RequestWordBoundary
        A boolean value specifying whether to request WordBoundary events. Added in version 1.21.0.
      • SpeechServiceResponse_RequestPunctuationBoundary

        public static final PropertyId SpeechServiceResponse_RequestPunctuationBoundary
        A boolean value specifying whether to request punctuation boundary in WordBoundary Events. Default is true. Added in version 1.21.0.
      • SpeechServiceResponse_RequestSentenceBoundary

        public static final PropertyId SpeechServiceResponse_RequestSentenceBoundary
        A boolean value specifying whether to request sentence boundary in WordBoundary Events. Default is false. Added in version 1.21.0.
      • SpeechServiceResponse_SynthesisEventsSyncToAudio

        public static final PropertyId SpeechServiceResponse_SynthesisEventsSyncToAudio
        A boolean value specifying whether the SDK should synchronize synthesis metadata events (e.g., word boundary, viseme) with the audio playback. This takes effect only when the audio is played through the SDK. Default is true. If set to false, the SDK fires the events as they arrive from the service, which may be out of sync with the audio playback. Added in version 1.31.0.
      • SpeechServiceResponse_JsonResult

        public static final PropertyId SpeechServiceResponse_JsonResult
        The Cognitive Services Speech Service response output (in JSON format). This property is available on recognition result objects only.
      • SpeechServiceResponse_JsonErrorDetails

        public static final PropertyId SpeechServiceResponse_JsonErrorDetails
        The Cognitive Services Speech Service error details (in JSON format). Under normal circumstances, you shouldn't have to use this property directly. Instead, use CancellationDetails.getErrorDetails().
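A minimal sketch of the recommended path: obtaining error details from a canceled result via CancellationDetails instead of reading the raw JSON property:

```java
import com.microsoft.cognitiveservices.speech.CancellationDetails;
import com.microsoft.cognitiveservices.speech.SpeechRecognitionResult;

public class CancellationExample {
    // Sketch: inspect a result whose reason is ResultReason.Canceled.
    static void printError(SpeechRecognitionResult result) {
        CancellationDetails details = CancellationDetails.fromResult(result);
        System.out.println("Reason: " + details.getReason());
        System.out.println("Details: " + details.getErrorDetails());
    }
}
```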
      • SpeechServiceResponse_RecognitionLatencyMs

        public static final PropertyId SpeechServiceResponse_RecognitionLatencyMs
        The recognition latency in milliseconds. Read-only, available on final speech/translation/intent results. This measures the latency between when an audio input is received by the SDK, and the moment the final result is received from the service. The SDK computes the time difference between the last audio fragment from the audio input that is contributing to the final result, and the time the final result is received from the speech service. Added in version 1.3.0.
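Read-only diagnostic properties like this one can be retrieved from a final result's property collection; a sketch:

```java
import com.microsoft.cognitiveservices.speech.PropertyId;
import com.microsoft.cognitiveservices.speech.SpeechRecognitionResult;

public class LatencyExample {
    // Sketch: read the measured latency from a final recognition result.
    static void printLatency(SpeechRecognitionResult result) {
        String latencyMs = result.getProperties()
                .getProperty(PropertyId.SpeechServiceResponse_RecognitionLatencyMs);
        System.out.println("Recognition latency (ms): " + latencyMs);
    }
}
```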
      • SpeechServiceResponse_RecognitionBackend

        public static final PropertyId SpeechServiceResponse_RecognitionBackend
        The recognition backend. Read-only, available on speech recognition results. This indicates whether cloud (online) or embedded (offline) recognition was used to produce the result.
      • SpeechServiceResponse_SynthesisFirstByteLatencyMs

        public static final PropertyId SpeechServiceResponse_SynthesisFirstByteLatencyMs
        The speech synthesis first-byte latency in milliseconds. Read-only, available on final speech synthesis results. This measures the latency between when synthesis processing starts and when the first byte of audio is available. Added in version 1.17.0.
      • SpeechServiceResponse_SynthesisFinishLatencyMs

        public static final PropertyId SpeechServiceResponse_SynthesisFinishLatencyMs
        The speech synthesis all-bytes latency in milliseconds. Read-only, available on final speech synthesis results. This measures the latency between when synthesis processing starts and when the whole audio is synthesized. Added in version 1.17.0.
      • SpeechServiceResponse_SynthesisUnderrunTimeMs

        public static final PropertyId SpeechServiceResponse_SynthesisUnderrunTimeMs
        The underrun time for speech synthesis in milliseconds. Read-only, available on results in SynthesisCompleted events. This measures the total underrun time from when the playback buffer (of length AudioConfig_PlaybackBufferLengthInMs) is filled until synthesis completes. Added in version 1.17.0.
      • SpeechServiceResponse_SynthesisConnectionLatencyMs

        public static final PropertyId SpeechServiceResponse_SynthesisConnectionLatencyMs
        The speech synthesis connection latency in milliseconds. Read-only, available on final speech synthesis results. This measures the latency between when synthesis processing starts and when the HTTP/WebSocket connection is established. Added in version 1.26.0.
      • SpeechServiceResponse_SynthesisNetworkLatencyMs

        public static final PropertyId SpeechServiceResponse_SynthesisNetworkLatencyMs
        The speech synthesis network latency in milliseconds. Read-only, available on final speech synthesis results. This measures the network round-trip time. Added in version 1.26.0.
      • SpeechServiceResponse_SynthesisServiceLatencyMs

        public static final PropertyId SpeechServiceResponse_SynthesisServiceLatencyMs
        The speech synthesis service latency in milliseconds. Read-only, available on final speech synthesis results. This measures the service processing time to synthesize the first byte of audio. Added in version 1.26.0
      • SpeechServiceResponse_SynthesisBackend

        public static final PropertyId SpeechServiceResponse_SynthesisBackend
        Indicates which backend completed the synthesis. Read-only, available on speech synthesis results, except for the result in the SynthesisStarted event. Added in version 1.17.0.
      • CancellationDetails_Reason

        public static final PropertyId CancellationDetails_Reason
        The cancellation reason. Currently unused.
      • CancellationDetails_ReasonText

        public static final PropertyId CancellationDetails_ReasonText
        The cancellation text. Currently unused.
      • CancellationDetails_ReasonDetailedText

        public static final PropertyId CancellationDetails_ReasonDetailedText
        The cancellation detailed text. Currently unused.
      • LanguageUnderstandingServiceResponse_JsonResult

        public static final PropertyId LanguageUnderstandingServiceResponse_JsonResult
        The Language Understanding Service response output (in JSON format). Available via IntentRecognitionResult.toString().
      • AudioConfig_DeviceNameForRender

        public static final PropertyId AudioConfig_DeviceNameForRender
        The device name for audio render. Under normal circumstances, you shouldn't have to use this property directly. Instead, use AudioConfig.fromSpeakerOutput(java.lang.String). Added in version 1.17.0
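A minimal sketch of the recommended path for selecting a render device (the device name is a platform-specific placeholder):

```java
import com.microsoft.cognitiveservices.speech.audio.AudioConfig;

public class SpeakerOutputExample {
    public static void main(String[] args) {
        // Placeholder: device names are platform-specific (e.g., an OS audio device id).
        AudioConfig audioConfig = AudioConfig.fromSpeakerOutput("YourDeviceName");
        audioConfig.close();
    }
}
```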
      • AudioConfig_PlaybackBufferLengthInMs

        public static final PropertyId AudioConfig_PlaybackBufferLengthInMs
        Playback buffer length in milliseconds, default is 50 milliseconds.
      • AudioConfig_AudioProcessingOptions

        public static final PropertyId AudioConfig_AudioProcessingOptions
        Audio processing options in JSON format.
      • Speech_LogFilename

        public static final PropertyId Speech_LogFilename
        The file name to write logs. Added in version 1.4.0.
      • Speech_SegmentationSilenceTimeoutMs

        public static final PropertyId Speech_SegmentationSilenceTimeoutMs
        A duration of detected silence, measured in milliseconds, after which speech-to-text will determine a spoken phrase has ended and generate a final Recognized result. Configuring this timeout may be helpful in situations where spoken input is significantly faster or slower than usual and default segmentation behavior consistently yields results that are too long or too short. Segmentation timeout values that are inappropriately high or low can negatively affect speech-to-text accuracy; this property should be carefully configured and the resulting behavior should be thoroughly validated as intended. For more information about timeout configuration that includes discussion of default behaviors, please visit https://aka.ms/csspeech/timeouts.
      • Conversation_ApplicationId

        public static final PropertyId Conversation_ApplicationId
        Identifier used to connect to the backend service. Added in version 1.5.0.
      • Conversation_DialogType

        public static final PropertyId Conversation_DialogType
        Type of dialog backend to connect to. Added in version 1.7.0.
      • Conversation_Initial_Silence_Timeout

        public static final PropertyId Conversation_Initial_Silence_Timeout
        Silence timeout for listening. Added in version 1.5.0.
      • Conversation_From_Id

        public static final PropertyId Conversation_From_Id
        The from id to be used on speech recognition activities. Added in version 1.5.0.
      • Conversation_Conversation_Id

        public static final PropertyId Conversation_Conversation_Id
        ConversationId for the session. Added in version 1.8.0.
      • Conversation_Custom_Voice_Deployment_Ids

        public static final PropertyId Conversation_Custom_Voice_Deployment_Ids
        Comma-separated list of custom voice deployment ids. Added in version 1.8.0.
      • Conversation_Speech_Activity_Template

        public static final PropertyId Conversation_Speech_Activity_Template
        Speech activity template; properties from the template are stamped onto the activity generated by the service for speech. Added in version 1.10.0.
      • Conversation_Request_Bot_Status_Messages

        public static final PropertyId Conversation_Request_Bot_Status_Messages
        A boolean value that specifies whether the client should receive status messages and generate corresponding turnStatusReceived events. Defaults to true. Added in version 1.15.0.
      • Conversation_Connection_Id

        public static final PropertyId Conversation_Connection_Id
        Additional identifying information, such as a Direct Line token, used to authenticate with the backend service. Added in version 1.16.0.
      • SpeechServiceConnection_AutoDetectSourceLanguages

        public static final PropertyId SpeechServiceConnection_AutoDetectSourceLanguages
        The auto-detect source languages. Added in version 1.8.0.
      • SpeechServiceConnection_AutoDetectSourceLanguageResult

        public static final PropertyId SpeechServiceConnection_AutoDetectSourceLanguageResult
        The auto-detect source language result. Added in version 1.8.0.
      • DataBuffer_UserId

        public static final PropertyId DataBuffer_UserId
        The user id associated with the data buffer written by the client when using Pull/Push audio mode streams. Added in version 1.5.0.
      • DataBuffer_TimeStamp

        public static final PropertyId DataBuffer_TimeStamp
        The timestamp associated with the data buffer written by the client when using Pull/Push audio mode streams. The timestamp is a 64-bit value with a resolution of 90 kHz, the same as the presentation timestamp in an MPEG transport stream. See https://en.wikipedia.org/wiki/Presentation_timestamp. Added in version 1.5.0.
      • PronunciationAssessment_ReferenceText

        public static final PropertyId PronunciationAssessment_ReferenceText
        The reference text of the audio for pronunciation evaluation. For this and the following pronunciation assessment parameters, see https://docs.microsoft.com/azure/cognitive-services/speech-service/rest-speech-to-text#pronunciation-assessment-parameters for details. Under normal circumstances, you shouldn't have to use this property directly. Added in version 1.14.0
      • PronunciationAssessment_GradingSystem

        public static final PropertyId PronunciationAssessment_GradingSystem
        The point system for pronunciation score calibration (FivePoint or HundredMark). Under normal circumstances, you shouldn't have to use this property directly. Added in version 1.14.0
      • PronunciationAssessment_Granularity

        public static final PropertyId PronunciationAssessment_Granularity
        The pronunciation evaluation granularity (Phoneme, Word, or FullText). Under normal circumstances, you shouldn't have to use this property directly. Added in version 1.14.0
      • PronunciationAssessment_EnableMiscue

        public static final PropertyId PronunciationAssessment_EnableMiscue
        Defines whether to enable miscue calculation. When enabled, the pronounced words are compared to the reference text and marked with omission/insertion based on the comparison. The default setting is false. Under normal circumstances, you shouldn't have to use this property directly. Added in version 1.14.0.
      • PronunciationAssessment_PhonemeAlphabet

        public static final PropertyId PronunciationAssessment_PhonemeAlphabet
        The pronunciation evaluation phoneme alphabet. The valid values are "SAPI" (default) and "IPA". Under normal circumstances, you shouldn't have to use this property directly. Instead, use PronunciationAssessmentConfig.setPhonemeAlphabet(java.lang.String). Added in version 1.20.0.
      • PronunciationAssessment_NBestPhonemeCount

        public static final PropertyId PronunciationAssessment_NBestPhonemeCount
        The pronunciation evaluation n-best phoneme count. Under normal circumstances, you shouldn't have to use this property directly. Instead, use PronunciationAssessmentConfig.setNBestPhonemeCount(int). Added in version 1.20.0.
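A minimal sketch of configuring pronunciation assessment through PronunciationAssessmentConfig rather than the raw properties (the reference text is an illustrative example):

```java
import com.microsoft.cognitiveservices.speech.PronunciationAssessmentConfig;
import com.microsoft.cognitiveservices.speech.PronunciationAssessmentGradingSystem;
import com.microsoft.cognitiveservices.speech.PronunciationAssessmentGranularity;

public class PronunciationExample {
    public static void main(String[] args) {
        PronunciationAssessmentConfig config = new PronunciationAssessmentConfig(
                "good morning",                                   // reference text (illustrative)
                PronunciationAssessmentGradingSystem.HundredMark, // point system
                PronunciationAssessmentGranularity.Phoneme,       // evaluation granularity
                false);                                           // miscue calculation off
        config.setPhonemeAlphabet("IPA");
        config.setNBestPhonemeCount(5);
    }
}
```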
      • PronunciationAssessment_EnableProsodyAssessment

        public static final PropertyId PronunciationAssessment_EnableProsodyAssessment
        Whether to enable prosody assessment. Under normal circumstances, you shouldn't have to use this property directly. Instead, use PronunciationAssessmentConfig.enableProsodyAssessment(). Added in version 1.33.0.
      • PronunciationAssessment_Json

        public static final PropertyId PronunciationAssessment_Json
        The JSON string of pronunciation assessment parameters. Under normal circumstances, you shouldn't have to use this property directly. Added in version 1.14.0.
      • PronunciationAssessment_Params

        public static final PropertyId PronunciationAssessment_Params
        Pronunciation assessment parameters. This property is intended to be read-only. The SDK uses it internally. Added in version 1.14.0.
      • SpeakerRecognition_Api_Version

        public static final PropertyId SpeakerRecognition_Api_Version
        Version of Speaker Recognition to use. Added in version 1.18.0
      • SpeechTranslation_ModelName

        public static final PropertyId SpeechTranslation_ModelName
        The name of a model to be used for speech translation. Do not use this property directly. Currently this is only valid when EmbeddedSpeechConfig is used.
      • SpeechTranslation_ModelKey

        public static final PropertyId SpeechTranslation_ModelKey
        The decryption key of a model to be used for speech translation. Do not use this property directly. Currently this is only valid when EmbeddedSpeechConfig is used.
    • Method Detail

      • values

        public static PropertyId[] values()
        Returns an array containing the constants of this enum type, in the order they are declared. This method may be used to iterate over the constants as follows:
        for (PropertyId c : PropertyId.values())
            System.out.println(c);
        
        Returns:
        an array containing the constants of this enum type, in the order they are declared
      • valueOf

        public static PropertyId valueOf​(String name)
        Returns the enum constant of this type with the specified name. The string must match exactly an identifier used to declare an enum constant in this type. (Extraneous whitespace characters are not permitted.)
        Parameters:
        name - the name of the enum constant to be returned.
        Returns:
        the enum constant with the specified name
        Throws:
        IllegalArgumentException - if this enum type has no constant with the specified name
        NullPointerException - if the argument is null
      • getValue

        public int getValue()
        Returns the internal property id value.
        Returns:
        the speech property id
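The standard enum methods combine as in this sketch (assuming the Speech SDK is on the classpath):

```java
import com.microsoft.cognitiveservices.speech.PropertyId;

public class PropertyIdExample {
    public static void main(String[] args) {
        // Standard enum lookup by name, plus the SDK-internal numeric value.
        PropertyId id = PropertyId.valueOf("Speech_SessionId");
        System.out.println(id + " -> " + id.getValue());
    }
}
```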