konnect 2.4.1 published on Thursday, Mar 13, 2025 by kong

konnect.getGatewayPluginAiResponseTransformer


    Using getGatewayPluginAiResponseTransformer

    Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.

    function getGatewayPluginAiResponseTransformer(args: GetGatewayPluginAiResponseTransformerArgs, opts?: InvokeOptions): Promise<GetGatewayPluginAiResponseTransformerResult>
    function getGatewayPluginAiResponseTransformerOutput(args: GetGatewayPluginAiResponseTransformerOutputArgs, opts?: InvokeOptions): Output<GetGatewayPluginAiResponseTransformerResult>
    def get_gateway_plugin_ai_response_transformer(control_plane_id: Optional[str] = None,
                                                   opts: Optional[InvokeOptions] = None) -> GetGatewayPluginAiResponseTransformerResult
    def get_gateway_plugin_ai_response_transformer_output(control_plane_id: Optional[pulumi.Input[str]] = None,
                                                          opts: Optional[InvokeOptions] = None) -> Output[GetGatewayPluginAiResponseTransformerResult]
    func LookupGatewayPluginAiResponseTransformer(ctx *Context, args *LookupGatewayPluginAiResponseTransformerArgs, opts ...InvokeOption) (*LookupGatewayPluginAiResponseTransformerResult, error)
    func LookupGatewayPluginAiResponseTransformerOutput(ctx *Context, args *LookupGatewayPluginAiResponseTransformerOutputArgs, opts ...InvokeOption) LookupGatewayPluginAiResponseTransformerResultOutput

    > Note: This function is named LookupGatewayPluginAiResponseTransformer in the Go SDK.

    public static class GetGatewayPluginAiResponseTransformer 
    {
        public static Task<GetGatewayPluginAiResponseTransformerResult> InvokeAsync(GetGatewayPluginAiResponseTransformerArgs args, InvokeOptions? opts = null)
        public static Output<GetGatewayPluginAiResponseTransformerResult> Invoke(GetGatewayPluginAiResponseTransformerInvokeArgs args, InvokeOptions? opts = null)
    }
    public static CompletableFuture<GetGatewayPluginAiResponseTransformerResult> getGatewayPluginAiResponseTransformer(GetGatewayPluginAiResponseTransformerArgs args, InvokeOptions options)
    public static Output<GetGatewayPluginAiResponseTransformerResult> getGatewayPluginAiResponseTransformer(GetGatewayPluginAiResponseTransformerArgs args, InvokeOptions options)
    
    fn::invoke:
      function: konnect:index/getGatewayPluginAiResponseTransformer:getGatewayPluginAiResponseTransformer
      arguments:
        # arguments dictionary
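
As a sketch of the direct invocation form, a minimal Pulumi program might look like the following. This assumes the `pulumi_konnect` Python SDK is installed and a Konnect provider is configured; the control plane ID is a placeholder, and the exported `config` property is inferred from the supporting types below rather than confirmed by this page:

```python
import pulumi
import pulumi_konnect as konnect

# Look up the ai-response-transformer plugin on a control plane.
# The control plane ID below is a placeholder, not a real ID.
plugin = konnect.get_gateway_plugin_ai_response_transformer(
    control_plane_id="9524ec7d-36d9-465d-a8c5-83a3c9390458")

# Export a field from the result for inspection (assumes a `config`
# output property matching the supporting types documented below).
pulumi.export("aiResponseTransformerConfig", plugin.config)
```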

    The following arguments are supported:

    ControlPlaneId string

    getGatewayPluginAiResponseTransformer Result

    The following output properties are available:

    Supporting Types

    GetGatewayPluginAiResponseTransformerConfig

    HttpProxyHost string
    A string representing a host name, such as example.com.
    HttpProxyPort double
    An integer representing a port number between 0 and 65535, inclusive.
    HttpTimeout double
    Timeout in milliseconds for the AI upstream service.
    HttpsProxyHost string
    A string representing a host name, such as example.com.
    HttpsProxyPort double
    An integer representing a port number between 0 and 65535, inclusive.
    HttpsVerify bool
    Verify the TLS certificate of the AI upstream service.
    Llm GetGatewayPluginAiResponseTransformerConfigLlm
    MaxRequestBodySize double
    Maximum allowed body size to be introspected.
    ParseLlmResponseJsonInstructions bool
    Set to true to read a specific response format from the LLM and set the status code, body, and headers proxied back to the client accordingly. You need to engineer your LLM prompt to return the correct format; see the plugin docs 'Overview' page for usage instructions.
    Prompt string
    Use this prompt to tune the LLM system/assistant message for the proxy response returned from the upstream, and the response format you are expecting.
    TransformationExtractPattern string
    Defines the regular expression that must match to indicate a successful AI transformation at the response phase. The first match will be set as the returning body. If the AI service's response doesn't match this pattern, a failure is returned to the client.
    HttpProxyHost string
    A string representing a host name, such as example.com.
    HttpProxyPort float64
    An integer representing a port number between 0 and 65535, inclusive.
    HttpTimeout float64
    Timeout in milliseconds for the AI upstream service.
    HttpsProxyHost string
    A string representing a host name, such as example.com.
    HttpsProxyPort float64
    An integer representing a port number between 0 and 65535, inclusive.
    HttpsVerify bool
    Verify the TLS certificate of the AI upstream service.
    Llm GetGatewayPluginAiResponseTransformerConfigLlm
    MaxRequestBodySize float64
    Maximum allowed body size to be introspected.
    ParseLlmResponseJsonInstructions bool
    Set to true to read a specific response format from the LLM and set the status code, body, and headers proxied back to the client accordingly. You need to engineer your LLM prompt to return the correct format; see the plugin docs 'Overview' page for usage instructions.
    Prompt string
    Use this prompt to tune the LLM system/assistant message for the proxy response returned from the upstream, and the response format you are expecting.
    TransformationExtractPattern string
    Defines the regular expression that must match to indicate a successful AI transformation at the response phase. The first match will be set as the returning body. If the AI service's response doesn't match this pattern, a failure is returned to the client.
    httpProxyHost String
    A string representing a host name, such as example.com.
    httpProxyPort Double
    An integer representing a port number between 0 and 65535, inclusive.
    httpTimeout Double
    Timeout in milliseconds for the AI upstream service.
    httpsProxyHost String
    A string representing a host name, such as example.com.
    httpsProxyPort Double
    An integer representing a port number between 0 and 65535, inclusive.
    httpsVerify Boolean
    Verify the TLS certificate of the AI upstream service.
    llm GetGatewayPluginAiResponseTransformerConfigLlm
    maxRequestBodySize Double
    Maximum allowed body size to be introspected.
    parseLlmResponseJsonInstructions Boolean
    Set to true to read a specific response format from the LLM and set the status code, body, and headers proxied back to the client accordingly. You need to engineer your LLM prompt to return the correct format; see the plugin docs 'Overview' page for usage instructions.
    prompt String
    Use this prompt to tune the LLM system/assistant message for the proxy response returned from the upstream, and the response format you are expecting.
    transformationExtractPattern String
    Defines the regular expression that must match to indicate a successful AI transformation at the response phase. The first match will be set as the returning body. If the AI service's response doesn't match this pattern, a failure is returned to the client.
    httpProxyHost string
    A string representing a host name, such as example.com.
    httpProxyPort number
    An integer representing a port number between 0 and 65535, inclusive.
    httpTimeout number
    Timeout in milliseconds for the AI upstream service.
    httpsProxyHost string
    A string representing a host name, such as example.com.
    httpsProxyPort number
    An integer representing a port number between 0 and 65535, inclusive.
    httpsVerify boolean
    Verify the TLS certificate of the AI upstream service.
    llm GetGatewayPluginAiResponseTransformerConfigLlm
    maxRequestBodySize number
    Maximum allowed body size to be introspected.
    parseLlmResponseJsonInstructions boolean
    Set to true to read a specific response format from the LLM and set the status code, body, and headers proxied back to the client accordingly. You need to engineer your LLM prompt to return the correct format; see the plugin docs 'Overview' page for usage instructions.
    prompt string
    Use this prompt to tune the LLM system/assistant message for the proxy response returned from the upstream, and the response format you are expecting.
    transformationExtractPattern string
    Defines the regular expression that must match to indicate a successful AI transformation at the response phase. The first match will be set as the returning body. If the AI service's response doesn't match this pattern, a failure is returned to the client.
    http_proxy_host str
    A string representing a host name, such as example.com.
    http_proxy_port float
    An integer representing a port number between 0 and 65535, inclusive.
    http_timeout float
    Timeout in milliseconds for the AI upstream service.
    https_proxy_host str
    A string representing a host name, such as example.com.
    https_proxy_port float
    An integer representing a port number between 0 and 65535, inclusive.
    https_verify bool
    Verify the TLS certificate of the AI upstream service.
    llm GetGatewayPluginAiResponseTransformerConfigLlm
    max_request_body_size float
    Maximum allowed body size to be introspected.
    parse_llm_response_json_instructions bool
    Set to true to read a specific response format from the LLM and set the status code, body, and headers proxied back to the client accordingly. You need to engineer your LLM prompt to return the correct format; see the plugin docs 'Overview' page for usage instructions.
    prompt str
    Use this prompt to tune the LLM system/assistant message for the proxy response returned from the upstream, and the response format you are expecting.
    transformation_extract_pattern str
    Defines the regular expression that must match to indicate a successful AI transformation at the response phase. The first match will be set as the returning body. If the AI service's response doesn't match this pattern, a failure is returned to the client.
    httpProxyHost String
    A string representing a host name, such as example.com.
    httpProxyPort Number
    An integer representing a port number between 0 and 65535, inclusive.
    httpTimeout Number
    Timeout in milliseconds for the AI upstream service.
    httpsProxyHost String
    A string representing a host name, such as example.com.
    httpsProxyPort Number
    An integer representing a port number between 0 and 65535, inclusive.
    httpsVerify Boolean
    Verify the TLS certificate of the AI upstream service.
    llm Property Map
    maxRequestBodySize Number
    Maximum allowed body size to be introspected.
    parseLlmResponseJsonInstructions Boolean
    Set to true to read a specific response format from the LLM and set the status code, body, and headers proxied back to the client accordingly. You need to engineer your LLM prompt to return the correct format; see the plugin docs 'Overview' page for usage instructions.
    prompt String
    Use this prompt to tune the LLM system/assistant message for the proxy response returned from the upstream, and the response format you are expecting.
    transformationExtractPattern String
    Defines the regular expression that must match to indicate a successful AI transformation at the response phase. The first match will be set as the returning body. If the AI service's response doesn't match this pattern, a failure is returned to the client.
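
To make transformationExtractPattern concrete, here is an illustrative sketch in plain Python (not the plugin's implementation, which runs inside Kong Gateway) of the documented behavior: the first match of the configured regex in the LLM's response becomes the body returned to the client, and no match is treated as a failure:

```python
import re

def extract_transformed_body(pattern: str, llm_response: str):
    """First match of the configured pattern becomes the returned body;
    None models the failure the plugin returns when nothing matches."""
    match = re.search(pattern, llm_response)
    return match.group(0) if match else None

# The prompt asked the LLM to emit a JSON object; extract just that object.
llm_response = 'Sure! Here is the result: {"status": "ok"} Let me know...'
body = extract_transformed_body(r"\{.*\}", llm_response)
print(body)  # {"status": "ok"}
```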

    GetGatewayPluginAiResponseTransformerConfigLlm

    Auth GetGatewayPluginAiResponseTransformerConfigLlmAuth
    Logging GetGatewayPluginAiResponseTransformerConfigLlmLogging
    Model GetGatewayPluginAiResponseTransformerConfigLlmModel
    RouteType string
    The model's operation implementation, for this provider. Set to preserve to pass through without transformation.
    Auth GetGatewayPluginAiResponseTransformerConfigLlmAuth
    Logging GetGatewayPluginAiResponseTransformerConfigLlmLogging
    Model GetGatewayPluginAiResponseTransformerConfigLlmModel
    RouteType string
    The model's operation implementation, for this provider. Set to preserve to pass through without transformation.
    auth GetGatewayPluginAiResponseTransformerConfigLlmAuth
    logging GetGatewayPluginAiResponseTransformerConfigLlmLogging
    model GetGatewayPluginAiResponseTransformerConfigLlmModel
    routeType String
    The model's operation implementation, for this provider. Set to preserve to pass through without transformation.
    auth GetGatewayPluginAiResponseTransformerConfigLlmAuth
    logging GetGatewayPluginAiResponseTransformerConfigLlmLogging
    model GetGatewayPluginAiResponseTransformerConfigLlmModel
    routeType string
    The model's operation implementation, for this provider. Set to preserve to pass through without transformation.
    auth Property Map
    logging Property Map
    model Property Map
    routeType String
    The model's operation implementation, for this provider. Set to preserve to pass through without transformation.

    GetGatewayPluginAiResponseTransformerConfigLlmAuth

    AllowOverride bool
    If enabled, the authorization header or parameter can be overridden in the request by the value configured in the plugin.
    AwsAccessKeyId string
    Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_ACCESS_KEY_ID environment variable for this plugin instance.
    AwsSecretAccessKey string
    Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_SECRET_ACCESS_KEY environment variable for this plugin instance.
    AzureClientId string
    If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client ID.
    AzureClientSecret string
    If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client secret.
    AzureTenantId string
    If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the tenant ID.
    AzureUseManagedIdentity bool
    Set true to use the Azure Cloud Managed Identity (or user-assigned identity) to authenticate with Azure-provider models.
    GcpServiceAccountJson string
    Set this field to the full JSON of the GCP service account to authenticate, if required. If null (and gcp_use_service_account is true), Kong will attempt to read from environment variable GCP_SERVICE_ACCOUNT.
    GcpUseServiceAccount bool
    Use service account auth for GCP-based providers and models.
    HeaderName string
    If AI model requires authentication via Authorization or API key header, specify its name here.
    HeaderValue string
    Specify the full auth header value for 'header_name', for example 'Bearer key' or just 'key'.
    ParamLocation string
    Specify whether the 'param_name' and 'param_value' options go in a query string, or the POST form/JSON body.
    ParamName string
    If AI model requires authentication via query parameter, specify its name here.
    ParamValue string
    Specify the full parameter value for 'param_name'.
    AllowOverride bool
    If enabled, the authorization header or parameter can be overridden in the request by the value configured in the plugin.
    AwsAccessKeyId string
    Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_ACCESS_KEY_ID environment variable for this plugin instance.
    AwsSecretAccessKey string
    Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_SECRET_ACCESS_KEY environment variable for this plugin instance.
    AzureClientId string
    If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client ID.
    AzureClientSecret string
    If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client secret.
    AzureTenantId string
    If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the tenant ID.
    AzureUseManagedIdentity bool
    Set true to use the Azure Cloud Managed Identity (or user-assigned identity) to authenticate with Azure-provider models.
    GcpServiceAccountJson string
    Set this field to the full JSON of the GCP service account to authenticate, if required. If null (and gcp_use_service_account is true), Kong will attempt to read from environment variable GCP_SERVICE_ACCOUNT.
    GcpUseServiceAccount bool
    Use service account auth for GCP-based providers and models.
    HeaderName string
    If AI model requires authentication via Authorization or API key header, specify its name here.
    HeaderValue string
    Specify the full auth header value for 'header_name', for example 'Bearer key' or just 'key'.
    ParamLocation string
    Specify whether the 'param_name' and 'param_value' options go in a query string, or the POST form/JSON body.
    ParamName string
    If AI model requires authentication via query parameter, specify its name here.
    ParamValue string
    Specify the full parameter value for 'param_name'.
    allowOverride Boolean
    If enabled, the authorization header or parameter can be overridden in the request by the value configured in the plugin.
    awsAccessKeyId String
    Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_ACCESS_KEY_ID environment variable for this plugin instance.
    awsSecretAccessKey String
    Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_SECRET_ACCESS_KEY environment variable for this plugin instance.
    azureClientId String
    If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client ID.
    azureClientSecret String
    If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client secret.
    azureTenantId String
    If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the tenant ID.
    azureUseManagedIdentity Boolean
    Set true to use the Azure Cloud Managed Identity (or user-assigned identity) to authenticate with Azure-provider models.
    gcpServiceAccountJson String
    Set this field to the full JSON of the GCP service account to authenticate, if required. If null (and gcp_use_service_account is true), Kong will attempt to read from environment variable GCP_SERVICE_ACCOUNT.
    gcpUseServiceAccount Boolean
    Use service account auth for GCP-based providers and models.
    headerName String
    If AI model requires authentication via Authorization or API key header, specify its name here.
    headerValue String
    Specify the full auth header value for 'header_name', for example 'Bearer key' or just 'key'.
    paramLocation String
    Specify whether the 'param_name' and 'param_value' options go in a query string, or the POST form/JSON body.
    paramName String
    If AI model requires authentication via query parameter, specify its name here.
    paramValue String
    Specify the full parameter value for 'param_name'.
    allowOverride boolean
    If enabled, the authorization header or parameter can be overridden in the request by the value configured in the plugin.
    awsAccessKeyId string
    Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_ACCESS_KEY_ID environment variable for this plugin instance.
    awsSecretAccessKey string
    Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_SECRET_ACCESS_KEY environment variable for this plugin instance.
    azureClientId string
    If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client ID.
    azureClientSecret string
    If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client secret.
    azureTenantId string
    If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the tenant ID.
    azureUseManagedIdentity boolean
    Set true to use the Azure Cloud Managed Identity (or user-assigned identity) to authenticate with Azure-provider models.
    gcpServiceAccountJson string
    Set this field to the full JSON of the GCP service account to authenticate, if required. If null (and gcp_use_service_account is true), Kong will attempt to read from environment variable GCP_SERVICE_ACCOUNT.
    gcpUseServiceAccount boolean
    Use service account auth for GCP-based providers and models.
    headerName string
    If AI model requires authentication via Authorization or API key header, specify its name here.
    headerValue string
    Specify the full auth header value for 'header_name', for example 'Bearer key' or just 'key'.
    paramLocation string
    Specify whether the 'param_name' and 'param_value' options go in a query string, or the POST form/JSON body.
    paramName string
    If AI model requires authentication via query parameter, specify its name here.
    paramValue string
    Specify the full parameter value for 'param_name'.
    allow_override bool
    If enabled, the authorization header or parameter can be overridden in the request by the value configured in the plugin.
    aws_access_key_id str
    Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_ACCESS_KEY_ID environment variable for this plugin instance.
    aws_secret_access_key str
    Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_SECRET_ACCESS_KEY environment variable for this plugin instance.
    azure_client_id str
    If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client ID.
    azure_client_secret str
    If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client secret.
    azure_tenant_id str
    If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the tenant ID.
    azure_use_managed_identity bool
    Set true to use the Azure Cloud Managed Identity (or user-assigned identity) to authenticate with Azure-provider models.
    gcp_service_account_json str
    Set this field to the full JSON of the GCP service account to authenticate, if required. If null (and gcp_use_service_account is true), Kong will attempt to read from environment variable GCP_SERVICE_ACCOUNT.
    gcp_use_service_account bool
    Use service account auth for GCP-based providers and models.
    header_name str
    If AI model requires authentication via Authorization or API key header, specify its name here.
    header_value str
    Specify the full auth header value for 'header_name', for example 'Bearer key' or just 'key'.
    param_location str
    Specify whether the 'param_name' and 'param_value' options go in a query string, or the POST form/JSON body.
    param_name str
    If AI model requires authentication via query parameter, specify its name here.
    param_value str
    Specify the full parameter value for 'param_name'.
    allowOverride Boolean
    If enabled, the authorization header or parameter can be overridden in the request by the value configured in the plugin.
    awsAccessKeyId String
    Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_ACCESS_KEY_ID environment variable for this plugin instance.
    awsSecretAccessKey String
    Set this if you are using an AWS provider (Bedrock) and you are authenticating using static IAM User credentials. Setting this will override the AWS_SECRET_ACCESS_KEY environment variable for this plugin instance.
    azureClientId String
    If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client ID.
    azureClientSecret String
    If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the client secret.
    azureTenantId String
    If azure_use_managed_identity is set to true, and you need to use a different user-assigned identity for this LLM instance, set the tenant ID.
    azureUseManagedIdentity Boolean
    Set true to use the Azure Cloud Managed Identity (or user-assigned identity) to authenticate with Azure-provider models.
    gcpServiceAccountJson String
    Set this field to the full JSON of the GCP service account to authenticate, if required. If null (and gcp_use_service_account is true), Kong will attempt to read from environment variable GCP_SERVICE_ACCOUNT.
    gcpUseServiceAccount Boolean
    Use service account auth for GCP-based providers and models.
    headerName String
    If AI model requires authentication via Authorization or API key header, specify its name here.
    headerValue String
    Specify the full auth header value for 'header_name', for example 'Bearer key' or just 'key'.
    paramLocation String
    Specify whether the 'param_name' and 'param_value' options go in a query string, or the POST form/JSON body.
    paramName String
    If AI model requires authentication via query parameter, specify its name here.
    paramValue String
    Specify the full parameter value for 'param_name'.
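
As an illustration of how these auth fields combine, the hypothetical sketch below (not the plugin's actual code) shows the mapping the descriptions above imply: header_name/header_value inject an auth header, while param_name/param_value land in either the query string or the POST form/JSON body depending on param_location:

```python
def apply_llm_auth(request: dict, auth: dict) -> dict:
    """Hypothetical sketch of applying the auth settings to an outbound
    upstream request; the real plugin performs this inside Kong Gateway."""
    out = {
        "headers": dict(request.get("headers", {})),
        "query": dict(request.get("query", {})),
        "body": dict(request.get("body", {})),
    }
    if auth.get("header_name"):
        # e.g. header_name="Authorization", header_value="Bearer <key>"
        out["headers"][auth["header_name"]] = auth.get("header_value")
    if auth.get("param_name"):
        # param_location selects query string vs. POST form/JSON body
        where = "query" if auth.get("param_location") == "query" else "body"
        out[where][auth["param_name"]] = auth.get("param_value")
    return out

req = apply_llm_auth({}, {"header_name": "Authorization",
                          "header_value": "Bearer my-key"})
print(req["headers"]["Authorization"])  # Bearer my-key
```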

    GetGatewayPluginAiResponseTransformerConfigLlmLogging

    LogPayloads bool
    If enabled, will log the request and response body into the Kong log plugin(s) output.
    LogStatistics bool
    If enabled and supported by the driver, will add model usage and token metrics into the Kong log plugin(s) output.
    LogPayloads bool
    If enabled, will log the request and response body into the Kong log plugin(s) output.
    LogStatistics bool
    If enabled and supported by the driver, will add model usage and token metrics into the Kong log plugin(s) output.
    logPayloads Boolean
    If enabled, will log the request and response body into the Kong log plugin(s) output.
    logStatistics Boolean
    If enabled and supported by the driver, will add model usage and token metrics into the Kong log plugin(s) output.
    logPayloads boolean
    If enabled, will log the request and response body into the Kong log plugin(s) output.
    logStatistics boolean
    If enabled and supported by the driver, will add model usage and token metrics into the Kong log plugin(s) output.
    log_payloads bool
    If enabled, will log the request and response body into the Kong log plugin(s) output.
    log_statistics bool
    If enabled and supported by the driver, will add model usage and token metrics into the Kong log plugin(s) output.
    logPayloads Boolean
    If enabled, will log the request and response body into the Kong log plugin(s) output.
    logStatistics Boolean
    If enabled and supported by the driver, will add model usage and token metrics into the Kong log plugin(s) output.

    GetGatewayPluginAiResponseTransformerConfigLlmModel

    Name string
    Model name to execute.
    Options GetGatewayPluginAiResponseTransformerConfigLlmModelOptions
    Key/value settings for the model
    Provider string
    AI provider request format - Kong translates requests to and from the specified backend compatible formats.
    Name string
    Model name to execute.
    Options GetGatewayPluginAiResponseTransformerConfigLlmModelOptions
    Key/value settings for the model
    Provider string
    AI provider request format - Kong translates requests to and from the specified backend compatible formats.
    name String
    Model name to execute.
    options GetGatewayPluginAiResponseTransformerConfigLlmModelOptions
    Key/value settings for the model
    provider String
    AI provider request format - Kong translates requests to and from the specified backend compatible formats.
    name string
    Model name to execute.
    options GetGatewayPluginAiResponseTransformerConfigLlmModelOptions
    Key/value settings for the model
    provider string
    AI provider request format - Kong translates requests to and from the specified backend compatible formats.
    name str
    Model name to execute.
    options GetGatewayPluginAiResponseTransformerConfigLlmModelOptions
    Key/value settings for the model
    provider str
    AI provider request format - Kong translates requests to and from the specified backend compatible formats.
    name String
    Model name to execute.
    options Property Map
    Key/value settings for the model
    provider String
    AI provider request format - Kong translates requests to and from the specified backend compatible formats.

    GetGatewayPluginAiResponseTransformerConfigLlmModelOptions

    AnthropicVersion string
    Defines the schema/API version, if using Anthropic provider.
    AzureApiVersion string
    'api-version' for Azure OpenAI instances.
    AzureDeploymentId string
    Deployment ID for Azure OpenAI instances.
    AzureInstance string
    Instance name for Azure OpenAI hosted models.
    Bedrock GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsBedrock
    Gemini GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsGemini
    Huggingface GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsHuggingface
    InputCost double
    Defines the cost per 1M tokens in your prompt.
    Llama2Format string
    If using llama2 provider, select the upstream message format.
    MaxTokens double
    Defines the max_tokens, if using chat or completion models.
    MistralFormat string
    If using mistral provider, select the upstream message format.
    OutputCost double
    Defines the cost per 1M tokens in the output of the AI.
    Temperature double
    Defines the matching temperature, if using chat or completion models.
    TopK double
    Defines the top-k most likely tokens, if supported.
    TopP double
    Defines the top-p probability mass, if supported.
    UpstreamPath string
    Manually specify or override the AI operation path; used, for example, with the 'preserve' route_type.
    UpstreamUrl string
    Manually specify or override the full URL to the AI operation endpoints, when calling (self-)hosted models, or for running via a private endpoint.
    AnthropicVersion string
    Defines the schema/API version, if using Anthropic provider.
    AzureApiVersion string
    'api-version' for Azure OpenAI instances.
    AzureDeploymentId string
    Deployment ID for Azure OpenAI instances.
    AzureInstance string
    Instance name for Azure OpenAI hosted models.
    Bedrock GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsBedrock
    Gemini GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsGemini
    Huggingface GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsHuggingface
    InputCost float64
    Defines the cost per 1M tokens in your prompt.
    Llama2Format string
    If using llama2 provider, select the upstream message format.
    MaxTokens float64
    Defines the max_tokens, if using chat or completion models.
    MistralFormat string
    If using mistral provider, select the upstream message format.
    OutputCost float64
    Defines the cost per 1M tokens in the output of the AI.
    Temperature float64
    Defines the matching temperature, if using chat or completion models.
    TopK float64
    Defines the top-k most likely tokens, if supported.
    TopP float64
    Defines the top-p probability mass, if supported.
    UpstreamPath string
    Manually specify or override the AI operation path, used when e.g. using the 'preserve' route_type.
    UpstreamUrl string
    Manually specify or override the full URL to the AI operation endpoints, when calling (self-)hosted models, or for running via a private endpoint.
    anthropicVersion String
    Defines the schema/API version, if using Anthropic provider.
    azureApiVersion String
    'api-version' for Azure OpenAI instances.
    azureDeploymentId String
    Deployment ID for Azure OpenAI instances.
    azureInstance String
    Instance name for Azure OpenAI hosted models.
    bedrock GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsBedrock
    gemini GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsGemini
    huggingface GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsHuggingface
    inputCost Double
    Defines the cost per 1M tokens in your prompt.
    llama2Format String
    If using llama2 provider, select the upstream message format.
    maxTokens Double
    Defines the max_tokens, if using chat or completion models.
    mistralFormat String
    If using mistral provider, select the upstream message format.
    outputCost Double
    Defines the cost per 1M tokens in the output of the AI.
    temperature Double
    Defines the matching temperature, if using chat or completion models.
    topK Double
    Defines the top-k most likely tokens, if supported.
    topP Double
    Defines the top-p probability mass, if supported.
    upstreamPath String
    Manually specify or override the AI operation path, used when e.g. using the 'preserve' route_type.
    upstreamUrl String
    Manually specify or override the full URL to the AI operation endpoints, when calling (self-)hosted models, or for running via a private endpoint.
    anthropicVersion string
    Defines the schema/API version, if using Anthropic provider.
    azureApiVersion string
    'api-version' for Azure OpenAI instances.
    azureDeploymentId string
    Deployment ID for Azure OpenAI instances.
    azureInstance string
    Instance name for Azure OpenAI hosted models.
    bedrock GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsBedrock
    gemini GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsGemini
    huggingface GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsHuggingface
    inputCost number
    Defines the cost per 1M tokens in your prompt.
    llama2Format string
    If using llama2 provider, select the upstream message format.
    maxTokens number
    Defines the max_tokens, if using chat or completion models.
    mistralFormat string
    If using mistral provider, select the upstream message format.
    outputCost number
    Defines the cost per 1M tokens in the output of the AI.
    temperature number
    Defines the matching temperature, if using chat or completion models.
    topK number
    Defines the top-k most likely tokens, if supported.
    topP number
    Defines the top-p probability mass, if supported.
    upstreamPath string
    Manually specify or override the AI operation path, used when e.g. using the 'preserve' route_type.
    upstreamUrl string
    Manually specify or override the full URL to the AI operation endpoints, when calling (self-)hosted models, or for running via a private endpoint.
    anthropic_version str
    Defines the schema/API version, if using Anthropic provider.
    azure_api_version str
    'api-version' for Azure OpenAI instances.
    azure_deployment_id str
    Deployment ID for Azure OpenAI instances.
    azure_instance str
    Instance name for Azure OpenAI hosted models.
    bedrock GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsBedrock
    gemini GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsGemini
    huggingface GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsHuggingface
    input_cost float
    Defines the cost per 1M tokens in your prompt.
    llama2_format str
    If using llama2 provider, select the upstream message format.
    max_tokens float
    Defines the max_tokens, if using chat or completion models.
    mistral_format str
    If using mistral provider, select the upstream message format.
    output_cost float
    Defines the cost per 1M tokens in the output of the AI.
    temperature float
    Defines the matching temperature, if using chat or completion models.
    top_k float
    Defines the top-k most likely tokens, if supported.
    top_p float
    Defines the top-p probability mass, if supported.
    upstream_path str
    Manually specify or override the AI operation path, used when e.g. using the 'preserve' route_type.
    upstream_url str
    Manually specify or override the full URL to the AI operation endpoints, when calling (self-)hosted models, or for running via a private endpoint.
    anthropicVersion String
    Defines the schema/API version, if using Anthropic provider.
    azureApiVersion String
    'api-version' for Azure OpenAI instances.
    azureDeploymentId String
    Deployment ID for Azure OpenAI instances.
    azureInstance String
    Instance name for Azure OpenAI hosted models.
    bedrock Property Map
    gemini Property Map
    huggingface Property Map
    inputCost Number
    Defines the cost per 1M tokens in your prompt.
    llama2Format String
    If using llama2 provider, select the upstream message format.
    maxTokens Number
    Defines the max_tokens, if using chat or completion models.
    mistralFormat String
    If using mistral provider, select the upstream message format.
    outputCost Number
    Defines the cost per 1M tokens in the output of the AI.
    temperature Number
    Defines the matching temperature, if using chat or completion models.
    topK Number
    Defines the top-k most likely tokens, if supported.
    topP Number
    Defines the top-p probability mass, if supported.
    upstreamPath String
    Manually specify or override the AI operation path, used when e.g. using the 'preserve' route_type.
    upstreamUrl String
    Manually specify or override the full URL to the AI operation endpoints, when calling (self-)hosted models, or for running via a private endpoint.
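The `InputCost` and `OutputCost` options above are both expressed as cost per 1M tokens. As an illustration of how such rates translate into a per-request figure, here is a hypothetical helper (the function name and the example rates are not part of the SDK or of Kong's behavior):

```python
def estimate_request_cost(prompt_tokens: int, completion_tokens: int,
                          input_cost: float, output_cost: float) -> float:
    """Estimated spend for one request, given per-1M-token rates
    matching the plugin's input_cost / output_cost options."""
    return (prompt_tokens / 1_000_000) * input_cost + \
           (completion_tokens / 1_000_000) * output_cost

# e.g. 2,000 prompt tokens and 500 completion tokens
# at rates of 10.0 (input) and 30.0 (output) per 1M tokens
cost = estimate_request_cost(2_000, 500, 10.0, 30.0)
```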

    GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsBedrock

    AwsRegion string
    If using AWS providers (Bedrock) you can override the AWS_REGION environment variable by setting this option.
    AwsRegion string
    If using AWS providers (Bedrock) you can override the AWS_REGION environment variable by setting this option.
    awsRegion String
    If using AWS providers (Bedrock) you can override the AWS_REGION environment variable by setting this option.
    awsRegion string
    If using AWS providers (Bedrock) you can override the AWS_REGION environment variable by setting this option.
    aws_region str
    If using AWS providers (Bedrock) you can override the AWS_REGION environment variable by setting this option.
    awsRegion String
    If using AWS providers (Bedrock) you can override the AWS_REGION environment variable by setting this option.

    GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsGemini

    ApiEndpoint string
    If running Gemini on Vertex, specify the regional API endpoint (hostname only).
    LocationId string
    If running Gemini on Vertex, specify the location ID.
    ProjectId string
    If running Gemini on Vertex, specify the project ID.
    ApiEndpoint string
    If running Gemini on Vertex, specify the regional API endpoint (hostname only).
    LocationId string
    If running Gemini on Vertex, specify the location ID.
    ProjectId string
    If running Gemini on Vertex, specify the project ID.
    apiEndpoint String
    If running Gemini on Vertex, specify the regional API endpoint (hostname only).
    locationId String
    If running Gemini on Vertex, specify the location ID.
    projectId String
    If running Gemini on Vertex, specify the project ID.
    apiEndpoint string
    If running Gemini on Vertex, specify the regional API endpoint (hostname only).
    locationId string
    If running Gemini on Vertex, specify the location ID.
    projectId string
    If running Gemini on Vertex, specify the project ID.
    api_endpoint str
    If running Gemini on Vertex, specify the regional API endpoint (hostname only).
    location_id str
    If running Gemini on Vertex, specify the location ID.
    project_id str
    If running Gemini on Vertex, specify the project ID.
    apiEndpoint String
    If running Gemini on Vertex, specify the regional API endpoint (hostname only).
    locationId String
    If running Gemini on Vertex, specify the location ID.
    projectId String
    If running Gemini on Vertex, specify the project ID.
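The three Gemini-on-Vertex options identify a regional model endpoint. As a sketch of how they might compose into a full URL (the path layout is an assumption based on Vertex AI REST conventions, not something this data source returns):

```python
def vertex_model_url(api_endpoint: str, project_id: str,
                     location_id: str, model: str) -> str:
    """Illustrative only: compose a Vertex AI model URL from the
    api_endpoint (hostname only), project_id, and location_id options.
    The path shape is an assumption about the Vertex AI REST API."""
    return (f"https://{api_endpoint}/v1/projects/{project_id}"
            f"/locations/{location_id}/publishers/google/models/{model}")

url = vertex_model_url("us-central1-aiplatform.googleapis.com",
                       "my-project", "us-central1", "gemini-1.5-pro")
```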

    GetGatewayPluginAiResponseTransformerConfigLlmModelOptionsHuggingface

    UseCache bool
    Use the cache layer on the inference API
    WaitForModel bool
    Wait for the model if it is not ready
    UseCache bool
    Use the cache layer on the inference API
    WaitForModel bool
    Wait for the model if it is not ready
    useCache Boolean
    Use the cache layer on the inference API
    waitForModel Boolean
    Wait for the model if it is not ready
    useCache boolean
    Use the cache layer on the inference API
    waitForModel boolean
    Wait for the model if it is not ready
    use_cache bool
    Use the cache layer on the inference API
    wait_for_model bool
    Wait for the model if it is not ready
    useCache Boolean
    Use the cache layer on the inference API
    waitForModel Boolean
    Wait for the model if it is not ready
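The two Hugging Face options correspond to the `use_cache` and `wait_for_model` switches of the Hugging Face Inference API. A sketch of how they could map onto a request body's `options` object (how Kong actually forwards them is an assumption, not documented here):

```python
def hf_options(use_cache: bool, wait_for_model: bool) -> dict:
    """Sketch: map the plugin's use_cache / wait_for_model options onto
    the Hugging Face Inference API's request-body `options` object."""
    return {"options": {"use_cache": use_cache,
                        "wait_for_model": wait_for_model}}

payload = {"inputs": "Hello", **hf_options(use_cache=True, wait_for_model=True)}
```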

    GetGatewayPluginAiResponseTransformerConsumer

    Id string
    Id string
    id String
    id string
    id str
    id String

    GetGatewayPluginAiResponseTransformerConsumerGroup

    Id string
    Id string
    id String
    id string
    id str
    id String

    GetGatewayPluginAiResponseTransformerOrdering

    GetGatewayPluginAiResponseTransformerOrderingAfter

    Accesses List<string>
    Accesses []string
    accesses List<String>
    accesses string[]
    accesses Sequence[str]
    accesses List<String>

    GetGatewayPluginAiResponseTransformerOrderingBefore

    Accesses List<string>
    Accesses []string
    accesses List<String>
    accesses string[]
    accesses Sequence[str]
    accesses List<String>
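The Ordering `Before`/`After` blocks each carry a list of plugin names for the `access` phase, used by Kong Gateway's dynamic plugin ordering. A sketch of the resulting structure (the nesting and the plugin name are illustrative assumptions, not values returned by this data source):

```python
def plugin_ordering(before_access=None, after_access=None) -> dict:
    """Build an ordering structure mirroring the Before/After
    `accesses` lists above. Plugin names are placeholders."""
    ordering = {}
    if before_access:
        ordering["before"] = {"access": list(before_access)}
    if after_access:
        ordering["after"] = {"access": list(after_access)}
    return ordering

# e.g. run this plugin's access phase before rate-limiting
order = plugin_ordering(before_access=["rate-limiting"])
```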

    GetGatewayPluginAiResponseTransformerRoute

    Id string
    Id string
    id String
    id string
    id str
    id String

    GetGatewayPluginAiResponseTransformerService

    Id string
    Id string
    id String
    id string
    id str
    id String

    Package Details

    Repository: konnect kong/terraform-provider-konnect
    License:
    Notes: This Pulumi package is based on the konnect Terraform Provider.