We recommend new projects start with resources from the AWS provider.
aws-native.sagemaker.getInferenceComponent
Resource Type definition for AWS::SageMaker::InferenceComponent
Using getInferenceComponent
Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.
function getInferenceComponent(args: GetInferenceComponentArgs, opts?: InvokeOptions): Promise<GetInferenceComponentResult>
function getInferenceComponentOutput(args: GetInferenceComponentOutputArgs, opts?: InvokeOptions): Output<GetInferenceComponentResult>
def get_inference_component(inference_component_arn: Optional[str] = None,
                            opts: Optional[InvokeOptions] = None) -> GetInferenceComponentResult
def get_inference_component_output(inference_component_arn: Optional[pulumi.Input[str]] = None,
                            opts: Optional[InvokeOptions] = None) -> Output[GetInferenceComponentResult]
func LookupInferenceComponent(ctx *Context, args *LookupInferenceComponentArgs, opts ...InvokeOption) (*LookupInferenceComponentResult, error)
func LookupInferenceComponentOutput(ctx *Context, args *LookupInferenceComponentOutputArgs, opts ...InvokeOption) LookupInferenceComponentResultOutput

> Note: This function is named LookupInferenceComponent in the Go SDK.

public static class GetInferenceComponent 
{
    public static Task<GetInferenceComponentResult> InvokeAsync(GetInferenceComponentArgs args, InvokeOptions? opts = null)
    public static Output<GetInferenceComponentResult> Invoke(GetInferenceComponentInvokeArgs args, InvokeOptions? opts = null)
}
public static CompletableFuture<GetInferenceComponentResult> getInferenceComponent(GetInferenceComponentArgs args, InvokeOptions options)
public static Output<GetInferenceComponentResult> getInferenceComponent(GetInferenceComponentArgs args, InvokeOptions options)
fn::invoke:
  function: aws-native:sagemaker:getInferenceComponent
  arguments:
    # arguments dictionary

The following arguments are supported:
- InferenceComponentArn string
- The Amazon Resource Name (ARN) of the inference component.
- InferenceComponentArn string
- The Amazon Resource Name (ARN) of the inference component.
- inferenceComponentArn String
- The Amazon Resource Name (ARN) of the inference component.
- inferenceComponentArn string
- The Amazon Resource Name (ARN) of the inference component.
- inference_component_arn str
- The Amazon Resource Name (ARN) of the inference component.
- inferenceComponentArn String
- The Amazon Resource Name (ARN) of the inference component.
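For example, here is a minimal TypeScript sketch of both invocation forms. The ARN below is a placeholder used only for illustration; substitute the ARN of an existing inference component.

import * as aws_native from "@pulumi/aws-native";

// Placeholder ARN -- replace with the ARN of an existing inference component.
const icArn = "arn:aws:sagemaker:us-east-1:123456789012:inference-component/example";

// Direct form: returns a Promise-wrapped result.
const direct = aws_native.sagemaker.getInferenceComponent({
    inferenceComponentArn: icArn,
});

// Output form: accepts Inputs and returns an Output, so it composes with other resources.
const lookedUp = aws_native.sagemaker.getInferenceComponentOutput({
    inferenceComponentArn: icArn,
});

export const endpointName = lookedUp.endpointName;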
getInferenceComponent Result
The following output properties are available:
- CreationTime string
- The time when the inference component was created.
- EndpointArn string
- The Amazon Resource Name (ARN) of the endpoint that hosts the inference component.
- EndpointName string
- The name of the endpoint that hosts the inference component.
- FailureReason string
- InferenceComponentArn string
- The Amazon Resource Name (ARN) of the inference component.
- InferenceComponentName string
- The name of the inference component.
- InferenceComponentStatus Pulumi.AwsNative.SageMaker.InferenceComponentStatus
- The status of the inference component.
- LastModifiedTime string
- The time when the inference component was last updated.
- RuntimeConfig Pulumi.AwsNative.SageMaker.Outputs.InferenceComponentRuntimeConfig
- Specification Pulumi.AwsNative.SageMaker.Outputs.InferenceComponentSpecification
- Tags List<Pulumi.AwsNative.Outputs.Tag>
- VariantName string
- The name of the production variant that hosts the inference component.
- CreationTime string
- The time when the inference component was created.
- EndpointArn string
- The Amazon Resource Name (ARN) of the endpoint that hosts the inference component.
- EndpointName string
- The name of the endpoint that hosts the inference component.
- FailureReason string
- InferenceComponentArn string
- The Amazon Resource Name (ARN) of the inference component.
- InferenceComponentName string
- The name of the inference component.
- InferenceComponentStatus InferenceComponentStatus
- The status of the inference component.
- LastModifiedTime string
- The time when the inference component was last updated.
- RuntimeConfig InferenceComponentRuntimeConfig
- Specification InferenceComponentSpecification
- Tags []Tag
- VariantName string
- The name of the production variant that hosts the inference component.
- creationTime String
- The time when the inference component was created.
- endpointArn String
- The Amazon Resource Name (ARN) of the endpoint that hosts the inference component.
- endpointName String
- The name of the endpoint that hosts the inference component.
- failureReason String
- inferenceComponentArn String
- The Amazon Resource Name (ARN) of the inference component.
- inferenceComponentName String
- The name of the inference component.
- inferenceComponentStatus InferenceComponentStatus
- The status of the inference component.
- lastModifiedTime String
- The time when the inference component was last updated.
- runtimeConfig InferenceComponentRuntimeConfig
- specification InferenceComponentSpecification
- tags List<Tag>
- variantName String
- The name of the production variant that hosts the inference component.
- creationTime string
- The time when the inference component was created.
- endpointArn string
- The Amazon Resource Name (ARN) of the endpoint that hosts the inference component.
- endpointName string
- The name of the endpoint that hosts the inference component.
- failureReason string
- inferenceComponentArn string
- The Amazon Resource Name (ARN) of the inference component.
- inferenceComponentName string
- The name of the inference component.
- inferenceComponentStatus InferenceComponentStatus
- The status of the inference component.
- lastModifiedTime string
- The time when the inference component was last updated.
- runtimeConfig InferenceComponentRuntimeConfig
- specification InferenceComponentSpecification
- tags Tag[]
- variantName string
- The name of the production variant that hosts the inference component.
- creation_time str
- The time when the inference component was created.
- endpoint_arn str
- The Amazon Resource Name (ARN) of the endpoint that hosts the inference component.
- endpoint_name str
- The name of the endpoint that hosts the inference component.
- failure_reason str
- inference_component_arn str
- The Amazon Resource Name (ARN) of the inference component.
- inference_component_name str
- The name of the inference component.
- inference_component_status InferenceComponentStatus
- The status of the inference component.
- last_modified_time str
- The time when the inference component was last updated.
- runtime_config InferenceComponentRuntimeConfig
- specification InferenceComponentSpecification
- tags Sequence[root_Tag]
- variant_name str
- The name of the production variant that hosts the inference component.
- creationTime String
- The time when the inference component was created.
- endpointArn String
- The Amazon Resource Name (ARN) of the endpoint that hosts the inference component.
- endpointName String
- The name of the endpoint that hosts the inference component.
- failureReason String
- inferenceComponentArn String
- The Amazon Resource Name (ARN) of the inference component.
- inferenceComponentName String
- The name of the inference component.
- inferenceComponentStatus "InService" | "Creating" | "Updating" | "Failed" | "Deleting"
- The status of the inference component.
- lastModifiedTime String
- The time when the inference component was last updated.
- runtimeConfig Property Map
- specification Property Map
- tags List<Property Map>
- variantName String
- The name of the production variant that hosts the inference component.
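As a sketch of consuming these outputs in TypeScript (property names follow the TypeScript listing above; the ARN is a placeholder), the output form's lifted properties can be exported directly:

import * as aws_native from "@pulumi/aws-native";

const ic = aws_native.sagemaker.getInferenceComponentOutput({
    inferenceComponentArn: "arn:aws:sagemaker:us-east-1:123456789012:inference-component/example",
});

// Each result property listed above is available as a lifted Output on the result.
export const endpointName = ic.endpointName;
export const status = ic.inferenceComponentStatus;
export const variantName = ic.variantName;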
Supporting Types
InferenceComponentComputeResourceRequirements    
- MaxMemoryRequiredInMb int
- The maximum MB of memory to allocate to run a model that you assign to an inference component.
- MinMemoryRequiredInMb int
- The minimum MB of memory to allocate to run a model that you assign to an inference component.
- NumberOfAcceleratorDevicesRequired double
- The number of accelerators to allocate to run a model that you assign to an inference component. Accelerators include GPUs and AWS Inferentia.
- NumberOfCpuCoresRequired double
- The number of CPU cores to allocate to run a model that you assign to an inference component.
- MaxMemoryRequiredInMb int
- The maximum MB of memory to allocate to run a model that you assign to an inference component.
- MinMemoryRequiredInMb int
- The minimum MB of memory to allocate to run a model that you assign to an inference component.
- NumberOfAcceleratorDevicesRequired float64
- The number of accelerators to allocate to run a model that you assign to an inference component. Accelerators include GPUs and AWS Inferentia.
- NumberOfCpuCoresRequired float64
- The number of CPU cores to allocate to run a model that you assign to an inference component.
- maxMemoryRequiredInMb Integer
- The maximum MB of memory to allocate to run a model that you assign to an inference component.
- minMemoryRequiredInMb Integer
- The minimum MB of memory to allocate to run a model that you assign to an inference component.
- numberOfAcceleratorDevicesRequired Double
- The number of accelerators to allocate to run a model that you assign to an inference component. Accelerators include GPUs and AWS Inferentia.
- numberOfCpuCoresRequired Double
- The number of CPU cores to allocate to run a model that you assign to an inference component.
- maxMemoryRequiredInMb number
- The maximum MB of memory to allocate to run a model that you assign to an inference component.
- minMemoryRequiredInMb number
- The minimum MB of memory to allocate to run a model that you assign to an inference component.
- numberOfAcceleratorDevicesRequired number
- The number of accelerators to allocate to run a model that you assign to an inference component. Accelerators include GPUs and AWS Inferentia.
- numberOfCpuCoresRequired number
- The number of CPU cores to allocate to run a model that you assign to an inference component.
- max_memory_required_in_mb int
- The maximum MB of memory to allocate to run a model that you assign to an inference component.
- min_memory_required_in_mb int
- The minimum MB of memory to allocate to run a model that you assign to an inference component.
- number_of_accelerator_devices_required float
- The number of accelerators to allocate to run a model that you assign to an inference component. Accelerators include GPUs and AWS Inferentia.
- number_of_cpu_cores_required float
- The number of CPU cores to allocate to run a model that you assign to an inference component.
- maxMemoryRequiredInMb Number
- The maximum MB of memory to allocate to run a model that you assign to an inference component.
- minMemoryRequiredInMb Number
- The minimum MB of memory to allocate to run a model that you assign to an inference component.
- numberOfAcceleratorDevicesRequired Number
- The number of accelerators to allocate to run a model that you assign to an inference component. Accelerators include GPUs and AWS Inferentia.
- numberOfCpuCoresRequired Number
- The number of CPU cores to allocate to run a model that you assign to an inference component.
InferenceComponentContainerSpecification   
- ArtifactUrl string
- The Amazon S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).
- DeployedImage Pulumi.AwsNative.SageMaker.Inputs.InferenceComponentDeployedImage
- Environment Dictionary<string, string>
- The environment variables to set in the Docker container. Each key and value in the Environment string-to-string map can have length of up to 1024. We support up to 16 entries in the map.
- Image string
- The Amazon Elastic Container Registry (Amazon ECR) path where the Docker image for the model is stored.
- ArtifactUrl string
- The Amazon S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).
- DeployedImage InferenceComponentDeployedImage
- Environment map[string]string
- The environment variables to set in the Docker container. Each key and value in the Environment string-to-string map can have length of up to 1024. We support up to 16 entries in the map.
- Image string
- The Amazon Elastic Container Registry (Amazon ECR) path where the Docker image for the model is stored.
- artifactUrl String
- The Amazon S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).
- deployedImage InferenceComponentDeployedImage
- environment Map<String,String>
- The environment variables to set in the Docker container. Each key and value in the Environment string-to-string map can have length of up to 1024. We support up to 16 entries in the map.
- image String
- The Amazon Elastic Container Registry (Amazon ECR) path where the Docker image for the model is stored.
- artifactUrl string
- The Amazon S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).
- deployedImage InferenceComponentDeployedImage
- environment {[key: string]: string}
- The environment variables to set in the Docker container. Each key and value in the Environment string-to-string map can have length of up to 1024. We support up to 16 entries in the map.
- image string
- The Amazon Elastic Container Registry (Amazon ECR) path where the Docker image for the model is stored.
- artifact_url str
- The Amazon S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).
- deployed_image InferenceComponentDeployedImage
- environment Mapping[str, str]
- The environment variables to set in the Docker container. Each key and value in the Environment string-to-string map can have length of up to 1024. We support up to 16 entries in the map.
- image str
- The Amazon Elastic Container Registry (Amazon ECR) path where the Docker image for the model is stored.
- artifactUrl String
- The Amazon S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).
- deployedImage Property Map
- environment Map<String>
- The environment variables to set in the Docker container. Each key and value in the Environment string-to-string map can have length of up to 1024. We support up to 16 entries in the map.
- image String
- The Amazon Elastic Container Registry (Amazon ECR) path where the Docker image for the model is stored.
InferenceComponentDeployedImage   
- ResolutionTime string
- The date and time when the image path for the model resolved to the ResolvedImage
- ResolvedImage string
- The specific digest path of the image hosted in this ProductionVariant.
- SpecifiedImage string
- The image path you specified when you created the model.
- ResolutionTime string
- The date and time when the image path for the model resolved to the ResolvedImage
- ResolvedImage string
- The specific digest path of the image hosted in this ProductionVariant.
- SpecifiedImage string
- The image path you specified when you created the model.
- resolutionTime String
- The date and time when the image path for the model resolved to the ResolvedImage
- resolvedImage String
- The specific digest path of the image hosted in this ProductionVariant.
- specifiedImage String
- The image path you specified when you created the model.
- resolutionTime string
- The date and time when the image path for the model resolved to the ResolvedImage
- resolvedImage string
- The specific digest path of the image hosted in this ProductionVariant.
- specifiedImage string
- The image path you specified when you created the model.
- resolution_time str
- The date and time when the image path for the model resolved to the ResolvedImage
- resolved_image str
- The specific digest path of the image hosted in this ProductionVariant.
- specified_image str
- The image path you specified when you created the model.
- resolutionTime String
- The date and time when the image path for the model resolved to the ResolvedImage
- resolvedImage String
- The specific digest path of the image hosted in this ProductionVariant.
- specifiedImage String
- The image path you specified when you created the model.
InferenceComponentRuntimeConfig   
- CopyCount int
- The number of runtime copies of the model container to deploy with the inference component. Each copy can serve inference requests.
- CurrentCopyCount int
- DesiredCopyCount int
- CopyCount int
- The number of runtime copies of the model container to deploy with the inference component. Each copy can serve inference requests.
- CurrentCopyCount int
- DesiredCopyCount int
- copyCount Integer
- The number of runtime copies of the model container to deploy with the inference component. Each copy can serve inference requests.
- currentCopyCount Integer
- desiredCopyCount Integer
- copyCount number
- The number of runtime copies of the model container to deploy with the inference component. Each copy can serve inference requests.
- currentCopyCount number
- desiredCopyCount number
- copy_count int
- The number of runtime copies of the model container to deploy with the inference component. Each copy can serve inference requests.
- current_copy_count int
- desired_copy_count int
- copyCount Number
- The number of runtime copies of the model container to deploy with the inference component. Each copy can serve inference requests.
- currentCopyCount Number
- desiredCopyCount Number
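For illustration, a short TypeScript sketch of reading these copy counts from a lookup result (placeholder ARN; the optional runtimeConfig is unwrapped with apply):

import * as aws_native from "@pulumi/aws-native";

const ic = aws_native.sagemaker.getInferenceComponentOutput({
    inferenceComponentArn: "arn:aws:sagemaker:us-east-1:123456789012:inference-component/example",
});

// runtimeConfig may be undefined on the result, so guard the nested reads.
export const desiredCopies = ic.runtimeConfig.apply(rc => rc?.desiredCopyCount);
export const currentCopies = ic.runtimeConfig.apply(rc => rc?.currentCopyCount);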
InferenceComponentSpecification  
- BaseInferenceComponentName string
- The name of an existing inference component that is to contain the inference component that you're creating with your request. Specify this parameter only if your request is meant to create an adapter inference component. An adapter inference component contains the path to an adapter model. The purpose of the adapter model is to tailor the inference output of a base foundation model, which is hosted by the base inference component. The adapter inference component uses the compute resources that you assigned to the base inference component. When you create an adapter inference component, use the Container parameter to specify the location of the adapter artifacts. In the parameter value, use the ArtifactUrl parameter of the InferenceComponentContainerSpecification data type. Before you can create an adapter inference component, you must have an existing inference component that contains the foundation model that you want to adapt.
- ComputeResourceRequirements Pulumi.AwsNative.SageMaker.Inputs.InferenceComponentComputeResourceRequirements
- The compute resources allocated to run the model, plus any adapter models, that you assign to the inference component. Omit this parameter if your request is meant to create an adapter inference component. An adapter inference component is loaded by a base inference component, and it uses the compute resources of the base inference component.
- Container Pulumi.AwsNative.SageMaker.Inputs.InferenceComponentContainerSpecification
- Defines a container that provides the runtime environment for a model that you deploy with an inference component.
- ModelName string
- The name of an existing SageMaker AI model object in your account that you want to deploy with the inference component.
- StartupParameters Pulumi.AwsNative.SageMaker.Inputs.InferenceComponentStartupParameters
- Settings that take effect while the model container starts up.
- BaseInferenceComponentName string
- The name of an existing inference component that is to contain the inference component that you're creating with your request. Specify this parameter only if your request is meant to create an adapter inference component. An adapter inference component contains the path to an adapter model. The purpose of the adapter model is to tailor the inference output of a base foundation model, which is hosted by the base inference component. The adapter inference component uses the compute resources that you assigned to the base inference component. When you create an adapter inference component, use the Container parameter to specify the location of the adapter artifacts. In the parameter value, use the ArtifactUrl parameter of the InferenceComponentContainerSpecification data type. Before you can create an adapter inference component, you must have an existing inference component that contains the foundation model that you want to adapt.
- ComputeResourceRequirements InferenceComponentComputeResourceRequirements
- The compute resources allocated to run the model, plus any adapter models, that you assign to the inference component. Omit this parameter if your request is meant to create an adapter inference component. An adapter inference component is loaded by a base inference component, and it uses the compute resources of the base inference component.
- Container InferenceComponentContainerSpecification
- Defines a container that provides the runtime environment for a model that you deploy with an inference component.
- ModelName string
- The name of an existing SageMaker AI model object in your account that you want to deploy with the inference component.
- StartupParameters InferenceComponentStartupParameters
- Settings that take effect while the model container starts up.
- baseInferenceComponentName String
- The name of an existing inference component that is to contain the inference component that you're creating with your request. Specify this parameter only if your request is meant to create an adapter inference component. An adapter inference component contains the path to an adapter model. The purpose of the adapter model is to tailor the inference output of a base foundation model, which is hosted by the base inference component. The adapter inference component uses the compute resources that you assigned to the base inference component. When you create an adapter inference component, use the Container parameter to specify the location of the adapter artifacts. In the parameter value, use the ArtifactUrl parameter of the InferenceComponentContainerSpecification data type. Before you can create an adapter inference component, you must have an existing inference component that contains the foundation model that you want to adapt.
- computeResourceRequirements InferenceComponentComputeResourceRequirements
- The compute resources allocated to run the model, plus any adapter models, that you assign to the inference component. Omit this parameter if your request is meant to create an adapter inference component. An adapter inference component is loaded by a base inference component, and it uses the compute resources of the base inference component.
- container InferenceComponentContainerSpecification
- Defines a container that provides the runtime environment for a model that you deploy with an inference component.
- modelName String
- The name of an existing SageMaker AI model object in your account that you want to deploy with the inference component.
- startupParameters InferenceComponentStartupParameters
- Settings that take effect while the model container starts up.
- baseInferenceComponentName string
- The name of an existing inference component that is to contain the inference component that you're creating with your request. Specify this parameter only if your request is meant to create an adapter inference component. An adapter inference component contains the path to an adapter model. The purpose of the adapter model is to tailor the inference output of a base foundation model, which is hosted by the base inference component. The adapter inference component uses the compute resources that you assigned to the base inference component. When you create an adapter inference component, use the Container parameter to specify the location of the adapter artifacts. In the parameter value, use the ArtifactUrl parameter of the InferenceComponentContainerSpecification data type. Before you can create an adapter inference component, you must have an existing inference component that contains the foundation model that you want to adapt.
- computeResourceRequirements InferenceComponentComputeResourceRequirements
- The compute resources allocated to run the model, plus any adapter models, that you assign to the inference component. Omit this parameter if your request is meant to create an adapter inference component. An adapter inference component is loaded by a base inference component, and it uses the compute resources of the base inference component.
- container InferenceComponentContainerSpecification
- Defines a container that provides the runtime environment for a model that you deploy with an inference component.
- modelName string
- The name of an existing SageMaker AI model object in your account that you want to deploy with the inference component.
- startupParameters InferenceComponentStartupParameters
- Settings that take effect while the model container starts up.
- base_inference_component_name str
- The name of an existing inference component that is to contain the inference component that you're creating with your request. Specify this parameter only if your request is meant to create an adapter inference component. An adapter inference component contains the path to an adapter model. The purpose of the adapter model is to tailor the inference output of a base foundation model, which is hosted by the base inference component. The adapter inference component uses the compute resources that you assigned to the base inference component. When you create an adapter inference component, use the Container parameter to specify the location of the adapter artifacts. In the parameter value, use the ArtifactUrl parameter of the InferenceComponentContainerSpecification data type. Before you can create an adapter inference component, you must have an existing inference component that contains the foundation model that you want to adapt.
- compute_resource_requirements InferenceComponentComputeResourceRequirements
- The compute resources allocated to run the model, plus any adapter models, that you assign to the inference component. Omit this parameter if your request is meant to create an adapter inference component. An adapter inference component is loaded by a base inference component, and it uses the compute resources of the base inference component.
- container InferenceComponentContainerSpecification
- Defines a container that provides the runtime environment for a model that you deploy with an inference component.
- model_name str
- The name of an existing SageMaker AI model object in your account that you want to deploy with the inference component.
- startup_parameters InferenceComponentStartupParameters
- Settings that take effect while the model container starts up.
- baseInferenceComponentName String
- The name of an existing inference component that is to contain the inference component that you're creating with your request. Specify this parameter only if your request is meant to create an adapter inference component. An adapter inference component contains the path to an adapter model. The purpose of the adapter model is to tailor the inference output of a base foundation model, which is hosted by the base inference component. The adapter inference component uses the compute resources that you assigned to the base inference component. When you create an adapter inference component, use the Container parameter to specify the location of the adapter artifacts. In the parameter value, use the ArtifactUrl parameter of the InferenceComponentContainerSpecification data type. Before you can create an adapter inference component, you must have an existing inference component that contains the foundation model that you want to adapt.
- computeResourceRequirements Property Map
- The compute resources allocated to run the model, plus any adapter models, that you assign to the inference component. Omit this parameter if your request is meant to create an adapter inference component. An adapter inference component is loaded by a base inference component, and it uses the compute resources of the base inference component.
- container Property Map
- Defines a container that provides the runtime environment for a model that you deploy with an inference component.
- modelName String
- The name of an existing SageMaker AI model object in your account that you want to deploy with the inference component.
- startupParameters Property Map
- Settings that take effect while the model container starts up.
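Similarly, a hedged TypeScript sketch of drilling into the returned specification (placeholder ARN; nested fields are optional on the result, so they are read through apply):

import * as aws_native from "@pulumi/aws-native";

const ic = aws_native.sagemaker.getInferenceComponentOutput({
    inferenceComponentArn: "arn:aws:sagemaker:us-east-1:123456789012:inference-component/example",
});

// Read nested specification fields such as the container image and compute requirements.
export const containerImage = ic.specification.apply(s => s?.container?.image);
export const minMemoryMb = ic.specification.apply(s => s?.computeResourceRequirements?.minMemoryRequiredInMb);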
InferenceComponentStartupParameters   
- ContainerStartupHealthCheckTimeoutInSeconds int
- The timeout value, in seconds, for your inference container to pass health check by Amazon S3 Hosting. For more information about health check, see How Your Container Should Respond to Health Check (Ping) Requests.
- ModelDataDownloadTimeoutInSeconds int
- The timeout value, in seconds, to download and extract the model that you want to host from Amazon S3 to the individual inference instance associated with this inference component.
- ContainerStartupHealthCheckTimeoutInSeconds int
- The timeout value, in seconds, for your inference container to pass health check by Amazon S3 Hosting. For more information about health check, see How Your Container Should Respond to Health Check (Ping) Requests.
- ModelDataDownloadTimeoutInSeconds int
- The timeout value, in seconds, to download and extract the model that you want to host from Amazon S3 to the individual inference instance associated with this inference component.
- containerStartupHealthCheckTimeoutInSeconds Integer
- The timeout value, in seconds, for your inference container to pass health check by Amazon S3 Hosting. For more information about health check, see How Your Container Should Respond to Health Check (Ping) Requests.
- modelDataDownloadTimeoutInSeconds Integer
- The timeout value, in seconds, to download and extract the model that you want to host from Amazon S3 to the individual inference instance associated with this inference component.
- containerStartupHealthCheckTimeoutInSeconds number
- The timeout value, in seconds, for your inference container to pass health check by Amazon S3 Hosting. For more information about health check, see How Your Container Should Respond to Health Check (Ping) Requests.
- modelDataDownloadTimeoutInSeconds number
- The timeout value, in seconds, to download and extract the model that you want to host from Amazon S3 to the individual inference instance associated with this inference component.
- container_startup_health_check_timeout_in_seconds int
- The timeout value, in seconds, for your inference container to pass health check by Amazon S3 Hosting. For more information about health check, see How Your Container Should Respond to Health Check (Ping) Requests.
- model_data_download_timeout_in_seconds int
- The timeout value, in seconds, to download and extract the model that you want to host from Amazon S3 to the individual inference instance associated with this inference component.
- containerStartupHealthCheckTimeoutInSeconds Number
- The timeout value, in seconds, for your inference container to pass health check by Amazon S3 Hosting. For more information about health check, see How Your Container Should Respond to Health Check (Ping) Requests.
- modelDataDownloadTimeoutInSeconds Number
- The timeout value, in seconds, to download and extract the model that you want to host from Amazon S3 to the individual inference instance associated with this inference component.
InferenceComponentStatus  
Tag
Package Details
- Repository
- AWS Native pulumi/pulumi-aws-native
- License
- Apache-2.0