airbyte 0.8.0-beta2 published on Thursday, Mar 27, 2025 by airbytehq

airbyte.DestinationIceberg


    DestinationIceberg Resource

    Example Usage

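Pending official examples in the other languages, here is a minimal Python sketch that mirrors the YAML example below. It assumes the provider's Python package is importable as `pulumi_airbyte` (as in the constructor reference further down); the warehouse name is the same placeholder used in the YAML example.

```python
# Hedged sketch, not an official example: mirrors the YAML example's values.
import pulumi_airbyte as airbyte

my_destination_iceberg = airbyte.DestinationIceberg(
    "myDestinationIceberg",
    configuration={
        "catalog_config": {
            "glue_catalog": {
                "catalog_type": "Glue",
                "database": "public",
            },
        },
        "format_config": {
            "auto_compact": True,
            "compact_target_file_size_in_mb": 9,
            "flush_batch_size": 8,
            "format": "Parquet",
        },
        "storage_config": {
            "server_managed": {
                "managed_warehouse_name": "...my_managed_warehouse_name...",
                "storage_type": "MANAGED",
            },
        },
    },
    definition_id="263446c4-43e9-45cc-ac60-4398823f5d7f",
    workspace_id="a348c0e2-12a2-4320-9af6-f59e32031847",
)
```

Like all Pulumi programs, this declares desired state and only takes effect under `pulumi up`.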
    
    package generated_program;
    
    import com.pulumi.Context;
    import com.pulumi.Pulumi;
    import com.pulumi.core.Output;
    import com.pulumi.airbyte.DestinationIceberg;
    import com.pulumi.airbyte.DestinationIcebergArgs;
    import com.pulumi.airbyte.inputs.DestinationIcebergConfigurationArgs;
    import com.pulumi.airbyte.inputs.DestinationIcebergConfigurationCatalogConfigArgs;
    import com.pulumi.airbyte.inputs.DestinationIcebergConfigurationCatalogConfigGlueCatalogArgs;
    import com.pulumi.airbyte.inputs.DestinationIcebergConfigurationCatalogConfigHadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfigArgs;
    import com.pulumi.airbyte.inputs.DestinationIcebergConfigurationFormatConfigArgs;
    import com.pulumi.airbyte.inputs.DestinationIcebergConfigurationStorageConfigArgs;
    import com.pulumi.airbyte.inputs.DestinationIcebergConfigurationStorageConfigServerManagedArgs;
    import java.util.List;
    import java.util.ArrayList;
    import java.util.Map;
    import java.io.File;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    
    public class App {
        public static void main(String[] args) {
            Pulumi.run(App::stack);
        }
    
        public static void stack(Context ctx) {
            var myDestinationIceberg = new DestinationIceberg("myDestinationIceberg", DestinationIcebergArgs.builder()
                .configuration(DestinationIcebergConfigurationArgs.builder()
                    .catalogConfig(DestinationIcebergConfigurationCatalogConfigArgs.builder()
                        .glueCatalog(DestinationIcebergConfigurationCatalogConfigGlueCatalogArgs.builder()
                            .catalogType("Glue")
                            .database("public")
                            .build())
                        .hadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfig(DestinationIcebergConfigurationCatalogConfigHadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfigArgs.builder()
                            .catalogType("Hadoop")
                            .database("default")
                            .build())
                        .build())
                    .formatConfig(DestinationIcebergConfigurationFormatConfigArgs.builder()
                        .autoCompact(true)
                        .compactTargetFileSizeInMb(9)
                        .flushBatchSize(8)
                        .format("Parquet")
                        .build())
                    .storageConfig(DestinationIcebergConfigurationStorageConfigArgs.builder()
                        .serverManaged(DestinationIcebergConfigurationStorageConfigServerManagedArgs.builder()
                            .managedWarehouseName("...my_managed_warehouse_name...")
                            .storageType("MANAGED")
                            .build())
                        .build())
                    .build())
                .definitionId("263446c4-43e9-45cc-ac60-4398823f5d7f")
                .workspaceId("a348c0e2-12a2-4320-9af6-f59e32031847")
                .build());
    
        }
    }
    
    resources:
      myDestinationIceberg:
        type: airbyte:DestinationIceberg
        properties:
          configuration:
            catalog_config:
              glueCatalog:
                catalogType: Glue
                database: public
              hadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfig:
                catalogType: Hadoop
                database: default
            format_config:
              autoCompact: true
              compactTargetFileSizeInMb: 9
              flushBatchSize: 8
              format: Parquet
            storage_config:
              serverManaged:
                managedWarehouseName: '...my_managed_warehouse_name...'
                storageType: MANAGED
          definitionId: 263446c4-43e9-45cc-ac60-4398823f5d7f
          workspaceId: a348c0e2-12a2-4320-9af6-f59e32031847
    

    Create DestinationIceberg Resource

    Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.

    Constructor syntax

    new DestinationIceberg(name: string, args: DestinationIcebergArgs, opts?: CustomResourceOptions);
    @overload
    def DestinationIceberg(resource_name: str,
                           args: DestinationIcebergArgs,
                           opts: Optional[ResourceOptions] = None)
    
    @overload
    def DestinationIceberg(resource_name: str,
                           opts: Optional[ResourceOptions] = None,
                           configuration: Optional[DestinationIcebergConfigurationArgs] = None,
                           workspace_id: Optional[str] = None,
                           definition_id: Optional[str] = None,
                           name: Optional[str] = None)
    func NewDestinationIceberg(ctx *Context, name string, args DestinationIcebergArgs, opts ...ResourceOption) (*DestinationIceberg, error)
    public DestinationIceberg(string name, DestinationIcebergArgs args, CustomResourceOptions? opts = null)
    public DestinationIceberg(String name, DestinationIcebergArgs args)
    public DestinationIceberg(String name, DestinationIcebergArgs args, CustomResourceOptions options)
    
    type: airbyte:DestinationIceberg
    properties: # The arguments to resource properties.
    options: # Bag of options to control resource's behavior.
    
    

    Parameters

    name string
    The unique name of the resource.
    args DestinationIcebergArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    resource_name str
    The unique name of the resource.
    args DestinationIcebergArgs
    The arguments to resource properties.
    opts ResourceOptions
    Bag of options to control resource's behavior.
    ctx Context
    Context object for the current deployment.
    name string
    The unique name of the resource.
    args DestinationIcebergArgs
    The arguments to resource properties.
    opts ResourceOption
    Bag of options to control resource's behavior.
    name string
    The unique name of the resource.
    args DestinationIcebergArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    name String
    The unique name of the resource.
    args DestinationIcebergArgs
    The arguments to resource properties.
    options CustomResourceOptions
    Bag of options to control resource's behavior.

    Constructor example

    The following reference example uses placeholder values for all input properties.

    var destinationIcebergResource = new Airbyte.DestinationIceberg("destinationIcebergResource", new()
    {
        Configuration = new Airbyte.Inputs.DestinationIcebergConfigurationArgs
        {
            CatalogConfig = new Airbyte.Inputs.DestinationIcebergConfigurationCatalogConfigArgs
            {
                GlueCatalog = new Airbyte.Inputs.DestinationIcebergConfigurationCatalogConfigGlueCatalogArgs
                {
                    CatalogType = "string",
                    Database = "string",
                },
                HadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfig = new Airbyte.Inputs.DestinationIcebergConfigurationCatalogConfigHadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfigArgs
                {
                    CatalogType = "string",
                    Database = "string",
                },
                HiveCatalogUseApacheHiveMetaStore = new Airbyte.Inputs.DestinationIcebergConfigurationCatalogConfigHiveCatalogUseApacheHiveMetaStoreArgs
                {
                    HiveThriftUri = "string",
                    CatalogType = "string",
                    Database = "string",
                },
                JdbcCatalogUseRelationalDatabase = new Airbyte.Inputs.DestinationIcebergConfigurationCatalogConfigJdbcCatalogUseRelationalDatabaseArgs
                {
                    CatalogSchema = "string",
                    CatalogType = "string",
                    Database = "string",
                    JdbcUrl = "string",
                    Password = "string",
                    Ssl = false,
                    Username = "string",
                },
                RestCatalog = new Airbyte.Inputs.DestinationIcebergConfigurationCatalogConfigRestCatalogArgs
                {
                    RestUri = "string",
                    CatalogType = "string",
                    RestCredential = "string",
                    RestToken = "string",
                },
            },
            FormatConfig = new Airbyte.Inputs.DestinationIcebergConfigurationFormatConfigArgs
            {
                AutoCompact = false,
                CompactTargetFileSizeInMb = 0,
                FlushBatchSize = 0,
                Format = "string",
            },
            StorageConfig = new Airbyte.Inputs.DestinationIcebergConfigurationStorageConfigArgs
            {
                S3 = new Airbyte.Inputs.DestinationIcebergConfigurationStorageConfigS3Args
                {
                    AccessKeyId = "string",
                    S3WarehouseUri = "string",
                    SecretAccessKey = "string",
                    S3BucketRegion = "string",
                    S3Endpoint = "string",
                    S3PathStyleAccess = false,
                    StorageType = "string",
                },
                ServerManaged = new Airbyte.Inputs.DestinationIcebergConfigurationStorageConfigServerManagedArgs
                {
                    ManagedWarehouseName = "string",
                    StorageType = "string",
                },
            },
        },
        WorkspaceId = "string",
        DefinitionId = "string",
        Name = "string",
    });
    
    example, err := airbyte.NewDestinationIceberg(ctx, "destinationIcebergResource", &airbyte.DestinationIcebergArgs{
        Configuration: &airbyte.DestinationIcebergConfigurationArgs{
            CatalogConfig: &airbyte.DestinationIcebergConfigurationCatalogConfigArgs{
                GlueCatalog: &airbyte.DestinationIcebergConfigurationCatalogConfigGlueCatalogArgs{
                    CatalogType: pulumi.String("string"),
                    Database:    pulumi.String("string"),
                },
                HadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfig: &airbyte.DestinationIcebergConfigurationCatalogConfigHadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfigArgs{
                    CatalogType: pulumi.String("string"),
                    Database:    pulumi.String("string"),
                },
                HiveCatalogUseApacheHiveMetaStore: &airbyte.DestinationIcebergConfigurationCatalogConfigHiveCatalogUseApacheHiveMetaStoreArgs{
                    HiveThriftUri: pulumi.String("string"),
                    CatalogType:   pulumi.String("string"),
                    Database:      pulumi.String("string"),
                },
                JdbcCatalogUseRelationalDatabase: &airbyte.DestinationIcebergConfigurationCatalogConfigJdbcCatalogUseRelationalDatabaseArgs{
                    CatalogSchema: pulumi.String("string"),
                    CatalogType:   pulumi.String("string"),
                    Database:      pulumi.String("string"),
                    JdbcUrl:       pulumi.String("string"),
                    Password:      pulumi.String("string"),
                    Ssl:           pulumi.Bool(false),
                    Username:      pulumi.String("string"),
                },
                RestCatalog: &airbyte.DestinationIcebergConfigurationCatalogConfigRestCatalogArgs{
                    RestUri:        pulumi.String("string"),
                    CatalogType:    pulumi.String("string"),
                    RestCredential: pulumi.String("string"),
                    RestToken:      pulumi.String("string"),
                },
            },
            FormatConfig: &airbyte.DestinationIcebergConfigurationFormatConfigArgs{
                AutoCompact:               pulumi.Bool(false),
                CompactTargetFileSizeInMb: pulumi.Float64(0),
                FlushBatchSize:            pulumi.Float64(0),
                Format:                    pulumi.String("string"),
            },
            StorageConfig: &airbyte.DestinationIcebergConfigurationStorageConfigArgs{
                S3: &airbyte.DestinationIcebergConfigurationStorageConfigS3Args{
                    AccessKeyId:       pulumi.String("string"),
                    S3WarehouseUri:    pulumi.String("string"),
                    SecretAccessKey:   pulumi.String("string"),
                    S3BucketRegion:    pulumi.String("string"),
                    S3Endpoint:        pulumi.String("string"),
                    S3PathStyleAccess: pulumi.Bool(false),
                    StorageType:       pulumi.String("string"),
                },
                ServerManaged: &airbyte.DestinationIcebergConfigurationStorageConfigServerManagedArgs{
                    ManagedWarehouseName: pulumi.String("string"),
                    StorageType:          pulumi.String("string"),
                },
            },
        },
        WorkspaceId:  pulumi.String("string"),
        DefinitionId: pulumi.String("string"),
        Name:         pulumi.String("string"),
    })
    
    var destinationIcebergResource = new DestinationIceberg("destinationIcebergResource", DestinationIcebergArgs.builder()
        .configuration(DestinationIcebergConfigurationArgs.builder()
            .catalogConfig(DestinationIcebergConfigurationCatalogConfigArgs.builder()
                .glueCatalog(DestinationIcebergConfigurationCatalogConfigGlueCatalogArgs.builder()
                    .catalogType("string")
                    .database("string")
                    .build())
                .hadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfig(DestinationIcebergConfigurationCatalogConfigHadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfigArgs.builder()
                    .catalogType("string")
                    .database("string")
                    .build())
                .hiveCatalogUseApacheHiveMetaStore(DestinationIcebergConfigurationCatalogConfigHiveCatalogUseApacheHiveMetaStoreArgs.builder()
                    .hiveThriftUri("string")
                    .catalogType("string")
                    .database("string")
                    .build())
                .jdbcCatalogUseRelationalDatabase(DestinationIcebergConfigurationCatalogConfigJdbcCatalogUseRelationalDatabaseArgs.builder()
                    .catalogSchema("string")
                    .catalogType("string")
                    .database("string")
                    .jdbcUrl("string")
                    .password("string")
                    .ssl(false)
                    .username("string")
                    .build())
                .restCatalog(DestinationIcebergConfigurationCatalogConfigRestCatalogArgs.builder()
                    .restUri("string")
                    .catalogType("string")
                    .restCredential("string")
                    .restToken("string")
                    .build())
                .build())
            .formatConfig(DestinationIcebergConfigurationFormatConfigArgs.builder()
                .autoCompact(false)
                .compactTargetFileSizeInMb(0)
                .flushBatchSize(0)
                .format("string")
                .build())
            .storageConfig(DestinationIcebergConfigurationStorageConfigArgs.builder()
                .s3(DestinationIcebergConfigurationStorageConfigS3Args.builder()
                    .accessKeyId("string")
                    .s3WarehouseUri("string")
                    .secretAccessKey("string")
                    .s3BucketRegion("string")
                    .s3Endpoint("string")
                    .s3PathStyleAccess(false)
                    .storageType("string")
                    .build())
                .serverManaged(DestinationIcebergConfigurationStorageConfigServerManagedArgs.builder()
                    .managedWarehouseName("string")
                    .storageType("string")
                    .build())
                .build())
            .build())
        .workspaceId("string")
        .definitionId("string")
        .name("string")
        .build());
    
    destination_iceberg_resource = airbyte.DestinationIceberg("destinationIcebergResource",
        configuration={
            "catalog_config": {
                "glue_catalog": {
                    "catalog_type": "string",
                    "database": "string",
                },
                "hadoop_catalog_use_hierarchical_file_systems_as_same_as_storage_config": {
                    "catalog_type": "string",
                    "database": "string",
                },
                "hive_catalog_use_apache_hive_meta_store": {
                    "hive_thrift_uri": "string",
                    "catalog_type": "string",
                    "database": "string",
                },
                "jdbc_catalog_use_relational_database": {
                    "catalog_schema": "string",
                    "catalog_type": "string",
                    "database": "string",
                    "jdbc_url": "string",
                    "password": "string",
                    "ssl": False,
                    "username": "string",
                },
                "rest_catalog": {
                    "rest_uri": "string",
                    "catalog_type": "string",
                    "rest_credential": "string",
                    "rest_token": "string",
                },
            },
            "format_config": {
                "auto_compact": False,
                "compact_target_file_size_in_mb": 0,
                "flush_batch_size": 0,
                "format": "string",
            },
            "storage_config": {
                "s3": {
                    "access_key_id": "string",
                    "s3_warehouse_uri": "string",
                    "secret_access_key": "string",
                    "s3_bucket_region": "string",
                    "s3_endpoint": "string",
                    "s3_path_style_access": False,
                    "storage_type": "string",
                },
                "server_managed": {
                    "managed_warehouse_name": "string",
                    "storage_type": "string",
                },
            },
        },
        workspace_id="string",
        definition_id="string",
        name="string")
    
    const destinationIcebergResource = new airbyte.DestinationIceberg("destinationIcebergResource", {
        configuration: {
            catalogConfig: {
                glueCatalog: {
                    catalogType: "string",
                    database: "string",
                },
                hadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfig: {
                    catalogType: "string",
                    database: "string",
                },
                hiveCatalogUseApacheHiveMetaStore: {
                    hiveThriftUri: "string",
                    catalogType: "string",
                    database: "string",
                },
                jdbcCatalogUseRelationalDatabase: {
                    catalogSchema: "string",
                    catalogType: "string",
                    database: "string",
                    jdbcUrl: "string",
                    password: "string",
                    ssl: false,
                    username: "string",
                },
                restCatalog: {
                    restUri: "string",
                    catalogType: "string",
                    restCredential: "string",
                    restToken: "string",
                },
            },
            formatConfig: {
                autoCompact: false,
                compactTargetFileSizeInMb: 0,
                flushBatchSize: 0,
                format: "string",
            },
            storageConfig: {
                s3: {
                    accessKeyId: "string",
                    s3WarehouseUri: "string",
                    secretAccessKey: "string",
                    s3BucketRegion: "string",
                    s3Endpoint: "string",
                    s3PathStyleAccess: false,
                    storageType: "string",
                },
                serverManaged: {
                    managedWarehouseName: "string",
                    storageType: "string",
                },
            },
        },
        workspaceId: "string",
        definitionId: "string",
        name: "string",
    });
    
    type: airbyte:DestinationIceberg
    properties:
        configuration:
            catalogConfig:
                glueCatalog:
                    catalogType: string
                    database: string
                hadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfig:
                    catalogType: string
                    database: string
                hiveCatalogUseApacheHiveMetaStore:
                    catalogType: string
                    database: string
                    hiveThriftUri: string
                jdbcCatalogUseRelationalDatabase:
                    catalogSchema: string
                    catalogType: string
                    database: string
                    jdbcUrl: string
                    password: string
                    ssl: false
                    username: string
                restCatalog:
                    catalogType: string
                    restCredential: string
                    restToken: string
                    restUri: string
            formatConfig:
                autoCompact: false
                compactTargetFileSizeInMb: 0
                flushBatchSize: 0
                format: string
            storageConfig:
                s3:
                    accessKeyId: string
                    s3BucketRegion: string
                    s3Endpoint: string
                    s3PathStyleAccess: false
                    s3WarehouseUri: string
                    secretAccessKey: string
                    storageType: string
                serverManaged:
                    managedWarehouseName: string
                    storageType: string
        definitionId: string
        name: string
        workspaceId: string
    

    DestinationIceberg Resource Properties

    To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.

    Inputs

    In Python, inputs that are objects can be passed either as argument classes or as dictionary literals.
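For instance, the two resource declarations below pass the same nested `format_config` input; this is an illustrative sketch only (minimal inputs, placeholder workspace ID), assuming the provider's Python package is importable as `pulumi_airbyte`:

```python
# Illustrative sketch: both forms carry the same configuration.
import pulumi_airbyte as airbyte

# 1) Typed argument classes (checked by IDEs and type checkers)
dest_a = airbyte.DestinationIceberg(
    "dest-a",
    workspace_id="a348c0e2-12a2-4320-9af6-f59e32031847",
    configuration=airbyte.DestinationIcebergConfigurationArgs(
        format_config=airbyte.DestinationIcebergConfigurationFormatConfigArgs(
            format="Parquet",
        ),
    ),
)

# 2) Dictionary literals (keys are the snake_case property names)
dest_b = airbyte.DestinationIceberg(
    "dest-b",
    workspace_id="a348c0e2-12a2-4320-9af6-f59e32031847",
    configuration={
        "format_config": {"format": "Parquet"},
    },
)
```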

    The DestinationIceberg resource accepts the following input properties:

    Configuration DestinationIcebergConfiguration
    WorkspaceId string
    DefinitionId string
    The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
    Name string
    Name of the destination e.g. dev-mysql-instance.
    Configuration DestinationIcebergConfigurationArgs
    WorkspaceId string
    DefinitionId string
    The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
    Name string
    Name of the destination e.g. dev-mysql-instance.
    configuration DestinationIcebergConfiguration
    workspaceId String
    definitionId String
    The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
    name String
    Name of the destination e.g. dev-mysql-instance.
    configuration DestinationIcebergConfiguration
    workspaceId string
    definitionId string
    The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
    name string
    Name of the destination e.g. dev-mysql-instance.
    configuration DestinationIcebergConfigurationArgs
    workspace_id str
    definition_id str
    The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
    name str
    Name of the destination e.g. dev-mysql-instance.
    configuration Property Map
    workspaceId String
    definitionId String
    The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
    name String
    Name of the destination e.g. dev-mysql-instance.

    Outputs

    All input properties are implicitly available as output properties. Additionally, the DestinationIceberg resource produces the following output properties:

    CreatedAt double
    DestinationId string
    DestinationType string
    Id string
    The provider-assigned unique ID for this managed resource.
    ResourceAllocation DestinationIcebergResourceAllocation
    Actor- or actor-definition-specific resource requirements. If default is set, these are the requirements that should be set for ALL jobs run for this actor definition; they are overridden by the job-type-specific configurations. If not set, the platform will use defaults. These values will be overridden by configuration at the connection level.
    CreatedAt float64
    DestinationId string
    DestinationType string
    Id string
    The provider-assigned unique ID for this managed resource.
    ResourceAllocation DestinationIcebergResourceAllocation
    Actor- or actor-definition-specific resource requirements. If default is set, these are the requirements that should be set for ALL jobs run for this actor definition; they are overridden by the job-type-specific configurations. If not set, the platform will use defaults. These values will be overridden by configuration at the connection level.
    createdAt Double
    destinationId String
    destinationType String
    id String
    The provider-assigned unique ID for this managed resource.
    resourceAllocation DestinationIcebergResourceAllocation
    Actor- or actor-definition-specific resource requirements. If default is set, these are the requirements that should be set for ALL jobs run for this actor definition; they are overridden by the job-type-specific configurations. If not set, the platform will use defaults. These values will be overridden by configuration at the connection level.
    createdAt number
    destinationId string
    destinationType string
    id string
    The provider-assigned unique ID for this managed resource.
    resourceAllocation DestinationIcebergResourceAllocation
    Actor- or actor-definition-specific resource requirements. If default is set, these are the requirements that should be set for ALL jobs run for this actor definition; they are overridden by the job-type-specific configurations. If not set, the platform will use defaults. These values will be overridden by configuration at the connection level.
    created_at float
    destination_id str
    destination_type str
    id str
    The provider-assigned unique ID for this managed resource.
    resource_allocation DestinationIcebergResourceAllocation
    Actor- or actor-definition-specific resource requirements. If default is set, these are the requirements that should be set for ALL jobs run for this actor definition; they are overridden by the job-type-specific configurations. If not set, the platform will use defaults. These values will be overridden by configuration at the connection level.
    createdAt Number
    destinationId String
    destinationType String
    id String
    The provider-assigned unique ID for this managed resource.
    resourceAllocation Property Map
    Actor- or actor-definition-specific resource requirements. If default is set, these are the requirements that should be set for ALL jobs run for this actor definition; they are overridden by the job-type-specific configurations. If not set, the platform will use defaults. These values will be overridden by configuration at the connection level.

    Look up Existing DestinationIceberg Resource

    Get an existing DestinationIceberg resource’s state with the given name, ID, and optional extra properties used to qualify the lookup.

    public static get(name: string, id: Input<ID>, state?: DestinationIcebergState, opts?: CustomResourceOptions): DestinationIceberg
    @staticmethod
    def get(resource_name: str,
            id: str,
            opts: Optional[ResourceOptions] = None,
            configuration: Optional[DestinationIcebergConfigurationArgs] = None,
            created_at: Optional[float] = None,
            definition_id: Optional[str] = None,
            destination_id: Optional[str] = None,
            destination_type: Optional[str] = None,
            name: Optional[str] = None,
            resource_allocation: Optional[DestinationIcebergResourceAllocationArgs] = None,
            workspace_id: Optional[str] = None) -> DestinationIceberg
    func GetDestinationIceberg(ctx *Context, name string, id IDInput, state *DestinationIcebergState, opts ...ResourceOption) (*DestinationIceberg, error)
    public static DestinationIceberg Get(string name, Input<string> id, DestinationIcebergState? state, CustomResourceOptions? opts = null)
    public static DestinationIceberg get(String name, Output<String> id, DestinationIcebergState state, CustomResourceOptions options)
    resources:
      _:
        type: airbyte:DestinationIceberg
        get:
          id: ${id}
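As a sketch of the lookup in Python (the ID below is a placeholder for a real destination ID, and `pulumi_airbyte` is assumed to be the provider's Python package):

```python
import pulumi
import pulumi_airbyte as airbyte

# Adopt existing state by provider ID; no other inputs are required
# for a lookup, though extra state arguments may qualify it.
existing = airbyte.DestinationIceberg.get(
    "existingDestinationIceberg",
    id="<destination-id>",
)

# Outputs of the looked-up resource are available like any other resource's.
pulumi.export("destinationType", existing.destination_type)
```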
    name
    The unique name of the resulting resource.
    id
    The unique provider ID of the resource to lookup.
    state
    Any extra arguments used during the lookup.
    opts
    A bag of options that control this resource's behavior.
    resource_name
    The unique name of the resulting resource.
    id
    The unique provider ID of the resource to lookup.
    name
    The unique name of the resulting resource.
    id
    The unique provider ID of the resource to lookup.
    state
    Any extra arguments used during the lookup.
    opts
    A bag of options that control this resource's behavior.
    name
    The unique name of the resulting resource.
    id
    The unique provider ID of the resource to lookup.
    state
    Any extra arguments used during the lookup.
    opts
    A bag of options that control this resource's behavior.
    name
    The unique name of the resulting resource.
    id
    The unique provider ID of the resource to lookup.
    state
    Any extra arguments used during the lookup.
    opts
    A bag of options that control this resource's behavior.
    The following state arguments are supported:
    Configuration DestinationIcebergConfiguration
    CreatedAt double
    DefinitionId string
    The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
    DestinationId string
    DestinationType string
    Name string
    Name of the destination e.g. dev-mysql-instance.
    ResourceAllocation DestinationIcebergResourceAllocation
    Actor- or actor-definition-specific resource requirements. If default is set, these are the requirements that should be set for ALL jobs run for this actor definition; they are overridden by the job-type-specific configurations. If not set, the platform will use defaults. These values will be overridden by configuration at the connection level.
    WorkspaceId string
    Configuration DestinationIcebergConfigurationArgs
    CreatedAt float64
    DefinitionId string
    The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
    DestinationId string
    DestinationType string
    Name string
    Name of the destination e.g. dev-mysql-instance.
    ResourceAllocation DestinationIcebergResourceAllocationArgs
    Actor- or actor-definition-specific resource requirements. If default is set, these are the requirements that should be set for ALL jobs run for this actor definition; they are overridden by the job-type-specific configurations. If not set, the platform will use defaults. These values will be overridden by configuration at the connection level.
    WorkspaceId string
    configuration DestinationIcebergConfiguration
    createdAt Double
    definitionId String
    The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
    destinationId String
    destinationType String
    name String
    Name of the destination e.g. dev-mysql-instance.
    resourceAllocation DestinationIcebergResourceAllocation
    Actor- or actor-definition-specific resource requirements. If default is set, these are the requirements that should be set for ALL jobs run for this actor definition; they are overridden by the job-type-specific configurations. If not set, the platform will use defaults. These values will be overridden by configuration at the connection level.
    workspaceId String
    configuration DestinationIcebergConfiguration
    createdAt number
    definitionId string
    The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
    destinationId string
    destinationType string
    name string
    Name of the destination e.g. dev-mysql-instance.
    resourceAllocation DestinationIcebergResourceAllocation
    Actor- or actor-definition-specific resource requirements. If default is set, these are the requirements that should be set for ALL jobs run for this actor definition; they are overridden by the job-type-specific configurations. If not set, the platform will use defaults. These values will be overridden by configuration at the connection level.
    workspaceId string
    configuration DestinationIcebergConfigurationArgs
    created_at float
    definition_id str
    The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
    destination_id str
    destination_type str
    name str
    Name of the destination e.g. dev-mysql-instance.
    resource_allocation DestinationIcebergResourceAllocationArgs
    Actor or actor definition specific resource requirements. If default is set, these are the requirements that should be set for ALL jobs run for this actor definition. It is overridden by the job-type-specific configurations. If not set, the platform will use defaults. These values will be overridden by configuration at the connection level.
    workspace_id str
    configuration Property Map
    createdAt Number
    definitionId String
    The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.
    destinationId String
    destinationType String
    name String
    Name of the destination e.g. dev-mysql-instance.
    resourceAllocation Property Map
    Actor or actor definition specific resource requirements. If default is set, these are the requirements that should be set for ALL jobs run for this actor definition. It is overridden by the job-type-specific configurations. If not set, the platform will use defaults. These values will be overridden by configuration at the connection level.
    workspaceId String

    Supporting Types

    DestinationIcebergConfiguration, DestinationIcebergConfigurationArgs

    catalogConfig Property Map
    Catalog config of Iceberg.
    formatConfig Property Map
    File format of Iceberg storage.
    storageConfig Property Map
    Storage config of Iceberg.
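    A complete configuration combines exactly one catalog option, one format option, and one storage option. As a hypothetical Pulumi YAML sketch (the bucket name, warehouse URI, credentials, and workspace ID below are placeholders, not values from this page):

    ```yaml
    resources:
      myDestinationIceberg:
        type: airbyte:DestinationIceberg
        properties:
          configuration:
            catalogConfig:
              hadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfig:
                catalogType: Hadoop
                database: default
            formatConfig:
              format: Parquet
              autoCompact: false
            storageConfig:
              s3:
                storageType: S3
                accessKeyId: ${AWS_ACCESS_KEY_ID}          # placeholder
                secretAccessKey: ${AWS_SECRET_ACCESS_KEY}  # placeholder
                s3WarehouseUri: s3a://my-bucket/warehouse  # placeholder
                s3BucketRegion: us-east-1
          workspaceId: ${workspaceId}                      # placeholder
    ```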

    DestinationIcebergConfigurationCatalogConfig, DestinationIcebergConfigurationCatalogConfigArgs

    GlueCatalog DestinationIcebergConfigurationCatalogConfigGlueCatalog
    The GlueCatalog connects to an AWS Glue Catalog.
    HadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfig DestinationIcebergConfigurationCatalogConfigHadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfig
    A Hadoop catalog doesn’t need to connect to a Hive MetaStore, but can only be used with HDFS or similar file systems that support atomic rename.
    HiveCatalogUseApacheHiveMetaStore DestinationIcebergConfigurationCatalogConfigHiveCatalogUseApacheHiveMetaStore
    JdbcCatalogUseRelationalDatabase DestinationIcebergConfigurationCatalogConfigJdbcCatalogUseRelationalDatabase
    Uses a table in a relational database to manage Iceberg tables through JDBC. Read more here. Supported: PostgreSQL.
    RestCatalog DestinationIcebergConfigurationCatalogConfigRestCatalog
    The RESTCatalog connects to a REST server at the specified URI.
    GlueCatalog DestinationIcebergConfigurationCatalogConfigGlueCatalog
    The GlueCatalog connects to an AWS Glue Catalog.
    HadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfig DestinationIcebergConfigurationCatalogConfigHadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfig
    A Hadoop catalog doesn’t need to connect to a Hive MetaStore, but can only be used with HDFS or similar file systems that support atomic rename.
    HiveCatalogUseApacheHiveMetaStore DestinationIcebergConfigurationCatalogConfigHiveCatalogUseApacheHiveMetaStore
    JdbcCatalogUseRelationalDatabase DestinationIcebergConfigurationCatalogConfigJdbcCatalogUseRelationalDatabase
    Uses a table in a relational database to manage Iceberg tables through JDBC. Read more here. Supported: PostgreSQL.
    RestCatalog DestinationIcebergConfigurationCatalogConfigRestCatalog
    The RESTCatalog connects to a REST server at the specified URI.
    glueCatalog DestinationIcebergConfigurationCatalogConfigGlueCatalog
    The GlueCatalog connects to an AWS Glue Catalog.
    hadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfig DestinationIcebergConfigurationCatalogConfigHadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfig
    A Hadoop catalog doesn’t need to connect to a Hive MetaStore, but can only be used with HDFS or similar file systems that support atomic rename.
    hiveCatalogUseApacheHiveMetaStore DestinationIcebergConfigurationCatalogConfigHiveCatalogUseApacheHiveMetaStore
    jdbcCatalogUseRelationalDatabase DestinationIcebergConfigurationCatalogConfigJdbcCatalogUseRelationalDatabase
    Uses a table in a relational database to manage Iceberg tables through JDBC. Read more here. Supported: PostgreSQL.
    restCatalog DestinationIcebergConfigurationCatalogConfigRestCatalog
    The RESTCatalog connects to a REST server at the specified URI.
    glueCatalog DestinationIcebergConfigurationCatalogConfigGlueCatalog
    The GlueCatalog connects to an AWS Glue Catalog.
    hadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfig DestinationIcebergConfigurationCatalogConfigHadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfig
    A Hadoop catalog doesn’t need to connect to a Hive MetaStore, but can only be used with HDFS or similar file systems that support atomic rename.
    hiveCatalogUseApacheHiveMetaStore DestinationIcebergConfigurationCatalogConfigHiveCatalogUseApacheHiveMetaStore
    jdbcCatalogUseRelationalDatabase DestinationIcebergConfigurationCatalogConfigJdbcCatalogUseRelationalDatabase
    Uses a table in a relational database to manage Iceberg tables through JDBC. Read more here. Supported: PostgreSQL.
    restCatalog DestinationIcebergConfigurationCatalogConfigRestCatalog
    The RESTCatalog connects to a REST server at the specified URI.
    glue_catalog DestinationIcebergConfigurationCatalogConfigGlueCatalog
    The GlueCatalog connects to an AWS Glue Catalog.
    hadoop_catalog_use_hierarchical_file_systems_as_same_as_storage_config DestinationIcebergConfigurationCatalogConfigHadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfig
    A Hadoop catalog doesn’t need to connect to a Hive MetaStore, but can only be used with HDFS or similar file systems that support atomic rename.
    hive_catalog_use_apache_hive_meta_store DestinationIcebergConfigurationCatalogConfigHiveCatalogUseApacheHiveMetaStore
    jdbc_catalog_use_relational_database DestinationIcebergConfigurationCatalogConfigJdbcCatalogUseRelationalDatabase
    Uses a table in a relational database to manage Iceberg tables through JDBC. Read more here. Supported: PostgreSQL.
    rest_catalog DestinationIcebergConfigurationCatalogConfigRestCatalog
    The RESTCatalog connects to a REST server at the specified URI.
    glueCatalog Property Map
    The GlueCatalog connects to an AWS Glue Catalog.
    hadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfig Property Map
    A Hadoop catalog doesn’t need to connect to a Hive MetaStore, but can only be used with HDFS or similar file systems that support atomic rename.
    hiveCatalogUseApacheHiveMetaStore Property Map
    jdbcCatalogUseRelationalDatabase Property Map
    Uses a table in a relational database to manage Iceberg tables through JDBC. Read more here. Supported: PostgreSQL.
    restCatalog Property Map
    The RESTCatalog connects to a REST server at the specified URI.
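    These catalog variants are mutually exclusive: set exactly one of them under catalogConfig. For example, a REST catalog might look like the following hypothetical sketch (the URI and token are placeholders):

    ```yaml
    catalogConfig:
      restCatalog:
        catalogType: Rest
        restUri: http://iceberg-rest.internal:8181   # placeholder
        restToken: ${REST_CATALOG_TOKEN}             # placeholder
    ```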

    DestinationIcebergConfigurationCatalogConfigGlueCatalog, DestinationIcebergConfigurationCatalogConfigGlueCatalogArgs

    CatalogType string
    Default: "Glue"; must be "Glue"
    Database string
    The default schema tables are written to if the source does not specify a namespace. The usual value for this field is "public". Default: "public"
    CatalogType string
    Default: "Glue"; must be "Glue"
    Database string
    The default schema tables are written to if the source does not specify a namespace. The usual value for this field is "public". Default: "public"
    catalogType String
    Default: "Glue"; must be "Glue"
    database String
    The default schema tables are written to if the source does not specify a namespace. The usual value for this field is "public". Default: "public"
    catalogType string
    Default: "Glue"; must be "Glue"
    database string
    The default schema tables are written to if the source does not specify a namespace. The usual value for this field is "public". Default: "public"
    catalog_type str
    Default: "Glue"; must be "Glue"
    database str
    The default schema tables are written to if the source does not specify a namespace. The usual value for this field is "public". Default: "public"
    catalogType String
    Default: "Glue"; must be "Glue"
    database String
    The default schema tables are written to if the source does not specify a namespace. The usual value for this field is "public". Default: "public"

    DestinationIcebergConfigurationCatalogConfigHadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfig, DestinationIcebergConfigurationCatalogConfigHadoopCatalogUseHierarchicalFileSystemsAsSameAsStorageConfigArgs

    CatalogType string
    Default: "Hadoop"; must be "Hadoop"
    Database string
    The default database tables are written to if the source does not specify a namespace. The usual value for this field is "default". Default: "default"
    CatalogType string
    Default: "Hadoop"; must be "Hadoop"
    Database string
    The default database tables are written to if the source does not specify a namespace. The usual value for this field is "default". Default: "default"
    catalogType String
    Default: "Hadoop"; must be "Hadoop"
    database String
    The default database tables are written to if the source does not specify a namespace. The usual value for this field is "default". Default: "default"
    catalogType string
    Default: "Hadoop"; must be "Hadoop"
    database string
    The default database tables are written to if the source does not specify a namespace. The usual value for this field is "default". Default: "default"
    catalog_type str
    Default: "Hadoop"; must be "Hadoop"
    database str
    The default database tables are written to if the source does not specify a namespace. The usual value for this field is "default". Default: "default"
    catalogType String
    Default: "Hadoop"; must be "Hadoop"
    database String
    The default database tables are written to if the source does not specify a namespace. The usual value for this field is "default". Default: "default"

    DestinationIcebergConfigurationCatalogConfigHiveCatalogUseApacheHiveMetaStore, DestinationIcebergConfigurationCatalogConfigHiveCatalogUseApacheHiveMetaStoreArgs

    HiveThriftUri string
    Hive Metastore Thrift server URI of the Iceberg catalog.
    CatalogType string
    Default: "Hive"; must be "Hive"
    Database string
    The default database tables are written to if the source does not specify a namespace. The usual value for this field is "default". Default: "default"
    HiveThriftUri string
    Hive Metastore Thrift server URI of the Iceberg catalog.
    CatalogType string
    Default: "Hive"; must be "Hive"
    Database string
    The default database tables are written to if the source does not specify a namespace. The usual value for this field is "default". Default: "default"
    hiveThriftUri String
    Hive Metastore Thrift server URI of the Iceberg catalog.
    catalogType String
    Default: "Hive"; must be "Hive"
    database String
    The default database tables are written to if the source does not specify a namespace. The usual value for this field is "default". Default: "default"
    hiveThriftUri string
    Hive Metastore Thrift server URI of the Iceberg catalog.
    catalogType string
    Default: "Hive"; must be "Hive"
    database string
    The default database tables are written to if the source does not specify a namespace. The usual value for this field is "default". Default: "default"
    hive_thrift_uri str
    Hive Metastore Thrift server URI of the Iceberg catalog.
    catalog_type str
    Default: "Hive"; must be "Hive"
    database str
    The default database tables are written to if the source does not specify a namespace. The usual value for this field is "default". Default: "default"
    hiveThriftUri String
    Hive Metastore Thrift server URI of the Iceberg catalog.
    catalogType String
    Default: "Hive"; must be "Hive"
    database String
    The default database tables are written to if the source does not specify a namespace. The usual value for this field is "default". Default: "default"

    DestinationIcebergConfigurationCatalogConfigJdbcCatalogUseRelationalDatabase, DestinationIcebergConfigurationCatalogConfigJdbcCatalogUseRelationalDatabaseArgs

    CatalogSchema string
    Iceberg catalog metadata tables are written to this catalog schema. The usual value for this field is "public". Default: "public"
    CatalogType string
    Default: "Jdbc"; must be "Jdbc"
    Database string
    The default schema tables are written to if the source does not specify a namespace. The usual value for this field is "public". Default: "public"
    JdbcUrl string
    Password string
    Password associated with the username.
    Ssl bool
    Encrypt data using SSL. When activating SSL, please select one of the connection modes. Default: false
    Username string
    Username to use to access the database.
    CatalogSchema string
    Iceberg catalog metadata tables are written to this catalog schema. The usual value for this field is "public". Default: "public"
    CatalogType string
    Default: "Jdbc"; must be "Jdbc"
    Database string
    The default schema tables are written to if the source does not specify a namespace. The usual value for this field is "public". Default: "public"
    JdbcUrl string
    Password string
    Password associated with the username.
    Ssl bool
    Encrypt data using SSL. When activating SSL, please select one of the connection modes. Default: false
    Username string
    Username to use to access the database.
    catalogSchema String
    Iceberg catalog metadata tables are written to this catalog schema. The usual value for this field is "public". Default: "public"
    catalogType String
    Default: "Jdbc"; must be "Jdbc"
    database String
    The default schema tables are written to if the source does not specify a namespace. The usual value for this field is "public". Default: "public"
    jdbcUrl String
    password String
    Password associated with the username.
    ssl Boolean
    Encrypt data using SSL. When activating SSL, please select one of the connection modes. Default: false
    username String
    Username to use to access the database.
    catalogSchema string
    Iceberg catalog metadata tables are written to this catalog schema. The usual value for this field is "public". Default: "public"
    catalogType string
    Default: "Jdbc"; must be "Jdbc"
    database string
    The default schema tables are written to if the source does not specify a namespace. The usual value for this field is "public". Default: "public"
    jdbcUrl string
    password string
    Password associated with the username.
    ssl boolean
    Encrypt data using SSL. When activating SSL, please select one of the connection modes. Default: false
    username string
    Username to use to access the database.
    catalog_schema str
    Iceberg catalog metadata tables are written to this catalog schema. The usual value for this field is "public". Default: "public"
    catalog_type str
    Default: "Jdbc"; must be "Jdbc"
    database str
    The default schema tables are written to if the source does not specify a namespace. The usual value for this field is "public". Default: "public"
    jdbc_url str
    password str
    Password associated with the username.
    ssl bool
    Encrypt data using SSL. When activating SSL, please select one of the connection modes. Default: false
    username str
    Username to use to access the database.
    catalogSchema String
    Iceberg catalog metadata tables are written to this catalog schema. The usual value for this field is "public". Default: "public"
    catalogType String
    Default: "Jdbc"; must be "Jdbc"
    database String
    The default schema tables are written to if the source does not specify a namespace. The usual value for this field is "public". Default: "public"
    jdbcUrl String
    password String
    Password associated with the username.
    ssl Boolean
    Encrypt data using SSL. When activating SSL, please select one of the connection modes. Default: false
    username String
    Username to use to access the database.
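    Putting the JDBC catalog fields together, a hypothetical sketch (the JDBC URL, username, and password are placeholders):

    ```yaml
    catalogConfig:
      jdbcCatalogUseRelationalDatabase:
        catalogType: Jdbc
        jdbcUrl: jdbc:postgresql://db.internal:5432/iceberg   # placeholder
        database: public
        catalogSchema: public
        username: iceberg_user      # placeholder
        password: ${DB_PASSWORD}    # placeholder
        ssl: true
    ```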

    DestinationIcebergConfigurationCatalogConfigRestCatalog, DestinationIcebergConfigurationCatalogConfigRestCatalogArgs

    RestUri string
    CatalogType string
    Default: "Rest"; must be "Rest"
    RestCredential string
    RestToken string
    RestUri string
    CatalogType string
    Default: "Rest"; must be "Rest"
    RestCredential string
    RestToken string
    restUri String
    catalogType String
    Default: "Rest"; must be "Rest"
    restCredential String
    restToken String
    restUri string
    catalogType string
    Default: "Rest"; must be "Rest"
    restCredential string
    restToken string
    rest_uri str
    catalog_type str
    Default: "Rest"; must be "Rest"
    rest_credential str
    rest_token str
    restUri String
    catalogType String
    Default: "Rest"; must be "Rest"
    restCredential String
    restToken String

    DestinationIcebergConfigurationFormatConfig, DestinationIcebergConfigurationFormatConfigArgs

    AutoCompact bool
    Automatically compact data files when the stream closes. Default: false
    CompactTargetFileSizeInMb double
    The target size of an Iceberg data file when performing a compaction action. Default: 100
    FlushBatchSize double
    Iceberg data file flush batch size. Incoming rows are first written to a cache; when the cache reaches this batch size, it is flushed to an actual Iceberg data file. Default: 10000
    Format string
    Default: "Parquet"; must be one of ["Parquet", "Avro"]
    AutoCompact bool
    Automatically compact data files when the stream closes. Default: false
    CompactTargetFileSizeInMb float64
    The target size of an Iceberg data file when performing a compaction action. Default: 100
    FlushBatchSize float64
    Iceberg data file flush batch size. Incoming rows are first written to a cache; when the cache reaches this batch size, it is flushed to an actual Iceberg data file. Default: 10000
    Format string
    Default: "Parquet"; must be one of ["Parquet", "Avro"]
    autoCompact Boolean
    Automatically compact data files when the stream closes. Default: false
    compactTargetFileSizeInMb Double
    The target size of an Iceberg data file when performing a compaction action. Default: 100
    flushBatchSize Double
    Iceberg data file flush batch size. Incoming rows are first written to a cache; when the cache reaches this batch size, it is flushed to an actual Iceberg data file. Default: 10000
    format String
    Default: "Parquet"; must be one of ["Parquet", "Avro"]
    autoCompact boolean
    Automatically compact data files when the stream closes. Default: false
    compactTargetFileSizeInMb number
    The target size of an Iceberg data file when performing a compaction action. Default: 100
    flushBatchSize number
    Iceberg data file flush batch size. Incoming rows are first written to a cache; when the cache reaches this batch size, it is flushed to an actual Iceberg data file. Default: 10000
    format string
    Default: "Parquet"; must be one of ["Parquet", "Avro"]
    auto_compact bool
    Automatically compact data files when the stream closes. Default: false
    compact_target_file_size_in_mb float
    The target size of an Iceberg data file when performing a compaction action. Default: 100
    flush_batch_size float
    Iceberg data file flush batch size. Incoming rows are first written to a cache; when the cache reaches this batch size, it is flushed to an actual Iceberg data file. Default: 10000
    format str
    Default: "Parquet"; must be one of ["Parquet", "Avro"]
    autoCompact Boolean
    Automatically compact data files when the stream closes. Default: false
    compactTargetFileSizeInMb Number
    The target size of an Iceberg data file when performing a compaction action. Default: 100
    flushBatchSize Number
    Iceberg data file flush batch size. Incoming rows are first written to a cache; when the cache reaches this batch size, it is flushed to an actual Iceberg data file. Default: 10000
    format String
    Default: "Parquet"; must be one of ["Parquet", "Avro"]
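    A format config using the documented defaults, with compaction enabled, might look like this sketch:

    ```yaml
    formatConfig:
      format: Parquet
      autoCompact: true
      compactTargetFileSizeInMb: 100
      flushBatchSize: 10000
    ```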

    DestinationIcebergConfigurationStorageConfig, DestinationIcebergConfigurationStorageConfigArgs

    s3 Property Map
    S3 object storage
    serverManaged Property Map
    Server-managed object storage

    DestinationIcebergConfigurationStorageConfigS3, DestinationIcebergConfigurationStorageConfigS3Args

    AccessKeyId string
    The access key ID to access the S3 bucket. Airbyte requires Read and Write permissions to the given bucket. Read more here.
    S3WarehouseUri string
    The warehouse URI for Iceberg.
    SecretAccessKey string
    The corresponding secret to the access key ID. Read more here.
    S3BucketRegion string
    The region of the S3 bucket. See here for all region codes. Default: ""; must be one of ["", "af-south-1", "ap-east-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-south-1", "ap-south-2", "ap-southeast-1", "ap-southeast-2", "ap-southeast-3", "ap-southeast-4", "ca-central-1", "ca-west-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-central-2", "eu-north-1", "eu-south-1", "eu-south-2", "eu-west-1", "eu-west-2", "eu-west-3", "il-central-1", "me-central-1", "me-south-1", "sa-east-1", "us-east-1", "us-east-2", "us-gov-east-1", "us-gov-west-1", "us-west-1", "us-west-2"]
    S3Endpoint string
    Your S3 endpoint URL. Read more here. Default: ""
    S3PathStyleAccess bool
    Use path-style access. Default: true
    StorageType string
    Default: "S3"; must be "S3"
    AccessKeyId string
    The access key ID to access the S3 bucket. Airbyte requires Read and Write permissions to the given bucket. Read more here.
    S3WarehouseUri string
    The warehouse URI for Iceberg.
    SecretAccessKey string
    The corresponding secret to the access key ID. Read more here.
    S3BucketRegion string
    The region of the S3 bucket. See here for all region codes. Default: ""; must be one of ["", "af-south-1", "ap-east-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-south-1", "ap-south-2", "ap-southeast-1", "ap-southeast-2", "ap-southeast-3", "ap-southeast-4", "ca-central-1", "ca-west-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-central-2", "eu-north-1", "eu-south-1", "eu-south-2", "eu-west-1", "eu-west-2", "eu-west-3", "il-central-1", "me-central-1", "me-south-1", "sa-east-1", "us-east-1", "us-east-2", "us-gov-east-1", "us-gov-west-1", "us-west-1", "us-west-2"]
    S3Endpoint string
    Your S3 endpoint URL. Read more here. Default: ""
    S3PathStyleAccess bool
    Use path-style access. Default: true
    StorageType string
    Default: "S3"; must be "S3"
    accessKeyId String
    The access key ID to access the S3 bucket. Airbyte requires Read and Write permissions to the given bucket. Read more here.
    s3WarehouseUri String
    The warehouse URI for Iceberg.
    secretAccessKey String
    The corresponding secret to the access key ID. Read more here.
    s3BucketRegion String
    The region of the S3 bucket. See here for all region codes. Default: ""; must be one of ["", "af-south-1", "ap-east-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-south-1", "ap-south-2", "ap-southeast-1", "ap-southeast-2", "ap-southeast-3", "ap-southeast-4", "ca-central-1", "ca-west-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-central-2", "eu-north-1", "eu-south-1", "eu-south-2", "eu-west-1", "eu-west-2", "eu-west-3", "il-central-1", "me-central-1", "me-south-1", "sa-east-1", "us-east-1", "us-east-2", "us-gov-east-1", "us-gov-west-1", "us-west-1", "us-west-2"]
    s3Endpoint String
    Your S3 endpoint URL. Read more here. Default: ""
    s3PathStyleAccess Boolean
    Use path-style access. Default: true
    storageType String
    Default: "S3"; must be "S3"
    accessKeyId string
    The access key ID to access the S3 bucket. Airbyte requires Read and Write permissions to the given bucket. Read more here.
    s3WarehouseUri string
    The warehouse URI for Iceberg.
    secretAccessKey string
    The corresponding secret to the access key ID. Read more here.
    s3BucketRegion string
    The region of the S3 bucket. See here for all region codes. Default: ""; must be one of ["", "af-south-1", "ap-east-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-south-1", "ap-south-2", "ap-southeast-1", "ap-southeast-2", "ap-southeast-3", "ap-southeast-4", "ca-central-1", "ca-west-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-central-2", "eu-north-1", "eu-south-1", "eu-south-2", "eu-west-1", "eu-west-2", "eu-west-3", "il-central-1", "me-central-1", "me-south-1", "sa-east-1", "us-east-1", "us-east-2", "us-gov-east-1", "us-gov-west-1", "us-west-1", "us-west-2"]
    s3Endpoint string
    Your S3 endpoint URL. Read more here. Default: ""
    s3PathStyleAccess boolean
    Use path-style access. Default: true
    storageType string
    Default: "S3"; must be "S3"
    access_key_id str
    The access key ID to access the S3 bucket. Airbyte requires Read and Write permissions to the given bucket. Read more here.
    s3_warehouse_uri str
    The warehouse URI for Iceberg.
    secret_access_key str
    The corresponding secret to the access key ID. Read more here.
    s3_bucket_region str
    The region of the S3 bucket. See here for all region codes. Default: ""; must be one of ["", "af-south-1", "ap-east-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-south-1", "ap-south-2", "ap-southeast-1", "ap-southeast-2", "ap-southeast-3", "ap-southeast-4", "ca-central-1", "ca-west-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-central-2", "eu-north-1", "eu-south-1", "eu-south-2", "eu-west-1", "eu-west-2", "eu-west-3", "il-central-1", "me-central-1", "me-south-1", "sa-east-1", "us-east-1", "us-east-2", "us-gov-east-1", "us-gov-west-1", "us-west-1", "us-west-2"]
    s3_endpoint str
    Your S3 endpoint URL. Read more here. Default: ""
    s3_path_style_access bool
    Use path-style access. Default: true
    storage_type str
    Default: "S3"; must be "S3"
    accessKeyId String
    The access key ID to access the S3 bucket. Airbyte requires Read and Write permissions to the given bucket. Read more here.
    s3WarehouseUri String
    The warehouse URI for Iceberg.
    secretAccessKey String
    The corresponding secret to the access key ID. Read more here.
    s3BucketRegion String
    The region of the S3 bucket. See here for all region codes. Default: ""; must be one of ["", "af-south-1", "ap-east-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-south-1", "ap-south-2", "ap-southeast-1", "ap-southeast-2", "ap-southeast-3", "ap-southeast-4", "ca-central-1", "ca-west-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-central-2", "eu-north-1", "eu-south-1", "eu-south-2", "eu-west-1", "eu-west-2", "eu-west-3", "il-central-1", "me-central-1", "me-south-1", "sa-east-1", "us-east-1", "us-east-2", "us-gov-east-1", "us-gov-west-1", "us-west-1", "us-west-2"]
    s3Endpoint String
    Your S3 endpoint URL. Read more here. Default: ""
    s3PathStyleAccess Boolean
    Use path-style access. Default: true
    storageType String
    Default: "S3"; must be "S3"
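    An S3 storage config sketch under assumed values (the credentials, bucket, and warehouse URI are placeholders). The empty s3Endpoint keeps the default AWS endpoint; a custom endpoint is typically only needed for S3-compatible stores:

    ```yaml
    storageConfig:
      s3:
        storageType: S3
        accessKeyId: ${AWS_ACCESS_KEY_ID}          # placeholder
        secretAccessKey: ${AWS_SECRET_ACCESS_KEY}  # placeholder
        s3WarehouseUri: s3a://my-iceberg-bucket/warehouse  # placeholder
        s3BucketRegion: us-west-2
        s3Endpoint: ""
        s3PathStyleAccess: true
    ```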

    DestinationIcebergConfigurationStorageConfigServerManaged, DestinationIcebergConfigurationStorageConfigServerManagedArgs

    ManagedWarehouseName string
    The name of the managed warehouse
    StorageType string
    Default: "MANAGED"; must be "MANAGED"
    ManagedWarehouseName string
    The name of the managed warehouse
    StorageType string
    Default: "MANAGED"; must be "MANAGED"
    managedWarehouseName String
    The name of the managed warehouse
    storageType String
    Default: "MANAGED"; must be "MANAGED"
    managedWarehouseName string
    The name of the managed warehouse
    storageType string
    Default: "MANAGED"; must be "MANAGED"
    managed_warehouse_name str
    The name of the managed warehouse
    storage_type str
    Default: "MANAGED"; must be "MANAGED"
    managedWarehouseName String
    The name of the managed warehouse
    storageType String
    Default: "MANAGED"; must be "MANAGED"

    DestinationIcebergResourceAllocation, DestinationIcebergResourceAllocationArgs

    Default DestinationIcebergResourceAllocationDefault
    Optional resource requirements to run workers (blank for unbounded allocations).
    JobSpecifics List<DestinationIcebergResourceAllocationJobSpecific>
    Default DestinationIcebergResourceAllocationDefault
    Optional resource requirements to run workers (blank for unbounded allocations).
    JobSpecifics []DestinationIcebergResourceAllocationJobSpecific
    default_ DestinationIcebergResourceAllocationDefault
    Optional resource requirements to run workers (blank for unbounded allocations).
    jobSpecifics List<DestinationIcebergResourceAllocationJobSpecific>
    default DestinationIcebergResourceAllocationDefault
    Optional resource requirements to run workers (blank for unbounded allocations).
    jobSpecifics DestinationIcebergResourceAllocationJobSpecific[]
    default DestinationIcebergResourceAllocationDefault
    Optional resource requirements to run workers (blank for unbounded allocations).
    job_specifics Sequence[DestinationIcebergResourceAllocationJobSpecific]
    default Property Map
    Optional resource requirements to run workers (blank for unbounded allocations).
    jobSpecifics List<Property Map>

    DestinationIcebergResourceAllocationDefault, DestinationIcebergResourceAllocationDefaultArgs

    DestinationIcebergResourceAllocationJobSpecific, DestinationIcebergResourceAllocationJobSpecificArgs

    JobType string
    Enum that describes the different types of jobs that the platform runs. Must be one of ["getspec", "checkconnection", "discoverschema", "sync", "resetconnection", "connection_updater", "replicate"]
    ResourceRequirements DestinationIcebergResourceAllocationJobSpecificResourceRequirements
    Optional resource requirements to run workers (blank for unbounded allocations).
    JobType string
    Enum that describes the different types of jobs that the platform runs. Must be one of ["getspec", "checkconnection", "discoverschema", "sync", "resetconnection", "connection_updater", "replicate"]
    ResourceRequirements DestinationIcebergResourceAllocationJobSpecificResourceRequirements
    Optional resource requirements to run workers (blank for unbounded allocations).
    jobType String
    Enum that describes the different types of jobs that the platform runs. Must be one of ["getspec", "checkconnection", "discoverschema", "sync", "resetconnection", "connection_updater", "replicate"]
    resourceRequirements DestinationIcebergResourceAllocationJobSpecificResourceRequirements
    Optional resource requirements to run workers (blank for unbounded allocations).
    jobType string
    Enum that describes the different types of jobs that the platform runs. Must be one of ["getspec", "checkconnection", "discoverschema", "sync", "resetconnection", "connection_updater", "replicate"]
    resourceRequirements DestinationIcebergResourceAllocationJobSpecificResourceRequirements
    Optional resource requirements to run workers (blank for unbounded allocations).
    job_type str
    Enum that describes the different types of jobs that the platform runs. Must be one of ["getspec", "checkconnection", "discoverschema", "sync", "resetconnection", "connection_updater", "replicate"]
    resource_requirements DestinationIcebergResourceAllocationJobSpecificResourceRequirements
    Optional resource requirements to run workers (blank for unbounded allocations).
    jobType String
    Enum that describes the different types of jobs that the platform runs. Must be one of ["getspec", "checkconnection", "discoverschema", "sync", "resetconnection", "connection_updater", "replicate"]
    resourceRequirements Property Map
    Optional resource requirements to run workers (blank for unbounded allocations).
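    A per-job-type allocation sketch; the nested resource requirements type documents no fields on this page, so it is left blank here for unbounded allocations:

    ```yaml
    resourceAllocation:
      jobSpecifics:
        - jobType: sync
          resourceRequirements: {}   # blank for unbounded allocations
    ```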

    DestinationIcebergResourceAllocationJobSpecificResourceRequirements, DestinationIcebergResourceAllocationJobSpecificResourceRequirementsArgs

    Import

    $ pulumi import airbyte:index/destinationIceberg:DestinationIceberg my_airbyte_destination_iceberg ""
    

    To learn more about importing existing cloud resources, see Importing resources.

    Package Details

    Repository
    airbyte airbytehq/terraform-provider-airbyte
    License
    Notes
    This Pulumi package is based on the airbyte Terraform Provider.